
Thursday, 25 February 2016

NetApp Cluster Mode reading the log files using a browser

First, list the filer's network IPs and the vservers:

cluster600::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster600
            cluster_mgmt up/up    192.168.199.170/24 cluster600-01 e0a     true
cluster600-01
            mgmt1        up/up    192.168.199.171/24 cluster600-01 e0a     true
nfs600
            nfs600_lif1  up/up    192.168.199.180/24 cluster600-01 e0c     true
nfs700
            nfs700_lif1  up/up    192.168.199.181/24 cluster600-01 e0d     true
nfs800
            nfs800_nfs_lif1
                         up/up    192.168.199.182/24 cluster600-01 e0c     true
nfs_test
            nfs_test_lif1
                         up/up    192.168.199.188/24 cluster600-01 e0a     true
6 entries were displayed.


cluster600::>

We first create a user 'logger':

cluster600::> security login create -username logger -application http -authmethod password

Please enter a password for user 'logger':

Please enter it again:

Then enable the web services for it:

cluster600::> vserver services web modify -vserver * -name spi -enabled true

Warning: The service 'spi' depends on: ontapi.  Enabling 'spi' will enable all of its prerequisites.
Do you want to continue? {y|n}: y
2 entries were modified.

cluster600::>


cluster600::> vserver services web access create -name spi -role admin -vserver cluster600
cluster600::> vserver services web access create -name compat  -role admin -vserver cluster600
cluster600::>

Now we can log in from a browser:
https://*cluster_mgmt_ip*/spi/*nodename*/etc/log
In this example:
Cluster management IP: 192.168.199.170
Node name: cluster600-01
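A quick way to assemble the URL for your own cluster. The IP and node name below are the ones from this example, so substitute your own; the commented `curl` line is an untested sketch that assumes the 'logger' user created above:

```shell
# Values from this example; replace with your own cluster's details.
CLUSTER_MGMT_IP="192.168.199.170"
NODE="cluster600-01"
URL="https://${CLUSTER_MGMT_IP}/spi/${NODE}/etc/log"
echo "$URL"
# Untested sketch: list the log directory non-interactively as 'logger'
#   curl -k -u logger "$URL/"
```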


Thursday, 18 February 2016

NetApp clustermode snapmirror loadshare



By default, all client requests for access to a volume in an LS mirror set are granted read-only access. Read-write access is granted only through a special administrative mount point: the path that servers requiring read-write access into the LS mirror set must mount. All other clients get read-only access. When you access the admin share for write access, you are really accessing the source volume. After changes are made to the source volume, they must be replicated to the rest of the volumes in the LS mirror set, either with the snapmirror update-ls-set command or through a scheduled update.

Example:
1. Create two 20MB volumes to serve as load-share destinations.
2. Create two load-share mirrors on vserver grvsnfs1.
3. Initialize the LS mirror set.
4. Create a schedule with a 1-minute interval.
5. Create two mount points on Linux and mount the LS mirrors.
6. Create a new volume in the vserver and watch the 1-minute update delay.

1.
cluster600::> vol create -vserver grvsnfs1 -volume rootls -aggregate gr01_aggr1 -size 20MB -type DP
cluster600::> vol create -vserver grvsnfs1 -volume rootls1 -aggregate gr02_aggr1 -size 20MB -type DP

2.
cluster600::> snapmirror create -source-cluster gr -source-vserver grvsnfs1 -source-volume root_vol -destination-cluster gr -destination-vserver grvsnfs1 -destination-volume rootls -type ls
cluster600::> snapmirror create -source-cluster gr -source-vserver grvsnfs1 -source-volume root_vol -destination-cluster gr -destination-vserver grvsnfs1 -destination-volume rootls1 -type ls

3.
cluster600::> snapmirror initialize-ls-set -source-path gr://grvsnfs1/root_vol -foreground true

4.
cluster600::> job schedule interval create -name 1minute -minutes 1
cluster600::> snapmirror modify -destination-path gr://grvsnfs1/rootls1 -schedule 1minute

* Assuming NFS is set up correctly and there are two LIFs, one on each of the two nodes,
you can mount the load-share mirror on both interfaces.
LIFs: 192.168.4.85
      192.168.4.88

5.
linux: mkdir /mnt/rootls
mkdir /mnt/rootls1
mount 192.168.4.85:/ /mnt/rootls
mount 192.168.4.88:/ /mnt/rootls1

6.
cluster600::> vol create -vserver grvsnfs1 -volume vol4 -aggregate gr02_aggr1 -size 100m -state online -type RW

cluster600::> vol mount -vserver grvsnfs1 -volume vol4 -junction-path /vol4 (volume mount)

Notice: Volume vol4 now has a mount point from volume root_vol. The load sharing (LS) mirrors of volume
root_vol are scheduled to be updated at 4/4/2013 15:39:53. Volume vol4 will not be visible in the
global namespace until the LS mirrors of volume root_vol have been updated.

linux:
while true
do
    ls /mnt/rootls; ls /mnt/rootls1
    sleep 10
done

*vol4 will pop up after about a minute.
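The watch loop above can also be written so it exits as soon as the volume appears. Here is a self-contained sketch: the background job that creates the directory after a few seconds stands in for the scheduled LS update, and WATCH_PATH stands in for /mnt/rootls/vol4.

```shell
# Simulate the delayed appearance of vol4: a background job "publishes"
# the path after 3 seconds, standing in for the scheduled LS update.
WATCH_PATH="$(mktemp -d)/vol4"
( sleep 3; mkdir -p "$WATCH_PATH" ) &
waited=0
until [ -d "$WATCH_PATH" ]; do
    sleep 1
    waited=$((waited + 1))
    [ "$waited" -ge 60 ] && break   # give up after a minute
done
echo "vol4 visible after ~${waited}s"
```

On a real system, point WATCH_PATH at /mnt/rootls/vol4 and raise the timeout to comfortably cover the 1-minute schedule.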

Tuesday, 16 February 2016

Understanding the difference between 7-mode and c-dot from a systemshell point of view

There is a huge difference in the way the ONTAP shell communicates with the layer beneath. In 7-mode, all the configuration files are stored in the /etc directory, for example /etc/rc, /etc/exports, /etc/hosts.

In cluster mode the files are still present, but they are not really used. Instead, the configuration is managed by the RDB (replicated database).


A replication ring is a set of identical processes running on all nodes in the cluster.
The basis of clustering is the replicated database (RDB). An instance of the RDB is maintained on each node in a cluster. A number of processes use the RDB to ensure consistent data across the cluster: the management application (mgmt), volume location database (vldb), virtual-interface manager (vifmgr), and SAN management daemon (bcomd).
For instance, the vldb replication ring for a given cluster consists of all instances of vldb running in the cluster.
RDB replication requires healthy cluster links among all nodes in the cluster. If the cluster network fails in whole or in part, file services can become unavailable. The cluster ring show command displays the status of the replication rings and can assist with troubleshooting.
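Since ring health matters so much, it is handy to pull the unhealthy units out of saved `cluster ring show` output. A small sketch (the sample output is abbreviated from the troubleshooting output shown elsewhere on this blog; /tmp/ring.txt is just a scratch file):

```shell
# Saved `cluster ring show` output, abbreviated.
cat > /tmp/ring.txt <<'EOF'
Node      UnitName Epoch  DB Epoch DB Trnxs Master  Online
--------- -------- ------ -------- -------- ------- ---------
cl1-01    mgmt     6      6        699      cl1-01  master
cl1-02    mgmt     0      6        692      -       offline
cl1-02    vldb     7      7        84       cl1-01  secondary
EOF
# Print node and unit for every ring member that is offline.
awk '$NF == "offline" { print $1, $2 }' /tmp/ring.txt
```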
Let's jump into the systemshell, and from there into the bash shell, and see exactly how it works.
PLEASE DON'T TRY THIS IN PRODUCTION UNLESS YOU KNOW WHAT YOU ARE DOING.
cluster600::*> systemshell
  (system node systemshell)

Data ONTAP/amd64 (cluster600-01) (pts/2)
login: diag
Password:
Last login: Fri Feb 12 11:33:17 from localhost
Warning:  The system shell provides access to low-level
diagnostic tools that can cause irreparable damage to
the system if not used properly.  Use this environment
only when directed to do so by support personnel.

cluster600-01%
cluster600-01%
cluster600-01% sudo bash
bash-3.2#
bash-3.2#
bash-3.2# cd /mroot/etc/cluster_config/rdb
bash-3.2# ls
Bcom            Management      VLDB            VifMgr
bash-3.2#

These four directories contain the databases that make up the RDB. Can we change them? Yes, we can, but it takes a lot of understanding of how they work. I have tried it and it works, but it is a long process; the better option is to use the cluster command line, which is the intended way. In disaster scenarios, though, this is the way to go.
In the next blog I'll walk through how the junction path works and what the difference is between 7-mode and c-dot.
PLEASE WRITE BACK IF THIS WAS USEFUL.

Monday, 15 February 2016

IP address of an initiator in cluster-mode 8.3

To view the IP addresses in an iSCSI session:
cluster600::qos> iscsi session show -vserver iscsi -t
Vserver: iscsi
Target Portal Group: o9oi
Target Session ID: 2
Connection ID: 1
Connection State: Full_Feature_Phase
Connection Has session: true
Logical interface: o9oi
Target Portal Group Tag: 1027
Local IP Address: 192.168.4.206
Local TCP Port: 3260
Authentication Type: none
Data Digest Enabled: false
Header Digest Enabled: false
TCP/IP Recv Size: 131400
Initiator Max Recv Data Length: 65536
Remote IP address: 192.168.4.245
Remote TCP Port: 55063
Target Max Recv Data Length: 65536
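If you only want the initiator's address, you can pull it straight out of saved session output. A sketch using the output from above (/tmp/iscsi_session.txt is just a scratch copy):

```shell
# Saved `iscsi session show` output, abbreviated to the address lines.
cat > /tmp/iscsi_session.txt <<'EOF'
Local IP Address: 192.168.4.206
Local TCP Port: 3260
Remote IP address: 192.168.4.245
Remote TCP Port: 55063
EOF
# The "Remote IP address" line is the initiator's address.
sed -n 's/^Remote IP address: //p' /tmp/iscsi_session.txt
```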

Restoring a vm from a Netapp Snapmirror (DP/XDP) destination.


1. List the available snapshots on mirrordestination.

cl1::> snap show -vserver nfs1 -volume linvolmir
                                                                    ---Blocks---
Vserver  Volume     Snapshot                                        State     Size Total% Used%
-------- ---------- ----------------------------------------------- ----- -------- ------ -----
nfs1     linvolmir
                    snapmirror.b640aac9-d77a-11e3-9cae-123478563412_2147484711.2015-10-13_075517
                                                                    valid   2.26MB     0%    0%
                    VeeamSourceSnapshot_linuxvm.2015-10-13_1330
                                                                    valid       0B     0%    0%
2 entries were displayed.

2. Create a flexclone.

cl1::> vol clone create -vserver nfs1 -flexclone clonedmir -junction-path /clonedmir -parent-volume linvolmir -parent-snapshot VeeamSourceSnapshot_linuxvm.2015-10-13_1330
(volume clone create)
[Job 392] Job succeeded: Successful

3. Connect the correct export-policy to the new volume.

cl1::> vol modify -vserver nfs1 -volume clonedmir -policy data

The rest is done on the ESXi server.

4. Mount the datastore to ESXi.

~ # esxcfg-nas -a clonedmir -o 192.168.4.103 -s /clonedmir
Connecting to NAS volume: clonedmir
clonedmir created and connected.

5. Register the VM and note the ID of the VM.

~ # vim-cmd solo/registervm /vmfs/volumes/clonedmir/linux/linux.vmx
174

6. Power on the VM.

~ # vim-cmd vmsvc/power.on 174
Powering on VM:

7. Your prompt will not return until you answer the question about moving or copying.
Open a new session to ESXi, and list the question.

~ # vim-cmd vmsvc/message 174
Virtual machine message _vmx1:
msg.uuid.altered:This virtual machine might have been moved or copied.
In order to configure certain management and networking features, VMware ESX needs to know if this virtual machine was moved or copied.

If you don't know, answer "I copied it".
0. Cancel (Cancel)
1. I moved it (I moved it)
2. I copied it (I copied it) [default]
Answer the question.
The VMID is "174", the MessageID is "_vmx1" and the answer to the question is "1".

~ # vim-cmd vmsvc/message 174 _vmx1 1

Now the VM is started fully.
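The answer step can also be scripted: extract the message ID and the default choice from the `vim-cmd vmsvc/message` output instead of reading it off the screen. A sketch against the output shown in step 7 (the VM ID 174 in the comment is the one from this example):

```shell
# Saved `vim-cmd vmsvc/message` output from step 7, abbreviated.
cat > /tmp/vmmsg.txt <<'EOF'
Virtual machine message _vmx1:
msg.uuid.altered:This virtual machine might have been moved or copied.
0. Cancel (Cancel)
1. I moved it (I moved it)
2. I copied it (I copied it) [default]
EOF
# Pull out the message id and the number of the default answer.
MSGID=$(sed -n 's/^Virtual machine message \(.*\):$/\1/p' /tmp/vmmsg.txt)
DEFAULT=$(sed -n 's/^\([0-9]\)\. .*\[default\]$/\1/p' /tmp/vmmsg.txt)
echo "$MSGID $DEFAULT"
# The scripted reply would then be (untested sketch, VM id from this example):
#   vim-cmd vmsvc/message 174 "$MSGID" "$DEFAULT"
```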

Just the commands

cl1::> snap show -vserver nfs1 -volume linvolmir
cl1::> vol clone create -vserver nfs1 -flexclone clonedmir -junction-path /clonedmir -parent-volume linvolmir -parent-snapshot VeeamSourceSnapshot_linuxvm.2015-10-13_1330
cl1::> vol modify -vserver nfs1 -volume clonedmir -policy data
~ # esxcfg-nas -a clonedmir -o 192.168.4.103 -s /clonedmir
~ # vim-cmd solo/registervm /vmfs/volumes/clonedmir/linux/linux.vmx
~ # vim-cmd vmsvc/power.on 174
~ # vim-cmd vmsvc/message 174
~ # vim-cmd vmsvc/message 174 _vmx1 1

Let me know if you have any questions. Thanks.

Sunday, 14 February 2016

Netapp monitoring solution from scratch




I created a NetApp monitoring solution on a UNIX machine and have uploaded the static files to a free PHP server so that you can see how it looks.
Since it is only a file server hosting the uploaded files, it is not exactly the same as the live version on the UNIX machine, but you can still get a feel for it.

Advantages:
  • No third-party tools used (no Nagios, PRTG, or anything else).
  • Built from scratch.
  • Fully customizable.
  • Any required parameter can be monitored.
  • The complete look and feel can be changed to the user's needs.
  • Easy to use.



If anyone is interested to know more about it, feel free to drop me a message.

Example Link:
This one is for cluster mode; I have done a similar one for 7-mode as well. In the live solution it even monitors the NFS and CIFS load from the hosts connected to the filer, along with detailed host load analysis.
nfs600_vol1 is the only volume that was used to write data from a server, so you will see load only on this volume. This is the test NetApp simulator I used for this solution.

http://anirvan_lahiri.net23.net/monitoring/monitor.php

Friday, 12 February 2016

NetApp Clustermode Qtree Quota


1. vol create -vserver vs-nfs -volume nfsvol3 -aggregate aggrn1 -size 300m
2. vol mount -vserver vs-nfs -volume nfsvol3 -junction-path /nfsvol3
3. export-policy create -vserver vs-nfs -policyname nfspol
4. export-policy rule create -vserver vs-nfs -policyname nfspol -clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any
5. qtree create -vserver vs-nfs -volume nfsvol3 -qtree q1
6. quota policy create -vserver vs-nfs -policy-name nfs3pol
7. quota policy rule create -vserver vs-nfs -policy-name nfs3pol -volume nfsvol3 -type tree -target q1 -disk-limit 20m
8. vserver modify -vserver vs-nfs -quota-policy nfs3pol
9. volume quota on -vserver vs-nfs -volume nfsvol3

Complete NetApp monitoring via UNIX scripting

Not sure if anyone is interested, but I have built a solution that monitors NetApp completely (7-mode and c-dot).
No third-party tools are used.
All you need is a UNIX machine, and that's it. It is completely customizable: the parameters, the sections, and the logo on each graph can all be changed.
Anything measurable in NetApp can be monitored, based on requirements.

If you are interested to know more, feel free to write back. Thanks.


Some Screenshots for your reference (this is for clustermode 8.2):



NetApp: map the root user



vservername : v3
cl1::vserver export-policy rule*> vserver services unix-user create -vserver v3 -user pcuser -id 65534 -primary-gid 65534
cl1::vserver export-policy rule*> vserver service unix-user create -vserver v3 -user root -id 0 -primary-gid 1
cl1::vserver export-policy rule*> vserver services unix-user create -vserver v3 -user administrator -id 10 -primary-gid 0
cl1::vserver export-policy rule*> vserver service unix-user show -vserver v3
               User            User     Group     Full
Vserver        Name            ID       ID        Name
-------------- --------------- -------- --------- --------------------------------
v3             administrator   10       0         -
v3             pcuser          65534    65534     -
v3             root            0        1         -
3 entries were displayed.
cl1::vserver export-policy rule*> vserver services unix-group create -vserver v3 -name root 0
cl1::vserver export-policy rule*> vserver services unix-group create -vserver v3 -name daemon 1
cl1::vserver export-policy rule*> vserver services unix-group show -vserver v3
Vserver        Name                ID
-------------- ------------------- ----------
v3             daemon              1
v3             root                0
2 entries were displayed.
cl1::vserver export-policy rule*> vserver name-mapping create -vserver v3 -direction win-unix -position 1 -pattern netapp\\administrator -replacement root
cl1::vserver export-policy rule*> vserver name-mapping create -vserver v3 -direction unix-win -position 1 -pattern root -replacement netapp\\administrator
cl1::vserver export-policy rule*> vserver name-mapping show
Vserver        Direction Position
-------------- --------- --------
v3             win-unix  1        Pattern: netapp\\administrator
                                  Replacement: root
v3             unix-win  1        Pattern: root
                                  Replacement: netapp\\administrator
2 entries were displayed.

NetApp Cluster Mode SFO.





In Cluster Mode, when a failover or takeover has taken place, the root aggregate
of the partner node is owned by the surviving partner.
How do you get to the root volume on the partner's root aggregate?
1. Log in to the systemshell.
2. Run the command 'mount_partner'.
Note: The root volume of the partner is then mounted on /partner.

Understanding NetApp Cluster mode LIFS

1. create a lif
net int create -vserver nfs1 -lif tester -role data -data-protocol cifs,nfs,fcache -home-node cl1-01 -home-port e0c -address 1.1.1.1 -netmask 255.0.0.0 -status-admin up
2. go to diag mode
set diag
3. view the owner of the new lif and delete the owner of the new lif
net int ids show -owner nfs1
net int ids delete -owner nfs1 -name tester
net int ids show -owner nfs1
4. run net int show and see that the lif is not there.
net int show
5. try to create the same lif again.
net int create -vserver nfs1 -lif tester -role data -data-protocol cifs,nfs,fcache -home-node cl1-01 -home-port e0c -address 1.1.1.1 -netmask 255.0.0.0 -status-admin up
(this will fail because the lif is still there, but has no owner)
6. debug the vifmgr table
debug smdb table vifmgr_virtual_interface show -role data -fields lif-name,lif-id
(this will show you the node, the lif-id and the lif-name)
7. using the lif-id from the previous output, delete the lif entry.
debug smdb table vifmgr_virtual_interface delete -node cl1-01 -lif-id 1030
8. see that the lif is gone.
debug smdb table vifmgr_virtual_interface show -role data -fields lif-name,lif-id
9. create the lif.
net int create -vserver nfs1 -lif tester -role data -data-protocol cifs,nfs,fcache -home-node cl1-01 -home-port e0c -address 1.1.1.1 -netmask 255.0.0.0 -status-admin up

Cluster Mode mhost troubleshooting

If you need any help, feel free to ask.
1. go to the systemshell
set diag
systemshell -node cl1-01
2. unmount mroot
cd /etc
./netapp_mroot_unmount
logout
3. run cluster show a couple of times and see that health is false
cluster show
4. run cluster ring show to see that M-host is offline
cluster ring show
Node      UnitName Epoch    DB Epoch DB Trnxs Master    Online
--------- -------- -------- -------- -------- --------- ---------
cl1-01    mgmt     6        6        699      cl1-01    master
cl1-01    vldb     7        7        84       cl1-01    master
cl1-01    vifmgr   9        9        20       cl1-01    master
cl1-01    bcomd    7        7        22       cl1-01    master
cl1-02    mgmt     0        6        692      -         offline
cl1-02    vldb     7        7        84       cl1-01    secondary
cl1-02    vifmgr   9        9        20       cl1-01    secondary
cl1-02    bcomd    7        7        22       cl1-01    secondary
5. try to create a volume and see that the status of the aggregate
cannot be determined if you pick the aggregate from the broken M-host.
6. now vldb will also be offline.
7. remount mroot by starting mgwd from the systemshell
set diag
systemshell -node cl1-01
/sbin/mgwd -z &
8. when you run cluster ring show, it should show vldb offline
cl1::*> cluster ring show
Node      UnitName Epoch    DB Epoch DB Trnxs Master    Online
--------- -------- -------- -------- -------- --------- ---------
cl1-01    mgmt     6        6        738      cl1-01    master
cl1-01    vldb     7        7        87       cl1-01    master
cl1-01    vifmgr   9        9        24       cl1-01    master
cl1-01    bcomd    7        7        22       cl1-01    master
cl1-02    mgmt     6        6        738      cl1-01    secondary
cl1-02    vldb     0        7        84       -         offline
cl1-02    vifmgr   0        9        20       -         offline
cl1-02    bcomd    7        7        22       cl1-01    secondary
Notice that vifmgr has gone offline as well.
9. start vldb by running spmctl -s -h vldb
or by running /sbin/vldb
In this case, do the same for vifmgr.
Please leave your comments, that would be helpful.

NetApp clustermode: convert a SnapMirror to a SnapVault


Steps:
  • Break the data protection mirror relationship by using the snapmirror break command. The relationship is broken and the disaster protection volume becomes a read-write volume.
  • Delete the existing data protection mirror relationship, if one exists, by using the snapmirror delete command.
  • Remove the relationship information from the source SVM by using the snapmirror release command. (This also deletes the Data ONTAP created Snapshot copies from the source volume.)
  • Create a SnapVault relationship between the primary volume and the read-write volume by using the snapmirror create command with the -type XDP parameter.
  • Convert the destination volume from a read-write volume to a SnapVault volume and establish the SnapVault relationship by using the snapmirror resync command. (Warning: all data newer than the snapmirror.xxxxxx Snapshot copy will be lost. Also, the SnapVault destination should not be the source of another SnapVault relationship.)
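The steps above can be sketched as a dry-run script. The source and destination paths are hypothetical, and the generated commands are only printed, not executed; to run them for real, issue them from the clustershell (snapmirror release goes to the source cluster):

```shell
# Hypothetical paths; substitute your own vserver:volume pairs.
SRC="vs1:vol1"
DST="vs2:vol1_dst"
# Assemble the conversion sequence in order; this script only prints it.
CMDS="snapmirror break -destination-path $DST
snapmirror delete -destination-path $DST
snapmirror release -source-path $SRC -destination-path $DST
snapmirror create -source-path $SRC -destination-path $DST -type XDP
snapmirror resync -destination-path $DST"
echo "$CMDS"
```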


Please leave your comments; that would be helpful.

systemshell Commands for NetApp Cluster mode

ONLY USE IF YOU KNOW WHAT YOU ARE DOING
spmctl, rdb_dump, kenv
cl1-01% spmctl
List of managed processes. (Error=0)
Exec=/usr/sbin/ucoreman --log;Handle=069264cf-377e-4f06-84c2-05fc01b4ee5f;Pid=906;State=Running
Exec=/usr/bin/raid_lm;Handle=0a3fbbf9-da62-4104-8e57-846977ad6a3c;Pid=1922;State=Running
Exec=/sbin/schmd;Handle=1120522e-9e6f-4b75-b67d-e7a8684ec164;Pid=1658;State=Running
Exec=/sbin/vifmgr -n;Handle=17bde8b6-7fbb-4157-9e70-5365ac1e6e58;Pid=1797;State=Running
Exec=/usr/sbin/mhostexecd -D;Handle=2d16144e-d733-49bb-a94b-cb40de81f205;Pid=1844;State=Running
Exec=/sbin/nchmd;Handle=3909a2b9-37c7-4009-bf7b-914892d4af8d;Pid=1662;State=Running
Exec=/usr/sbin/time_state -l;Handle=51329b28-5a89-4e7a-8786-ae17ea355089;Pid=902;State=Running
Exec=/sbin/notifyd -n;Handle=56548532-c334-4633-8cd8-77ef97682d3d;Pid=829;State=Running
Exec=/sbin/bcomd;Handle=7feb285e-aba3-44fb-8f3b-daadb7e702b9;Pid=1801;State=Running
Exec=/usr/sbin/httpdmgr;Handle=86fadb47-8fcd-47c4-bf16-98dc7d10e416;Pid=1621;State=Running
Exec=/sbin/coresegd -m;Handle=99e1bb86-fce4-413e-b66b-21348215d253;Pid=1683;State=Running
Exec=/sbin/shmd;Handle=a534ba91-c0d1-44a6-b061-e280ba8b62e5;Pid=1654;State=Running
Exec=/sbin/mntsvc -n;Handle=a71c0340-e8e1-4bab-a7f6-25ae97d4da5b;Pid=1816;State=Running
Exec=/sbin/ndmpd;Handle=c0c9ac97-8eb1-4d8b-a1c9-15546c4b464d;Pid=1645;State=Running
Exec=/sbin/cmd;Handle=c2b7cc49-d650-4cfe-b167-95c946ca0abb;Pid=1803;State=Running
Exec=/sbin/mgwd -z;Handle=c6b932ea-acc2-46f4-a8e7-83774fb03d99;Pid=939;State=Running
Exec=/usr/sbin/named -f -c /tmp/named.conf -S 1024;Handle=ccb0609e-d26e-40a9-9e40-15569206ae3a;Pid=1789;State=Running
Exec=/sbin/vldb -n;Handle=cd4f8a3c-126a-4895-bfb9-5b67fdbea320;Pid=1799;State=Running
Exec=/sbin/mdnsd -z;Handle=cd9fc4fa-4f45-4d09-9f02-03a193229618;Pid=1672;State=Running
Exec=/usr/bin/sktlogd -m;Handle=df663bec-40ed-4278-9d54-220a52d905b9;Pid=1813;State=Running
Exec=/sbin/secd;Handle=fe60b0f2-1c9b-4e32-bfae-640fc9c36310;Pid=1778;State=Running
cl1-01% rdb_dump
Local time Wed Dec 12 15:58:47 2012
RDB Unit "Management" (id 1) on host "cl1-01" (site 1000)
At Wed Dec 12 15:58:47 2012.
App Version: <1,1>, RDB Version: <2,0>, DBSet Version: <2,220>
Online Status:
Local 1000 is Master (epoch: 2, master: 1000)
1. id 1000, state: online *** Master (local)
RDB Unit "VifMgr" (id 2) on host "cl1-01" (site 1000)
At Wed Dec 12 15:58:47 2012.
App Version: <1,1>, RDB Version: <2,0>, DBSet Version: <2,20>
Online Status:
Local 1000 is Master (epoch: 2, master: 1000)
1. id 1000, state: online *** Master (local)
RDB Unit "VLDB" (id 0) on host "cl1-01" (site 1000)
At Wed Dec 12 15:58:47 2012.
App Version: <1,1>, RDB Version: <2,0>, DBSet Version: <2,69>
Online Status:
Local 1000 is Master (epoch: 2, master: 1000)
1. id 1000, state: online *** Master (local)
RDB Unit "Bcom" (id 5) on host "cl1-01" (site 1000)
At Wed Dec 12 15:58:47 2012.
App Version: <1,1>, RDB Version: <2,0>, DBSet Version: <2,13>
Online Status:
Local 1000 is Master (epoch: 2, master: 1000)
1. id 1000, state: online *** Master (local)
-------------
cl1-01% kenv
AUTOBOOT="true"
BIOS_VERSION="245"
BOARDNAME="VMWARE"
BOOTED_FROM="PRIMARY"
BOOT_FILE="x86_64/freebsd/image1/kernel"
LINES="24"
LOADER_VERSION="1.0"
MOBO_REV="ZZ"
MOBO_SERIAL_NUM="999999"
NETAPP_BACKUP_KERNEL_URL="x86_64/freebsd/image2/kernel"
NETAPP_PRIMARY_KERNEL_URL="x86_64/freebsd/image1/kernel"
SYS_MODEL="SIMBOX"
SYS_REV="ZZ"
SYS_SERIAL_NUM="4061490-31-8"
acpi_load="YES"
bootarg.bsdportname="e0c"
bootarg.dblade.wafl_use_delete_log="false"
bootarg.init.boot_clustered="true"
bootarg.init.cfdevice="/dev/ad0s2"
bootarg.init.clearvarfsnvram="false"
bootarg.init.dhcp.disable="true"
bootarg.mgwd.autoconf.disable="true"
bootarg.mgwd.scsi_blade_uuid="8231ca39-4371-11e2-b1a0-31d956941ea5"
bootarg.new_varfs="false"
bootarg.nvram.sysid="4061490318"
bootarg.setup.auto.internal="true"
bootarg.sim="true"
bootarg.sim.vardev="/dev/ad1s1"
bootarg.sim.vdev="/dev/ad3"
bootarg.sim.vdevinit="false"
bootarg.srm.disk.san_reservations="true"
bootarg.srm.nvram.setup_sim="true"
bootarg.srm.nvram.vnvram="true"
bootarg.vm="true"
bootarg.vm.data_diskmodel="vha"
bootarg.vm.no_poweroff_on_halt="true"
bootarg.vm.rapidsavecore="false"
bootarg.vm.run_vmtools="true"
bootarg.vm.sim="true"
bootarg.vm.sim.vdev="/dev/ad3"
bootarg.vm.sim.vdevinit="false"
bootarg.vm.sys_diskmodel="standard"
bootarg.vm.vardev="/dev/ad1s1"
bootarg.vm.varfs="false"
bootarg.vnvram.size="32"
bootfile="kernel"
comconsole_speed="9600"
console="vidconsole"
currdev="disk1s2"
fc-ports-valid="true"
fmmbx-lkg-0="ED993E2811E2439134129CAD123456780000008C000000004154454E20205050312D44564D3030305A462D423032352D353838313030353200000000000000004154454E20205050312D44564D3030305A462D423032352D35383831313035320000000000000000"
fmmbx-lkg-0b="ED993E2811E2439134129CAD123456780000008C000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
interpret="OK"
kernel="kernel"
kernel_options=""
kernelname="x86_64/freebsd/image1/kernel"
last-OS-booted-raid-ver="11"
last-OS-booted-ver="8.1.1X34"
last-OS-booted-wafl-ver="22331"
loader_conf_files="/boot/device.hints /boot/loader.conf /boot/loader.conf.local"
mac_ifoff="NO"
module_path="/boot/modules"
ntap.init.cfdevice="/dev/ad0s2"
nvram_discard="false"
nvram_emulation="true"
prompt="VLOADER>"
setvsimenv="true"
smbios.bios.reldate="07/02/2012"
smbios.bios.vendor="Phoenix Technologies LTD"
smbios.bios.version="6.00"
smbios.chassis.maker="No Enclosure"
smbios.chassis.serial="None"
smbios.chassis.tag="No Asset Tag"
smbios.chassis.version="N/A"
smbios.planar.maker="Intel Corporation"
smbios.planar.product="440BX Desktop Reference Platform"
smbios.planar.serial="None"
smbios.planar.version="None"
smbios.socket.enabled="2"
smbios.socket.populated="2"
smbios.system.maker="VMware, Inc."
smbios.system.product="VMware Virtual Platform"
smbios.system.serial="VMware-56 4d fb 8e 66 a8 07 a9-62 a9 a6 2e 7f 51 41 45"
smbios.system.uuid="564dfb8e-66a8-07a9-62a9-a62e7f514145"
smbios.system.version="None"
sysvar.system_id="0"
wafl-disable-mbuf-backed-buffers?="TRUE"
hint.vga.0.at="isa"
hint.sc.0.at="isa"
hint.sc.0.flags="0x100"
hint.atkbdc.0.at="isa"
hint.atkbdc.0.port="0x060"
hint.atkbd.0.at="atkbdc"
hint.atkbd.0.irq="1"
hint.uart.0.at="isa"
hint.uart.0.port="0x3F8"
hint.uart.0.flags="0x40"
hint.uart.0.irq="4"
hint.uart.1.at="isa"
hint.uart.1.port="0x2F8"
hint.uart.1.flags="0x20"
hint.uart.1.irq="3"
bootarg.init.booterr="0"
bootarg.init.cf_mounted="true"
bootarg.init.rootimage="/cfcard/x86_64/freebsd/image1/rootfs.img"
bootarg.init.cfimagebase="/cfcard/x86_64/freebsd"
ntap.init.kernelname="/cfcard/x86_64/freebsd/image1/kernel"
bootarg.init.defaultimage="image1"
bootarg.init.kldload_mem_size="200413184"
bootarg.init.last_boot="true"
bootarg.notifyd.optout="false"
sysvar.ngsh_remote_ip=""
bootarg.mgwd.nblade_uuid="a13ce2d2-4371-11e2-8bf5-f33e01db9e8d"
bootarg.mgwd.spinvfs_uuid="a13d1922-4371-11e2-9461-6d02405690a9"
bootarg.bootmenu.root_dsid="1"
bootarg.bootmenu.root_msid="2147483649"
bootarg.bootmenu.root_uuid="ee6ee1f3-4391-11e2-ad9c-123478563412"
bootarg.bootmenu.node_uuid="a13d3f5f-4371-11e2-8c70-fdb3cc539f73"
bootarg.prevent_sendhome="false"
bootarg.init.boot_mode="normal"
bootarg.dblade.root_volume.local_dsid="0x723a34a9"
bootarg.dblade.root_volume.local_uuid="ee6ee1f3-4391-11e2-ad9c-123478563412"
bootarg.dblade.root_volume.local_name="vol0"
bootarg.dblade.root_volume.local_aggr_uuid="ec807eda-4391-11e2-ad9c-123478563412"
bootarg.dblade.root_volume.local_event="NORMAL"
bootarg.dblade.root_volume.local_size="808_MB"
bootarg.dblade.root_volume.local_space="691_MB"
REBOOT_REASON="REBOOT_UNKNOWN"
bootarg.mgwd.mroot_found="true"
sysvar.init.populate_mroot_late_done="true"
bootarg.mgwd.cluster_uuid="2770419f-438c-11e2-ad9d-123478563412"
bootarg.mgwd.cluster_name="cl1"
bootarg.mgwd.booted="true"
bootarg.mgwd.movestatus.sampling_int="20"
bootarg.mgwd.movestatus.cleanup_int="288"
bootarg.mgwd.movestatus.cleanup_now="0"
cl1-01%

Clustermode mroot destroyed


HA pair:
I logged into the systemshell and completely emptied mroot on both nodes, then rebooted.
During boot, an mroot.tgz is extracted from /tmp and the new mroot is
regenerated.
You will have to set a new root password (boot menu) and a new diag password (security login password).


Please leave your comments for me to improve.

clustermode recovery mroot (only use if you know what you are doing)

The backups are in /mroot/etc/backups/config, and the RDB environment is in /mroot/etc/cluster_config/.
On the surviving node, I tarred the DBs into /mroot/etc/cluster_config/tarred and copied that to a third location.
The configuration backup did not restore the DBs, and the directories remained empty.
So I copied the tar file from the third location to the cluster_config directory and untarred it. After a reboot, the node functioned properly again.
1. on surviving node:
cd /mroot/etc/cluster_config/rdb
tar cf tarred .
scp tarred root@192.168.1.159:/tmp/
2. on broken node:
boot system and login…
kp-01::system configuration*> backup show
Node Backup Tarball Time Size
--------- ----------------------------------------- ------------------ -----
kp-01 kp-01.daily.2015-04-06.00_10_00.7z 04/06 00:10:00 4.66MB
kp-01 kp-01.daily.2015-04-07.00_10_01.7z 04/07 00:10:01 4.18MB
kp-01 kp-02.daily.2015-04-06.00_10_00.7z 04/06 00:10:00 3.27MB
kp-01 kp-02.daily.2015-04-07.00_10_01.7z 04/07 00:10:01 3.21MB
kp-01 kp.8hour.2015-04-06.18_15_00.7z 04/06 18:15:00 7.35MB
kp-01 kp.8hour.2015-04-07.02_15_04.7z 04/07 02:15:04 7.68MB
kp-01 kp.8hour.2015-04-07.10_15_00.7z 04/07 10:15:00 7.97MB
kp-01 kp.daily.2015-04-05.00_10_00.7z 04/05 00:10:00 7.21MB
kp-01 kp.daily.2015-04-06.00_10_00.7z 04/06 00:10:00 8.11MB
kp-01 kp.daily.2015-04-07.00_10_01.7z 04/07 00:10:01 7.56MB
kp-01 kp.weekly.2015-03-11.10_08_12.7z 03/11 10:08:12 3.26MB
kp-01 kp.weekly.2015-03-17.00_15_00.7z 03/17 00:15:00 6.06MB
kp-01 kp.weekly.2015-04-07.00_15_06.7z 04/07 00:15:06 7.57MB
kp-01::system*> configuration recovery node restore -backup kp.8hour.2015-04-07.02_15_04.7z
3. on broken node
cd /mroot/etc/cluster_config/rdb
scp root@192.168.1.159:/tmp/tarred .
tar xf tarred
sudo reboot

Cluster mode Snapmirror

Source Volume and Destination Volume should have the same Language!
Source Volume is RW type. Destination Volume is DP type.

1. create a schedule (to be used for the updates of the vault relationship)
job schedule cron create -name midday -hour 11 -minute 0
2. create a policy for source vserver to keep snapshots of the volume
snapshot policy create -vserver vs1 -policy vault -enabled true -schedule1 daily -count1 6 -prefix1 daily -snapmirror-label1 daily
3. create a policy for destination vserver
snapmirror policy create -vserver vs2 -policy vs2-vault-policy
4. add a rule to the destination policy to specify the retention
snapmirror policy add-rule -vserver vs2 -policy vs2-vault-policy -snapmirror-label daily -keep 20
5. create peer relationship
vserver peer create -vserver vs1 -peer-vserver vs2 -applications snapmirror
6. create a snapmirror relationship of type XDP
snapmirror create -source-path vs1:vol1 -destination-path vs2:vol12 -type XDP -policy vs2-vault-policy -schedule midday
7. initialize
snapmirror initialize -destination-path vs2:vol12
8. run show
snapmirror show


Please leave your comments.. That would be helpful, Thanks for reading.

Cluster mode Netapp systemshell

Cluster mode gives you a bash systemshell, and on top of that sits the next-generation shell, ngsh (that is what NetApp calls the cluster shell).


  • Log in to the systemshell.
  • Run sudo bash.
  • Then type ngsh.
  • This launches the cluster shell.
  • To confirm, type exit.
  • You are routed back to the systemshell.

cluster600::*> systemshell
  (system node systemshell)

Data ONTAP/amd64 (cluster600-01) (pts/2)

login: diag
Password:
Last login: Wed Feb 10 13:51:44 from localhost


Warning:  The system shell provides access to low-level
diagnostic tools that can cause irreparable damage to
the system if not used properly.  Use this environment
only when directed to do so by support personnel.

cluster600-01%
cluster600-01%
cluster600-01% sudo bash
bash-3.2#
bash-3.2# now we are in the system shell
bash-3.2#
bash-3.2#
bash-3.2# ngsh
cluster600::> exit
Goodbye


bash-3.2# exit
exit
cluster600-01% exit
logout

cluster600::*>



Please leave your comments, and if there is anything I can add, feel free to ask.
