Thursday 25 February 2016

NetApp Systems Manager error 500 permission denied: connect

This is a very common problem for people who use Systems Manager: the login fails with the "error 500 permission denied: connect" message.

The trick to fix it is to first enable httpd admin access:

netapp01> options httpd.admin.enable on

Then try connecting again; you will most probably get a warning message this time.

Now enable TLS. Depending on the version, you might need to run this in advanced mode (priv set advanced):

netapp01> options tls.enable on

This should fix the issue, and you should be able to log in to the NetApp via Systems Manager seamlessly.
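
To double-check afterwards, you can display the two options again (a hedged sketch; the exact output formatting varies by Data ONTAP release):

netapp01> options httpd.admin.enable
httpd.admin.enable           on
netapp01> options tls.enable
tls.enable                   on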







NetApp Cluster Mode reading the log files using a browser

First, a look at the cluster's network interfaces (LIFs) and the Vservers:

cluster600::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster600
            cluster_mgmt up/up    192.168.199.170/24 cluster600-01 e0a     true
cluster600-01
            mgmt1        up/up    192.168.199.171/24 cluster600-01 e0a     true
nfs600
            nfs600_lif1  up/up    192.168.199.180/24 cluster600-01 e0c     true
nfs700
            nfs700_lif1  up/up    192.168.199.181/24 cluster600-01 e0d     true
nfs800
            nfs800_nfs_lif1
                         up/up    192.168.199.182/24 cluster600-01 e0c     true
nfs_test
            nfs_test_lif1
                         up/up    192.168.199.188/24 cluster600-01 e0a     true
6 entries were displayed.


cluster600::>

We first create a user 'logger' for the HTTP application:

cluster600::> security login create -username logger -application http -authmethod password

Please enter a password for user 'logger':

Please enter it again:

Then enable the 'spi' web service for it:

cluster600::> vserver services web modify -vserver * -name spi -enabled true

Warning: The service 'spi' depends on: ontapi.  Enabling 'spi' will enable all of its prerequisites.
Do you want to continue? {y|n}: y
2 entries were modified.

cluster600::>


cluster600::> vserver services web access create -name spi -role admin -vserver cluster600
cluster600::> vserver services web access create -name compat  -role admin -vserver cluster600
cluster600::>

Now we can log in from a browser:
https://*cluster_mgmt_ip*/spi/*nodename*/etc/log
In this example:
Cluster management IP: 192.168.199.170 (the cluster_mgmt LIF shown above)
Node name: cluster600-01
So the URL becomes https://192.168.199.170/spi/cluster600-01/etc/log
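
If you prefer the command line, something along these lines should also work (a hedged sketch, not from the original post; it assumes the 'logger' password you set above and uses -k because the filer certificate is usually self-signed):

$ curl -k -u logger https://192.168.199.170/spi/cluster600-01/etc/log/
Enter host password for user 'logger':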


Tuesday 23 February 2016

Cracking the NetApp 7 mode systemshell (c-shell) part 1

This was a request from one of the readers of my blog for a deep dive into 7-mode.

Logging into system shell:

netapp01>
netapp01> priv set diag
Warning: These diagnostic commands are for use by NetApp
         personnel only.
netapp01*> systemshell

Data ONTAP/amd64 (netapp01) (pts/0)

login: diag
Password:
Last login: Tue Feb 23 13:17:04 from localhost


Warning:  The system shell provides access to low-level
diagnostic tools that can cause irreparable damage to
the system if not used properly.  Use this environment
only when directed to do so by support personnel.

netapp01% 

For some reason, the hidden diagnostic user is named diaguser but is invoked as diag.
When we log in to the systemshell, we land in a C shell (csh) with a user ID of 1002 and a home directory of /var/home/diag. Some useful aliases for your reference:
bash-3.2# exit
netapp01% alias
h       (history 25)
j       (jobs -l)
la      (ls -a)
lf      (ls -FA)
ll      (ls -lA)

Unfortunately, logging into the systemshell as user diag does not give you root privileges.
So how do you become root? Quite easily, as it turns out. The Bash shell exists at /usr/bin/bash and is owned by root, so invoking sudo bash changes your ID to 0, i.e. root. Note that no man pages are available in either of these shells.
By the way, you could also have entered sudo /bin/sh to use a Bourne shell instead, but then you would not have command completion or command history.
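
A quick way to confirm the privilege change (a hedged sketch, not part of the original capture) is to run id after sudo bash; it should report uid 0:

netapp01% sudo bash
bash-3.2# id
uid=0(root) gid=0(wheel) ...
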
Here are the contents of /etc/sudoers:
netapp01%
netapp01% sudo bash
bash-3.2#
bash-3.2#

bash-3.2# cat /etc/sudoers
# sudoers file.
#
# This file MUST be edited with the 'visudo' command as root.
# Failure to use 'visudo' may result in syntax or file permission errors
# that prevent sudo from running.
#
# See the sudoers man page for the details on how to write a sudoers file.
#

# Host alias specification

# User alias specification

# Cmnd alias specification

# Defaults specification
# Uncomment if needed to preserve environmental variables related to the
# FreeBSD pkg_* utilities.
#Defaults       env_keep += "PKG_PATH PKG_DBDIR PKG_TMPDIR TMPDIR PACKAGEROOT PACKAGESITE PKGDIR"

# Uncomment if needed to preserve environmental variables related to
# portupgrade. (portupgrade uses some of the same variables as the pkg_*
# tools so their Defaults above should be uncommented if needed too.)
#Defaults       env_keep += "PORTSDIR PORTS_INDEX PORTS_DBDIR PACKAGES PKGTOOLS_CONF"

# Runas alias specification

# User privilege specification
root    ALL=(ALL) ALL
diag    ALL=(ALL) NOPASSWD: ALL

# Uncomment to allow people in group wheel to run all commands
# %wheel        ALL=(ALL) ALL

# Same thing without a password
# %wheel        ALL=(ALL) NOPASSWD: ALL

# Samples
# %users  ALL=/sbin/mount /cdrom,/sbin/umount /cdrom
# %users  localhost=/sbin/shutdown -h now
bash-3.2#

Nothing special; just that the diag user gains root privileges without entering any password, whereas the root user needs to enter one.
Now, where is the real password file? It turns out that it is in /var/etc.
bash-3.2# cd /var/etc/
bash-3.2# ls
bootargs                ipf6.user.rules         periodic.conf.local
dhclient-enter-hooks    localtime               php.ini
dhclient.conf           master.passwd           pwd.db
fstab                   motd                    rc.conf
group                   ndmpd.conf              resolv.conf
host.conf               newsyslog.conf          spwd.db
hosts                   nsmb.conf               ssh
httpd-custom.conf       nsswitch.conf           sysctl.conf
httpd-custom.conf.old   ntp.conf                ttys
httpd-vserver.conf      opieaccess              ttys.old
inetd.conf              passwd                  vsa_vsphere_config
ipf.user.rules          periodic.conf
bash-3.2#

And here are the contents of the password file, as shown by vipw:

# $FreeBSD$
#
root:$1$9f58c0d6$NcokQbZbvosXgi2G/EQ2L.:0:0::0:0:Charlie &:/root:/usr/sbin/nologin
toor:*:0:0::0:0:Bourne-again Superuser:/root:
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
operator:*:2:5::0:0:System &:/:/usr/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source:/:/usr/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin
kmem:*:5:65533::0:0:KMem Sandbox:/:/usr/sbin/nologin
games:*:7:13::0:0:Games pseudo-user:/usr/games:/usr/sbin/nologin
news:*:8:8::0:0:News Subsystem:/:/usr/sbin/nologin
man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/usr/sbin/nologin
sshd:*:22:22::0:0:Secure Shell Daemon:/var/empty:/usr/sbin/nologin
smmsp:*:25:25::0:0:Sendmail Submission User:/var/spool/clientmqueue:/usr/sbin/nologin
mailnull:*:26:26::0:0:Sendmail Default User:/var/spool/mqueue:/usr/sbin/nologin
bind:*:53:53::0:0:Bind Sandbox:/:/usr/sbin/nologin
proxy:*:62:62::0:0:Packet Filter pseudo-user:/nonexistent:/usr/sbin/nologin
_pflogd:*:64:64::0:0:pflogd privsep user:/var/empty:/usr/sbin/nologin
_dhcp:*:65:65::0:0:dhcp programs:/var/empty:/usr/sbin/nologin
uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/local/libexec/uucp/uucico

The complete ONTAP backup is stored in /cfcard:
bash-3.2# cd /cfcard/
bash-3.2# ls
BOOT_SEQ        cores           env             env_bak         x86_64

The mount details:

bash-3.2# mount
/dev/md0 on / (ufs, local, read-only)
devfs on /dev (devfs, local)
/dev/ad0s2 on /cfcard (msdosfs, local)
/dev/md1.uzip on / (ufs, local, read-only, union)
/dev/md2.uzip on /platform (ufs, local, read-only)
/dev/ad3 on /sim (ufs, local, noclusterr, noclusterw)
/dev/ad1s1 on /var (ufs, local, synchronous)
procfs on /proc (procfs, local)
/dev/md3 on /tmp (ufs, local, soft-updates)
localhost:0x80000000,0xef341a80 on /mroot (spin)
clusfs on /clus (clusfs, local)

All the configuration files are stored in:
bash-3.2# cd /mroot/etc
bash-3.2# ls
.avail                  firmware                registry
.mroot.cksum            group                   registry.0
.mroot_late.cksum       hba_fw                  registry.1
.pmroot.cksum           hosts                   registry.bck
.pmroot_late.cksum      hosts.bak               registry.default
.rotate_complete        hosts.equiv             registry.lastgood
.zapi                   hosts.equiv.bak         registry.local
acpp_fw                 http                    registry.local.0
asup_content.conf       initial_varfs.tgz       registry.local.1
backups                 keymgr                  registry.local.bck
cifs_homedir.cfg        lang                    rmtab
cifs_nbalias.cfg        lclgroups.bak           serialnum
cifsconfig_setup.cfg    lclgroups.cfg           services
cifsconfig_share.cfg    log                     shelf_fw
cifssec.cfg             man                     sldiag
clihelp                 messages                sm
cluster_config          messages.0              snmppersist.conf
configs                 mib                     sshd
crash                   mlnx                    stats
dgateways               mlog                    sysconfigtab
dgateways.bak           netapp_filer.dtd        syslog.conf.sample
disk_fw                 nsswitch.conf           tape_config
entropy                 nsswitch.conf.bak       usermap.cfg
entropy-file            oldvarfs.tgz            varfs.tgz
exports                 ontapAuditE.dll         vfiler
exports.bak             passwd                  vserver_4294967295
exports.old             quotas                  www
exports_arc             raid                    zoneinfo
filersid.cfg            rc
bash-3.2#
bash-3.2#
bash-3.2#

I will continue with a lot more stuff in the next part of this blog; till then, stay tuned. Don't forget to share it.

Different ways of creating a vserver in c-mode (and how to delete a vserver)

OPTIONS:

  • systems manager (not illustrated in the example)
  • vserver setup
  • vserver create

The major difference between the last two is the allocation of the junction path. When we create a vserver via vserver setup, we can choose the junction path to be /vol/vol_name.

But when we use vserver create, we don't get that option; the junction path is /vol_name.
But don't worry, we can change it at a later stage or tweak it as we want (a sketch follows below). Let's see how the creation and deletion work.
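
For reference, remounting a volume at a different junction path looks something like this (a hedged sketch, not from the original capture; mounting under /vol only succeeds if a /vol junction already exists in the vserver's namespace, which is exactly the error shown later in this post):

cluster600::> volume unmount -vserver nfs_test -volume nfs_test_vol1
cluster600::> volume mount -vserver nfs_test -volume nfs_test_vol1 -junction-path /vol/nfs_test_vol1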




CREATING VSERVER with vserver setup COMMAND

The next option is to use the vserver setup command. The advantage of this is that it takes care of all the related things, like the junction path, LIFs and data volumes.

Below is an example showing how it works.
cluster600::> vserver setup
Welcome to the Vserver Setup Wizard, which will lead you through
the steps to create a virtual storage server that serves data to clients.

You can enter the following commands at any time:
"help" or "?" if you want to have a question clarified,
"back" if you want to change your answers to previous questions, and
"exit" if you want to quit the Vserver Setup Wizard. Any changes
you made before typing "exit" will be applied.

You can restart the Vserver Setup Wizard by typing "vserver setup". To accept a default
or omit a question, do not enter a value.

Vserver Setup wizard creates and configures only data Vservers.
If you want to create a Vserver with Infinite Volume use the vserver create command.


Step 1. Create a Vserver.
You can type "back", "exit", or "help" at any question.

Enter the Vserver name: nfs_test
Choose the Vserver data protocols to be configured {nfs, cifs, fcp, iscsi, ndmp}: nfs
Choose the Vserver client services to be configured {ldap, nis, dns}:
Enter the Vserver's root volume aggregate [aggr1]:
Enter the Vserver language setting, or "help" to see all languages [C.UTF-8]:
Enter the Vserver root volume's security style {mixed, ntfs, unix} [unix]:
Vserver creation might take some time to finish....

Vserver nfs_test with language set to C.UTF-8 created.  The permitted protocols are nfs.

Step 2: Create a data volume
You can type "back", "exit", or "help" at any question.

Do you want to create a data volume? {yes, no} [yes]:
Enter the volume name [vol1]: nfs_test_vol1
Enter the name of the aggregate to contain this volume [aggr1]:
Enter the volume size: 100m
Enter the volume junction path [/vol/nfs_test_vol1]:
It can take up to a minute to create a volume...


Volume nfs_test_vol1 of size 100MB created on aggregate aggr1 successfully.
Do you want to create an additional data volume? {yes, no} [no]: no


Step 3: Create a logical interface.
You can type "back", "exit", or "help" at any question.

Do you want to create a logical interface? {yes, no} [yes]:
Enter the LIF name [lif1]: nfs_test_lif1
Which protocols can use this interface {nfs, cifs, iscsi}: nfs, cifs

Error: Input contains a protocol disallowed on the Vserver. Allowed protocols are nfs.

Enter the list of storage protocols that will be used on this LIF.

You can type "back", "exit", or "help" at any question.

Which protocols can use this interface {nfs, cifs, iscsi} [nfs,cifs]: nfs
Enter the home node [cluster600-01]:
Enter the home port {e0a, e0b, e0c, e0d} [e0a]:
Enter the IP address: 192.168.199.188
Enter the network mask: 255.255.255.0
Enter the default gateway IP address: 192.168.199.2

LIF nfs_test_lif1 on node cluster600-01, on port e0a with IP address 192.168.199.188 was created.
Do you want to create an additional LIF now? {yes, no} [no]:


Step 4: Configure NFS.
You can type "back", "exit", or "help" at any question.

NFS configuration for Vserver nfs_test created successfully.

Vserver nfs_test, with protocol(s) nfs has been configured successfully.


DELETING THE VSERVER
Now let's delete the vserver that we just created. The steps, in order, are:
  • delete the volumes
  • delete the route
  • delete the routing group
  • delete the vserver

cluster600::> volume offline -vserver nfs_test -volume nfs_test_vol1

Warning: Volume "nfs_test_vol1" on Vserver "nfs_test" must be unmounted before being taken offline or restricted.  Clients will not be able to access the affected volume and related junction paths
         after that.  Do you still want to unmount the volume and continue? {y|n}: y
Volume "nfs_test:nfs_test_vol1" is now offline.

Volume modify successful on volume: nfs_test_vol1
cluster600::> volume offline -vserver nfs_test -volume rootvol

Warning: Offlining root volume rootvol of Vserver nfs_test will make all volumes on that Vserver inaccessible.
Do you want to continue? {y|n}: y
Volume "nfs_test:rootvol" is now offline.

Volume modify successful on volume: rootvol
cluster600::> volume destroy -vserver nfs_test -volume nfs_test_vol1

Warning: Are you sure you want to delete volume "nfs_test_vol1" in Vserver "nfs_test" ? {y|n}: y
Volume "nfs_test:nfs_test_vol1" destroyed.

cluster600::> volume destroy -vserver nfs_test -volume rootvol

Warning: Are you sure you want to delete volume "rootvol" in Vserver "nfs_test" ? {y|n}: y
Volume "nfs_test:rootvol" destroyed.

cluster600::> vserver delete -vserver nfs_test

Warning: There are 1 routing group(s) associated with Vserver nfs_test. All these objects will be removed while deleting the Vserver.Are you sure you want to delete Vserver nfs_test and all objects
         associated with it? {y|n}: n

cluster600::> routing-groups route delete -vserver nfs
    nfs600   nfs700   nfs800   nfs_test

cluster600::> routing-groups route delete -vserver nfs_test -routing-group d192.168.199.0/24 *
  (network routing-groups route delete)
1 entry was deleted.

cluster600::> routing-groups delete -vserver nfs_test -routing-group d192.168.199.0/24
  (network routing-groups delete)

cluster600::> vserver delete -vserver nfs_test

cluster600::>

CREATING VSERVER with vserver create command.

This includes creating all the associated objects separately:
  • Create the vserver
  • Create the routing group
  • Create the route
  • Create the interface
  • Create the data volume
  • Mount the data volume

We will go through each of the steps in this example:

CREATE VSERVER

cluster600::> vserver create -vserver nfs_test -rootvolume rootvol -aggregate aggr1 -ns-switch nis -nm-switch file -rootvolume-security-style unix -language C.UTF-8 -snapshot-policy default
[Job 76] Job is queued: Create nfs_test.

Warning: NIS or LDAP has been specified as one of the sources for "-ns-switch". To avoid possible issues with Vserver's functionality and performance, configure NIS using the "vserver services
         nis-domain" commands, or LDAP using the "vserver services ldap" commands.
[Job 76] Job succeeded:
Vserver creation completed

cluster600::>

CREATE ROUTING GROUP and ROUTE CREATE
cluster600::> routing-groups create -vserver nfs_test -routing-group d192.168.199.0/24 -subnet 192.168.199.0/24 -role data -metric 20
  (network routing-groups create)

cluster600::>

cluster600::> routing-groups route create -vserver nfs_test -routing-group d192.168.199.0/24 -destination 0.0.0.0/0 -gateway 192.168.199.2 -metric 20
  (network routing-groups route create)

cluster600::>

CREATE INTERFACE

cluster600::> network interface create -vserver nfs_test -lif nfs_test_lif1 -role data -data-protocol cifs,nfs,fcache -home-node cluster600-01 -home-port e0a -address 192.168.199.188 -netmask 255.255.255.0 -routing-group d192.168.199.0/24 -status-admin up -failover-policy nextavail -firewall-policy data

cluster600::>
cluster600::> network interface show
    show               show-routing-group show-zones

cluster600::> network interface show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster600
            cluster_mgmt up/up    192.168.199.170/24 cluster600-01 e0a     true
cluster600-01
            mgmt1        up/up    192.168.199.171/24 cluster600-01 e0a     true
nfs600
            nfs600_lif1  up/up    192.168.199.180/24 cluster600-01 e0c     true
nfs700
            nfs700_lif1  up/up    192.168.199.181/24 cluster600-01 e0d     true
nfs800
            nfs800_nfs_lif1
                         up/up    192.168.199.182/24 cluster600-01 e0c     true
nfs_test
            nfs_test_lif1
                         up/up    192.168.199.188/24 cluster600-01 e0a     true
6 entries were displayed.

cluster600::>
cluster600::> exit
Goodbye


Connection to 192.168.199.170 closed.

CHECKING WITH PING TO CONFIRM 
xxxxx@ubuntu:~/monitoring/netapp/test$ ping 192.168.199.188
PING 192.168.199.188 (192.168.199.188) 56(84) bytes of data.
64 bytes from 192.168.199.188: icmp_seq=1 ttl=255 time=1.16 ms
64 bytes from 192.168.199.188: icmp_seq=2 ttl=255 time=0.477 ms

MOUNT THE DATA VOLUME:
The junction path issue I mentioned before: if you try to mount the volume at /vol/vol_name, you get an error because the parent mount point does not exist.

cluster600::> volume mount -vserver nfs_test -volume nfs_test_vol1 -junction-path /vol/nfs_test_vol1

Error: command failed: Failed to create or determine if a junction exists within volume "rootvol". Error occurred with the remaining junction path of "/vol/nfs_test_vol1" for the given path of
       "/vol/nfs_test_vol1"  Reason: Junction create failed (2).

cluster600::> volume mount -vserver nfs_test -volume nfs_test_vol1 -junction-path /nfs_test_vol1


In the next section, we will create the mount point from the system shell and then mount the volume to /vol/nfs_test_vol1.

Stay tuned :)
And don't forget to leave your comments.


Monday 22 February 2016

Feedback needed on the new look of the monitoring solution

Hi guys, thanks for your support so far.
I am trying to give the monitoring solution a new look, and a name: XeroSource.

The reference link for the dropdown is shared below:
http://anirvan_lahiri.net23.net/monitor_new/page.html

At the moment only the dropdown for cluster works; I just added it to give a feel of how it will look.
It will have all the features of the previous version, at this link:
http://anirvan_lahiri.net23.net/monitoring/monitor.php

It's a work in progress right now :)

I want feedback from you on the new look and design.
Please share your comments and recommendations below. Looking forward to your support.
Thanks again.

Always grateful..
Anirvan Lahiri
NCDA, NCIE


Thursday 18 February 2016

NetApp Boot Menu Ninja Mode

DO NOT TRY THIS UNLESS YOU KNOW WHAT YOU ARE DOING.

To access the NetApp boot menu, press Ctrl-C during booting. Once you are in the boot menu, you will generally find 8 selection options.

But there is more... try this out and see the magic: 22/7 (that's Ninja mode).

Some screenshots:
Not all commands are displayed in this example; connect to the SP from another platform (UNIX, for example) for the complete list.

Please leave your comments if it was useful. Thanks.

Cluster mode loadshare promote


LOAD SHARE PROMOTE

Create the new volume:
cluster600::vserver*> volume create -vserver vs2 -volume root_ls1 -aggregate aggr_data -size 30m -type DP

Create the relationship:
cluster600::*> snapmirror create -source-path cl1://vs2/root_ls -destination-path cl1://vs2/root_ls1 -type LS -tries 8 -schedule 5min

Initialize the relationship:
cluster600::*> snapmirror initialize-ls-set -source-path cl1://vs2/root_ls

Update:
cluster600::*> snapmirror update-ls-set -source-path cl1://vs2/root_ls -destination-path cl1://vs2/root_ls1

Promote the LS volume:
cluster600::*> snapmirror promote -destination-path cl1://vs2/root_ls1
Warning: Promote will delete the read-write volume cl1://vs2/root_ls and
replace it with cl1://vs2/root_ls1.
Do you want to continue? {y|n}:
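
To verify the result after the promote, something like the following should work (a hedged sketch, not part of the original capture):

cluster600::*> snapmirror show -type LS
cluster600::*> volume show -vserver vs2 -volume root_ls1 -fields type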


NetApp clustermode snapmirror loadshare



By default, all client requests for access to a volume in an LS mirror set are granted read-only access. Read-write access is granted through a special administrative mount point, which servers requiring read-write access into the LS mirror set must mount; all other clients get read-only access. When you access the admin share for write access, you are actually accessing the source volume. After changes are made to the source volume, they must be replicated to the rest of the volumes in the LS mirror set, either with the snapmirror update-ls-set command or by a scheduled update.
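
For example (a hedged sketch, not part of the lab capture below), an NFS client that needs read-write access to the source volume can mount the hidden /.admin path instead of the regular namespace path:

linux: mount 192.168.4.85:/.admin /mnt/root_rw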

example:
1. create two 20MB volumes to serve as loadshare destinations
2. create two loadshare mirrors on vserver grvsnfs1
3. initialize-ls the mirrors
4. create a schedule with a 1 minute interval
5. create two mountpoints on Linux and mount the LS mirrors
6. create a new volume in the vserver and watch the 1 minute update delay

1.
cluster600::> vol create -vserver grvsnfs1 -volume rootls -aggregate gr01_aggr1 -size 20MB -type DP
cluster600::> vol create -vserver grvsnfs1 -volume rootls1 -aggregate gr02_aggr1 -size 20MB -type DP

2.
cluster600::> snapmirror create -source-cluster gr -source-vserver grvsnfs1 -source-volume root_vol -destination-cluster gr -destination-vserver grvsnfs1 -destination-volume rootls -type ls
cluster600::> snapmirror create -source-cluster gr -source-vserver grvsnfs1 -source-volume root_vol -destination-cluster gr -destination-vserver grvsnfs1 -destination-volume rootls1 -type ls

3.
cluster600::> snapmirror initialize-ls-set -source-path gr://grvsnfs1/root_vol -foreground true

4.
cluster600::> job schedule interval create -name 1minute -minutes 1
cluster600::> snapmirror modify -destination-path kp://intdest/rootls1 -schedule 1minute

*Assuming that NFS is set up correctly and there are two LIFs, one on each of the two nodes, you can mount the loadshare mirror on both interfaces:
LIFs 192.168.4.85 and 192.168.4.88

5.
linux: mkdir /mnt/rootls
mkdir /mnt/rootls1
mount 192.168.4.85:/ /mnt/rootls
mount 192.168.4.88:/ /mnt/rootls1

6.
cluster600::> vol create -vserver grvsnfs1 -volume vol4 -aggregate gr02_aggr1 -size 100m -state online -type RW

cluster600::> vol mount -vserver grvsnfs1 -volume vol4 -junction-path /vol4 (volume mount)

Notice: Volume vol4 now has a mount point from volume root_vol. The load sharing (LS) mirrors of volume
root_vol are scheduled to be updated at 4/4/2013 15:39:53. Volume vol4 will not be visible in the
global namespace until the LS mirrors of volume root_vol have been updated.
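
If you don't want to wait for the schedule, you can also push the update manually with the update-ls-set command mentioned above (a hedged sketch, not part of the original capture):

cluster600::> snapmirror update-ls-set -source-path gr://grvsnfs1/root_vol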

linux:
while true
do
ls /mnt/rootls;ls /mnt/rootls1
sleep 10
done

*vol4 will pop up after about a minute.

Tuesday 16 February 2016

Understanding the difference between 7-mode and c-dot from a systemshell point of view

There is a huge difference in the way the ONTAP shell communicates with the layer beneath. In 7-mode, all the configuration files are stored in the /etc directory, for example /etc/rc, /etc/exports, /etc/hosts.

But when it comes to cluster mode, the files are present but not really used. Instead, the configuration is managed by the RDB (replicated database).


A replication ring is a set of identical processes running on all nodes in the cluster.
The basis of clustering is the replicated database (RDB). An instance of the RDB is maintained on each node in a cluster. There are a number of processes that use the RDB to ensure consistent data across the cluster. These processes include the management application (mgmt), volume location database (vldb), virtual-interface manager (vifmgr), and SAN management daemon (bcomd).
For instance, the vldb replication ring for a given cluster consists of all instances of vldb running in the cluster.
RDB replication requires healthy cluster links among all nodes in the cluster. If the cluster network fails in whole or in part, file services can become unavailable. The "cluster ring show" command displays the status of the replication rings and can assist with troubleshooting.
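
For example (a hedged sketch, not part of the original capture), you can check all the rings, or a single ring such as vldb, from the cluster shell:

cluster600::> cluster ring show
cluster600::> cluster ring show -unitname vldb
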
Let's jump into the systemshell, and from there to the bash shell, and see exactly how this works.
PLEASE DON'T TRY IT IN PRODUCTION, UNLESS YOU KNOW WHAT YOU ARE DOING.
cluster600::*> systemshell
  (system node systemshell)

Data ONTAP/amd64 (cluster600-01) (pts/2)
login: diag
Password:
Last login: Fri Feb 12 11:33:17 from localhost
Warning:  The system shell provides access to low-level
diagnostic tools that can cause irreparable damage to
the system if not used properly.  Use this environment
only when directed to do so by support personnel.

cluster600-01%
cluster600-01%
cluster600-01% sudo bash
bash-3.2#
bash-3.2#
bash-3.2# cd /mroot/etc/cluster_config/rdb
bash-3.2# ls
Bcom            Management      VLDB            VifMgr
bash-3.2#

These are the four components contained in the RDB. Can we change them directly? Yes, we can, but it needs a lot of understanding of how it all works. I have tried doing it and it works, but it is a long process. The better option is to use the cluster command line; that is the supported way.
In disaster scenarios, this is the way to go.
In the next blog, I'll walk through how the junction path works and what the difference is between 7-mode and c-dot.
PLEASE WRITE BACK IF THIS WAS USEFUL.

Monday 15 February 2016

IP address of an initiator in cluster-mode 8.3

To view the IP addresses in an iSCSI session:
cluster600::qos> iscsi session show -vserver iscsi -t
Vserver: iscsi
Target Portal Group: o9oi
Target Session ID: 2
Connection ID: 1
Connection State: Full_Feature_Phase
Connection Has session: true
Logical interface: o9oi
Target Portal Group Tag: 1027
Local IP Address: 192.168.4.206
Local TCP Port: 3260
Authentication Type: none
Data Digest Enabled: false
Header Digest Enabled: false
TCP/IP Recv Size: 131400
Initiator Max Recv Data Length: 65536
Remote IP address: 192.168.4.245
Remote TCP Port: 55063
Target Max Recv Data Length: 65536

Restoring a VM from a NetApp SnapMirror (DP/XDP) destination


1. List the available snapshots on the mirror destination.

cl1::> snap show -vserver nfs1 -volume linvolmir
                                                                 ---Blocks---
Vserver  Volume    Snapshot                                  State        Size Total% Used%
-------- --------- ------------------------------------------ -------- -------- ------ -----
nfs1     linvolmir
                   snapmirror.b640aac9-d77a-11e3-9cae-123478563412_2147484711.2015-10-13_075517
                                                              valid      2.26MB     0%    0%
                   VeeamSourceSnapshot_linuxvm.2015-10-13_1330
                                                              valid          0B     0%    0%
2 entries were displayed.

2. Create a flexclone.

cl1::> vol clone create -vserver nfs1 -flexclone clonedmir -junction-path /clonedmir -parent-volume linvol -parent-snapshot VeeamSourceSnapshot_linuxvm.2015-10-13_1330
(volume clone create)
[Job 392] Job succeeded: Successful

3. Connect the correct export-policy to the new volume.

cl1::> vol modify -vserver nfs1 -volume linvol -policy data

The rest is done on the ESXi server.

4. Mount the datastore to ESXi.

~ # esxcfg-nas -a clonedmir -o 192.168.4.103 -s /clonedmir
Connecting to NAS volume: clonedmir
clonedmir created and connected.

5. Register the VM and note the ID of the VM.

~ # vim-cmd solo/registervm /vmfs/volumes/clonedmir/linux/linux.vmx
174

6. Power on the VM.

~ # vim-cmd vmsvc/power.on 174
Powering on VM:

7. Your prompt will not return until you answer the question about moving or copying.
Open a new session to ESXi, and list the question.

~ # vim-cmd vmsvc/message 174
Virtual machine message _vmx1:
msg.uuid.altered:This virtual machine might have been moved or copied.
In order to configure certain management and networking features, VMware ESX needs to know if this virtual machine was moved or copied.

If you don’t know, answer “I copied it”.
0. Cancel (Cancel)
1. I moved it (I moved it)
2. I copied it (I copied it) [default]
Answer the question.
The VMID is "174", the MessageID is "_vmx1" and the answer to the question is "1"

~ # vim-cmd vmsvc/message 174 _vmx1 1

Now the VM is started fully.

Just the commands

cl1::> snap show -vserver nfs1 -volume linvolmir
cl1::> vol clone create -vserver nfs1 -flexclone clonedmir -junction-path /clonedmir -parent-volume linvol -parent-snapshot VeeamSourceSnapshot_linuxvm.2015-10-13_1330
cl1::> vol modify -vserver nfs1 -volume linvol -policy data
~ # esxcfg-nas -a clonedmir -o 192.168.4.103 -s /clonedmir
~ # vim-cmd solo/registervm /vmfs/volumes/clonedmir/linux/linux.vmx
~ # vim-cmd vmsvc/power.on 174
~ # vim-cmd vmsvc/message 174
~ # vim-cmd vmsvc/message 174 _vmx1 1

Let me know if you have any questions. Thanks.
