Monday, 20 January 2014

NetApp ONTAP Upgrade: Single-Node Controller.

Upgrading the Data ONTAP version on a single-node controller.


Download the required Data ONTAP image from support.netapp.com.
Copy the compressed image file as-is to /vol/vol0/etc/software.
You can use either CIFS or NFS to copy it.
One more option is to use a web server to transfer the file.
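If you go the web-server route, a minimal sketch (the URL below is purely hypothetical) is to pull the image straight into /etc/software with the software get command:

filer1> software get http://webserver.example.com/downloads/813P3_q_image.tgz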

Once the software is copied, check it with the software list command:

filer1> software list
813_q_image_FAS6070.tgz
RLM_FW41.zip
811P2_q_image.tgz
812P1_q_image.tgz
81_q_image.tgz
811P1_q_image.tgz
30802414.zip
813P3_q_image.tgz
RLM_FW40.zip


I downloaded and copied 813P3_q_image.tgz, which is now available in the software list.
Once it is confirmed, use software update to install the image (the -r flag suppresses the automatic reboot, so the new image only becomes the default boot image after you restart).

filer1> software update 813P3_q_image.tgz -r
software: You can cancel this operation by hitting Ctrl-C in the next 6 seconds.
software: Depending on system load, it may take many minutes
software: to complete this operation. Until it finishes, you will
software: not be able to use the console.
Software update started on node filer1. Updating image1 package: file://localhost/mroot/etc/software/813P3_q_image.tgz current image: image2

Listing package contents.
Decompressing package contents.

Invoking script (validation phase).
INSTALL running in check only mode
Mode of operation is UPDATE
Current image is image2
Alternate image is image1
Available space on boot device is 727 MB
Required  space on boot device is 265 MB
Kernel binary matches install machine type
Package MD5 checksums pass
Versions are compatible
Invoking script (install phase). This may take up to 30 minutes.
Mode of operation is UPDATE
Current image is image2
Alternate image is image1
Available space on boot device is 727 MB
Required  space on boot device is 265 MB
Kernel binary matches install machine type
Package MD5 checksums pass
Versions are compatible
Getting ready to install image
Syncing device...
Extracting to /cfcard/x86_64/freebsd/image1...
x BUILD
x CHECKSUM
x COMPAT.TXT
x INSTALL
x README.TXT
x VERSION
x cap.xml
x diags.tgz
x kernel
x perl.tgz
x platform.ko
x platfs.img
x rootfs.img
Installed MD5 checksums pass
Installing diagnostics and firmware files
Installation complete. image1 updated on node filer1.
image1 will be set as the default boot image after a clean shutdown.
software: installation of 813P3_q_image.tgz completed.
Mon Jan 20 19:43:23 EET [filer1:cmds.software.installDone:info]: Software: Installation of 813P3_q_image.tgz was completed.
Please type "reboot" for the changes to take effect.

filer1> reboot
reboot
Total number of connected CIFS users: 2
     Total number of open CIFS files: 0
Warning: Terminating CIFS service while files are open may cause data loss!!
Enter the number of minutes to wait before disconnecting [5]: 1
Waiting for PIDS: 1913.
Waiting for PIDS: 640.
Waiting for PIDS: 617.
Setting default boot image to image1... done.
.
Jan 20 Uptime: 27d3h42m42s
Top Shutdown Times (ms): {shutdown_wafl=8576(multivol=0, sfsr=0, abort_scan=0, snapshot=0, start=323, sync1=401, sync2=2, mark_fs=7850), shutdown_snapvault=4100, if_reset=2137, shutdown_raid=1125, shutdown_snapmirror=309, shutdown_dense=30, emsmisc_dump_buffer=28, shutdown_fm=28, nfs_off_all_vfilers=17, wafl_sync=11}
Shutdown duration (ms): {CIFS=17385, NFS=12926, ISCSI=12926, FCP=12926}
System rebooting...


Once the filer reboots, check the running release with the version command to confirm the upgrade.
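For example, a quick sanity check could look like this (the release string is inferred from the image name and shown only as a sketch; your build timestamp will differ):

filer1> version
NetApp Release 8.1.3P3 7-Mode: <build date>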

Monday, 13 January 2014

Stop unwanted console messages: CLI


Edit /vol/vol0/etc/syslog.conf and modify the line for /dev/console. 

If you comment it out, then no messages are written to the console. 
Or you can specify the minimum severity of messages written to the console. 
For example,  *.warning shows warnings and above, which excludes "info" 
and "notice" level messages. 

You should leave the configuration for /etc/messages alone, since 
NetApp AutoSupport reads that file.


ssh <filer_name> rdfile /vol/vol0/etc/syslog.conf


*.info /etc/messages

In the example above I have already updated syslog.conf, so the /dev/console line is gone and only the /etc/messages entry remains.
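As a sketch, a syslog.conf that keeps logging everything of level info and above to /etc/messages but only sends warnings and above to the console could look like this (standard syslog.conf syntax; adjust to taste):

*.info /etc/messages
*.warning /dev/console

You can edit the file over a CIFS/NFS mount of /vol/vol0/etc, or with wrfile from the console.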


NetApp CLI shortcut keys

Shortcut keys are sometimes very useful; feel free to try them:

If you want to... 

  1. Move the cursor right one position: Ctrl-F or the Right arrow key
  2. Move the cursor left one position: Ctrl-B or the Left arrow key
  3. Move the cursor to the end of the line: Ctrl-E
  4. Move the cursor to the beginning of the line: Ctrl-A
  5. Delete all characters from the cursor to the end of the line: Ctrl-K
  6. Delete the character to the left of the cursor and move the cursor left one position: Ctrl-H
  7. Delete the line: Ctrl-U
  8. Delete a word: Ctrl-W
  9. Reprint the line: Ctrl-R
  10. Abort the current command: Ctrl-C
Feel free to add if I have missed any :)

NetApp Hardware Basics

In this post we will review some NetApp back-end connectivity and hardware basics: disk ownership, single path and multipath, partner cabling, and so on. I decided to write this article because I have been working with NetApp for some time and never found a NetApp document explaining this material, so learning and figuring it out has been a challenge, and I think it is worth sharing my progress with others who are starting to work with NetApp storage.
Let’s see what we are going to talk about:
  • Head FC ports
  • Shelves and modules
  • Disk ownership (hardware and software)
  • Cabling (single path and multipath, single nodes and clusters)

Head FC ports

Filers need FC ports in order to connect to fabrics and shelves (SAS shelves also exist, but we will only cover FC shelves in this post). This is the rear view of a FAS3020; as you can see, it comes with 4 onboard FC ports (marked in orange):
[Image: rear view of a FAS3020 with the four onboard FC ports highlighted]
These ports are named 0a, 0b, 0c and 0d, and as mentioned they can be used to connect shelves (configure them as initiators) or to connect to hosts (configure them as targets). You can define whether an FC port is a target or an initiator using the fcadmin config command.
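For example, you could check the current personality of the onboard ports and, as a sketch (output omitted), switch 0d from initiator to target; the change only takes effect after a reboot:

filer1> fcadmin config
filer1> fcadmin config -t target 0d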
As you can see there are 4 expansion slots where you can install additional HBAs to give the filer more FC ports. In that case the ports are named '#X', where '#' is the number of the slot where the HBA has been installed and 'X' is the port index, which can be 'a' or 'b' since most of the HBAs are dual-port.
You can use the sysconfig command to see how many ports your system has and what is connected to them. Whatever hangs off a given port is known as a loop or stack; for example, if port 0a has 3 shelves behind it, that is a loop (or stack) of 3 shelves. Here is an example from a NetApp simulator; since it is a simulator, the adapters are named v1, v2, v3, and so on:
[Image: sysconfig output from a NetApp simulator]

Shelves and Modules

As with any storage system, besides the controllers or heads there are shelves that hold the disks. NetApp has different types of disks, shelves and modules, and the disk type you select determines which shelf and module types you can use. Check the following diagram to understand the possible combinations (as already said, we won't cover SAS shelves, but you can take a look at http://www.netapp.com/mx/products/storage-systems/disk-shelves-and-storage-media/disk-shelves-tech-specs-la.html for further information):
[Image: diagram of supported disk, shelf, and module combinations]
Let’s see what each box means:

Disk type:

  • ATA disks: you are probably already familiar with this type of disk.
  • SATA disks: also well known by now.
  • FC disks: Fibre Channel disks.
To read more about disk types, especially supported types and sizes, there is a very good NOW article: Available disk capacity by disk size (registration may be required to read it).

Shelf Models

  • ds14mk2: This shelf accepts 14 FC or ATA disks, takes 3 rack units, and accepts ESH2 and AT-FCX modules.
  • ds14mk4: This shelf accepts 14 FC disks, takes 3 rack units, and only accepts ESH4 modules.

Modules

Modules allow different disk shelves to be connected to the storage FC loops.
  • ESH2: This module is used for FC disks and connects to a 2 Gb FC bus.
  • ESH4: This module is used for FC disks and connects to a 4 Gb FC bus.
  • AT-FCX: This module is used for ATA or SATA disks and connects to a 2 Gb FC bus.
This is a general guide so you can understand the concepts and differences between disks, shelf types and modules; for further information see the technical specs of your storage box. Here you can see the rear view of a disk shelf with only one module installed. Each shelf has space for 2 modules: module A (in the top slot) and module B (in the bottom slot, empty in the next picture).
[Image: rear view of a disk shelf with only module A installed]
As we said before, shelves are connected in a loop or stack. Each module has an IN port and an OUT port, which allow shelves to be daisy-chained, and between the modules there is a little green display used to set the shelf ID within the loop. In the following picture you can see a single FAS3020 controller with 2 shelves in loop 0a:
[Image: single FAS3020 controller with two shelves on loop 0a]
This is a single path configuration, but we will talk about that later. The picture shows how the rx (receive) and tx (transmit) fibers from port 0a on the filer head are connected to the IN port of module A in shelf 1; then from module A in shelf 1, rx and tx fibers run from the OUT port to the IN port of module A in shelf 2.

Disk ownership

Well, here we will make a quick stop and just say there are two types of disk ownership. But first, let's define ownership: disks are owned by a filer, which is the one that manages the LUNs, shares, exports, snapshots and all other operations on the volumes it hosts. In a cluster scenario each filer must know about its partner's disks, but it only takes ownership of those resources in case of a takeover.
About the types of ownership, we have:
  • Software based: The less common one; disk ownership is managed with the disk assign command (see the sketch after this list), and disks owned by a filer may be distributed across all shelves belonging to the cluster.
  • Hardware based: The most common one; the filer connected to the A module of a shelf owns its disks.
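As a rough sketch of the software-ownership workflow (the disk name and filer name are illustrative), you would list the unowned disks, assign them, and then verify:

filer1> disk show -n
filer1> disk assign 0a.16 -o filer1
filer1> disk assign all
filer1> disk show -v

disk show -n lists unassigned disks, disk assign all grabs every unowned disk at once, and disk show -v shows the resulting ownership.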

Cabling

NetApp supports lot’s of cabling configurations, let’s start reviewing from the simplest one, to the more complex ones. As said in the previous sections of this post there are other supported configurations, such as metro cluster, but here we will cover the most common ones:
  • Single node, single path
  • Single node, multipath
  • Cluster, single path
  • Cluster, multipath

Single node, single path

The simplest configuration, and the least redundant one, since it has many single points of failure: a single controller, only one loop to each stack, and only one module per shelf. In this configuration, rx (receive) and tx (transmit) fibers are connected from any of the FC ports on the controller to the IN port of module A in the first shelf of the loop or stack; then from module A in that shelf, rx and tx fibers run from the OUT port to the IN port of module A in shelf 2, and so on for all the shelves in each stack.
[Image: single node, single path cabling diagram]

Single node, multipath

This configuration is more resilient than the previous one since it removes all single points of failure but one (there is still only one controller):
[Image: single node, multipath cabling diagram]
As you can see, in this case shelves have 2 modules each and there are two loops connected to the same stack of shelves (0a, solid line, and 0c, dotted line).
Why have we used 0a for the primary loop and 0c for the secondary? Well, there is no actual limitation, you can use any adapter you like, but NetApp recommends using 0a as primary and 0c as secondary (and the same for 0b/0d), even across nodes; we will talk about this later.
Remember when we talked about disk ownership? If hardware disk ownership is in effect (which it is in 99% of cases), the 0a loop, connected to the A modules, owns the disks; if a module or fiber cable fails, resources start being accessed through the 0c loop. If you run sysconfig as we saw earlier in this post, you would see there are 6 shelves on the system, 3 attached to 0a and 3 attached to 0c. Using the environment shelf command and the storage show disk -p command you can identify which shelves are duplicated and which loop is connected to the A modules and which to the B modules.
For example, if you run the environment shelf command you would obtain something like the following for each shelf on the system; with 3 shelves in 2 loops you would see this 6 times:
Channel: v0
Shelf: 1
SES device path: local access: v0.17
Module type: LRC; monitoring is active
Shelf status: normal condition
SES Configuration, via loop id 17 in shelf 1:
logical identifier=0x0b00000000000000
vendor identification=XYRATEX
product identification=DiskShelf14
product revision level=1111
Vendor-specific information: 
Product Serial Number:          Optional Settings: 0x00
Status reads attempted: 844; failed: 0
Control writes attempted: 3; failed: 0
Shelf bays with disk devices installed:
13, 12, 11, 10, 9, 8, 6, 5, 4, 3, 2, 1, 0
with error: none
Power Supply installed element list: 1, 2; with error: none
Power Supply information by element:
[1] Serial number: sim-PS12345-1  Part number: <N/A>
Type: <N/A>
Firmware version: <N/A>  Swaps: 0
[2] Serial number: sim-PS12345-2  Part number: <N/A>
Type: <N/A>
Firmware version: <N/A>  Swaps: 0
Cooling Unit installed element list: 1, 2; with error: none
Temperature Sensor installed element list: 1, 2, 3; with error: none
Shelf temperatures by element:
[1] 24 C (75 F) (ambient)  Normal temperature range
[2] 24 C (75 F)  Normal temperature range
[3] 24 C (75 F)  Normal temperature range
Temperature thresholds by element:
[1] High critical: 50 C (122 F); high warning 40 C (104 F)
Low critical:  0C (32 F); low warning 10 C (50 F)
[2] High critical: 63 C (145 F); high warning 53 C (127 F)
Low critical:  0C (32 F); low warning 10 C (50 F)
[3] High critical: 63 C (145 F); high warning 53 C (127 F)
Low critical:  0C (32 F); low warning 10 C (50 F)
ES Electronics installed element list: 1, 2; with error: none
ES Electronics reporting element: none
ES Electronics information by element:
[1] Serial number: sim-LS12345-1  Part number: <N/A>
CPLD version: <N/A>  Swaps: 0
[2] Serial number: sim-LS12345-2  Part number: <N/A>
CPLD version: <N/A>  Swaps: 0
The first serial number, the shelf's Product Serial Number, is missing here because this output comes from a simulated filer; the two ES Electronics serial numbers identify the A and B modules in that shelf respectively. So the serial numbers help you understand which shelves are connected to which loops, and the storage show disk -p command then helps you identify which loop is the primary:
[Image: storage show disk -p output from the simulator]
As you can see, the 3 simulated shelves we saw in the sysconfig output are connected only to the v0 adapter (the NetApp simulator does not emulate multipathing to shelves), and you can also see that the primary port is A.
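For reference, the output of storage show disk -p looks roughly like the sketch below (column layout approximate, only a couple of simulator disks shown); on a multipathed system the SECONDARY columns would be populated with the other loop:

PRIMARY  PORT  SECONDARY  PORT  SHELF  BAY
-------  ----  ---------  ----  -----  ---
v0.16    A                          1    0
v0.17    A                          1    1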

Cluster, single path

Take a look at the following configuration:
[Image: cluster, single path cabling diagram]
We have 2 nodes interconnected by an InfiniBand cable (this interconnect is used for heartbeat and other cluster-related operations and checks), and then we have 2 stacks, one with 3 shelves and another with 2 shelves. The first stack has all its A modules connected to the 0a loop on controller 1, while the B modules are connected to the 0c loop on controller 2. As in a single node multipath configuration, 0a/0c is used, but the difference is that now the 0a loop belongs to the owning filer and 0c to the partner, instead of 0a and 0c being on the same node. The very same configuration applies to node 2: its 0a loop is connected to the A modules in the second stack, and the 0c adapter on controller 1 connects the partner to that stack.
In this configuration there is no single point of failure, but there is still one downside: if an A module, or a fiber in a primary loop, fails, the partner has to take over the resources. For example, if module A in shelf 1 (the upper one) on the 0a loop of controller 1 fails (I know it might sound confusing, read it twice if necessary, I had to), controller 1 loses connectivity to the whole stack, and controller 2 has to take over controller 1's resources in order to keep serving storage. Unfortunately, the takeover process restarts the CIFS service and all CIFS connections are dropped, so it can be really disruptive.
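Takeover and giveback are driven with the cf commands; a minimal sketch (run from the surviving node, output omitted) would be:

filer2> cf status
filer2> cf takeover
filer2> cf giveback

Depending on the cf options configured, the takeover may also be triggered automatically; cf giveback is run once the faulty module or cable has been replaced.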

Cluster, multipath

Ok, now take a look at the following graph and go crazy!

[Image: cluster, multipath cabling diagram]
Let’s do some writing to describe connections because it is really hard to follow the lines hehe:
Controller 1 (left) has:
  • 0a loop connected to A modules in stack 1 (3 shelves, left).
  • 0c adapter is connected to B modules in stack 2 (2 shelves, right).
  • 0b loop is used to provide a second path from controller 1 to stack 1; it is connected to the OUT port of module B in the last shelf of the loop. This way, if module A fails in any of the shelves of this stack, controller 1 still has access to the disks without needing to fail resources over to the partner.
  • 0d port is connected to the OUT port of module A in shelf 2 of stack 2. This way, if resources have been failed over from controller 2 to controller 1, you still don't have a single point of failure.
Controller 2 (right) has:
  • 0a loop connected to A modules in stack 2 (2 shelves, right).
  • 0c adapter is connected to B modules in stack 1 (3 shelves, left).
  • 0b loop is used to provide a second path from controller 2 to stack 2; it is connected to the OUT port of module B in the last shelf of the loop. This way, if module A fails in any of the shelves of this stack, controller 2 still has access to the disks without needing to fail resources over to the partner.
  • 0d port is connected to the OUT port of module A in shelf 3 of stack 1. This way, if resources have been failed over from controller 1 to controller 2, you still don't have a single point of failure.
As you might have already guessed, this is the most redundant configuration (at least among the standard ones; I have never worked with MetroCluster, for example, so I can't talk about it). The only downside is that you have to use 2 FC ports per head to provide access to a stack of shelves, which can become really expensive in a FAS6240 environment with lots of shelves in different stacks.
I hope you liked the article and that it helped you understand some of the basics. As always, questions and comments are welcome.

Cluster mode NetApp

Introduction to Clustered Data ONTAP:
Data ONTAP 8 merges the capabilities of Data ONTAP 7G and Data ONTAP GX into a single code base with two distinct operating modes: 7-Mode, which delivers capabilities equivalent to the Data ONTAP 7.3.x releases, and Cluster-Mode, which supports multicontroller configurations with a global namespace and clustered file system. As a result, Data ONTAP 8 allows you to scale up or scale out storage capacity and performance in whatever way makes the most sense for your business.
With Cluster-Mode the basic building blocks are the standard FAS or V-Series HA pairs with which you are already familiar (active-active configuration in which each controller is responsible for half the disks under normal operation and takes over the other controller’s workload in the event of a failure). Each controller in an HA pair is referred to as a cluster “node”; multiple HA pairs are joined together in a cluster using a dedicated 10 Gigabit Ethernet (10GbE) cluster interconnect. This interconnect is redundant for reliability purposes and is used for both cluster communication and data movement.
What does scale-out storage mean to you?
Scale-out storage is the most powerful and flexible way to respond to the inevitable data growth and data management challenges in today’s environments. Consider that all storage controllers have physical limits to their expandability—for example, number of CPUs, memory slots, and space for disk shelves—that dictate the maximum capacity and performance of which the controller is capable.
If more storage or performance capacity is needed, you might be able to upgrade or add CPUs and memory or install additional disk shelves, but ultimately the controller will be completely populated, with no further expansion possible. At this stage, the only option is to acquire one or more additional controllers.
Historically this has been achieved by simple “scale-up,” with two options: either replace the old controller with a complete technology refresh, or run the new controller side by side with the original. Both of these options have significant shortcomings and disadvantages.
With this basic introduction to NetApp clustered ONTAP, I would like to walk through the basic steps of setting up a clustered ONTAP system.
Step 1: Hardware setup
a. Connect controllers to disk shelves (FC connectivity)
b. Connect the NVRAM/high-availability interconnect cable between the partners (10GbE or InfiniBand)
c. Connect the controllers to the network so that each node has exactly two connections to the dedicated cluster network and at least one data connection, plus the well-known RLM connection for troubleshooting when needed.
Note: Cluster connections must be on a network dedicated to cluster traffic, whereas data and management connections are on a distinct network.
Step 2: Power-up
a. Power up network switches
b. Power up disk shelves
c. Power up storage controllers
Step 3: Firmware
a. During the boot process, press any key to enter the firmware prompt
b. Two compact flash images are available: flash0a and flash0b. To 'flash' (put) a new image onto the primary flash, you first need to configure the management interface.
Note: For the auto option of ifconfig, a DHCP or BOOTP server must be available on the management network. If there is none, run ifconfig manually with addr=, mask= and gw= (see the sketch below).
c. Once the network is configured, ping to test it, then flash the image: run flash tftp://<tftp_server>/<path_to_image> flash0a
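As a sketch with purely hypothetical addresses (the exact firmware prompt and ifconfig argument syntax vary by platform and firmware version, so treat this as illustrative only), steps (b) and (c) could look like:

LOADER> ifconfig e0M -addr=192.168.1.50 -mask=255.255.255.0 -gw=192.168.1.1
LOADER> ping 192.168.1.10
LOADER> flash tftp://192.168.1.10/<path_to_image> flash0a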
Step 4: Installing ONTAP 8.1
a. Run option 7 to install the new software first
b. Enter a URL to the ONTAP 8.1 tgz image
c. Allow the system to boot when complete
Note: You can type boot_primary if the node stops at the firmware prompt
Step 5: Initialize a node
a. Run option 4
b. This initialization clears the three disks that the system uses for the first aggregate it creates, with a vol0 root volume on it
c. This must be run on both nodes of each HA pair
Step 6: Cluster setup wizard
a. From the boot menu, boot normally and log in as "admin" with no password
b. The first node creates the cluster
c. The following information is required for the setup:
-Cluster name
-Cluster network ports and MTU size
-Cluster base license key
-Cluster management port, IP address, netmask, and default gateway
-Node management port, IP address, netmask, and default gateway
-DNS domain name
-IP address of DNS server
d. Subsequent nodes join the cluster
Step 7: Normal boot sequence
a. Firmware loads the kernel from CF
b. Kernel mounts “/” root image from rootfs.img on CF
c. Init is loaded and startup scripts run
d. NVRAM kernel modules get loaded
e. Tmgwd is started
f. D-blade, N-blade and other components are loaded
g. vol0 root volume is mounted from local D-blade
h. CLI and element manager are ready for use
Step 8: Create a cluster
cluster create -license <key> -clustername <name> -mgmt-port <port> -mgmt-ip <ip> -mgmt-netmask <netmask> -mgmt-gateway <gateway> -ipaddr1 <cluster_ip1> -ipaddr2 <cluster_ip2> -netmask <cluster_netmask> -mtu 9000
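As a sketch with entirely made-up values (the license key, names, port and addresses below are hypothetical placeholders, not real settings), a populated command might look like:

cluster create -license AAAAAAAAAAAAAAAA -clustername cluster1 -mgmt-port e0M -mgmt-ip 192.168.1.20 -mgmt-netmask 255.255.255.0 -mgmt-gateway 192.168.1.1 -ipaddr1 169.254.10.1 -ipaddr2 169.254.10.2 -netmask 255.255.0.0 -mtu 9000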
Step 9: Join a cluster
Run this command from the node that wants to join the cluster:
cluster join -clusteripaddr <existing_cluster_ip> -ipaddr <cluster_ip1> -ipaddr2 <cluster_ip2> -netmask <cluster_netmask> -mtu 9000
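Again with hypothetical addresses (the cluster-network IPs must be unique per node):

cluster join -clusteripaddr 169.254.10.1 -ipaddr 169.254.10.3 -ipaddr2 169.254.10.4 -netmask 255.255.0.0 -mtu 9000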
Step 10: Licenses
a. Base
b. NFS
c. CIFS
d. iSCSI
e. FCP
f. SnapMirror_DP
g. SnapRestore
h. Flexclone
Note: Licenses can be added from the cluster shell with system license add
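For example (the key is just a placeholder):

netapp::> system license add <license_key>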
Step 11: NTP
a. NTP is disabled by default, so the date, time, and time zone need to be set manually:
system date modify
b. Verify and monitor
system services ntp config show
system services ntp server show
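A sketch of the related commands (the server create and config modify forms are from memory of the 8.x command set, so verify them with the built-in help; node and server names are placeholders):

netapp::> system services ntp server create -node <node_name> -server <ntp_server>
netapp::> system services ntp config modify -enabled true
netapp::> system services ntp config show
netapp::> system services ntp server show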
Done!

inodes in NetApp


The inode count is essentially the number of files a volume can hold.

inodes are data structures that contain information about files in Unix-style file systems; they are created when the file system is created. Each file has an inode and is identified by an inode number (i-number) in the file system where it resides. inodes hold important information about files such as user and group ownership, access mode (read, write, execute permissions) and type.

A volume has a set number of inodes, which determines the maximum number of files it can hold.
The way to check inodes in 7-Mode is to use df -i:

filer>  df -i <vol_name>


ssh configuration for c-mode NetApp


If an SSH key pair already exists on the client, there is no need to create a new one; otherwise, generate one first (see the sketch below).
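Generating the key pair on the client host is standard OpenSSH; a minimal sketch (the user and host shown are just examples):

monitor@eeadmin:~$ ssh-keygen -t rsa

Accept the default location (~/.ssh/id_rsa) and optionally set a passphrase; the public key ends up in ~/.ssh/id_rsa.pub, which is what you paste in step 2 below.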
  1. Create the user with a public key authentication method.
    netapp::> security login create -username monitor -application ssh -authmethod publickey -profile admin
  2. Copy the public key contents of id_rsa.pub and place it between quotes in the security login publickey create command. (Take care not to add carriage returns or other data that modifies the key string; keep it on one line.)
    netapp::> security login publickey create -username monitor -index 1 -publickey "ssh-rsa
    AAAAB3NzaC1yc2EAAAABIwAAAQEA5s4vVbwEO1sOsq7r64V5KYBRXBDb2I5mtGmt0+3p1jjPJrXx4/
    IPHFLalXAQkG7LhV5Dyc5jyQiGKVawBYwxxSZ3GqXJNv1aORZHJEuCd0zvSTBGGZ09vra5uCfxkpz8nwaTeiAT232LS2lZ6RJ4dsCz+
    GAj2eidpPYMldi2z6RVoxpZ5Zq68MvNzz8b15BS9T7bvdHkC2OpXFXu2jndhgGxPHvfO2zGwgYv4wwv2nQw4tuqMp8e+
    z0YP73Jg0T3jV8NYraXO951Rr5/9ZT8KPUqLEgPZxiSNkLnPC5dnmfTyswlofPGud+qmciYYr+cUZIvcFaYRG+Z6DM/HInX7w==  monitor@eeadmin"
    Alternatively, you can use the load-from-uri function to bring the public key from another source.
    netapp::> security login publickey load-from-uri -username monitor -uri http://bjacobs-lnx/id_rsa.pub
  3. Verify creation.
    netapp::> security login publickey show -username monitor
    UserName: monitor Index: 1
    Public Key:
    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA5s4vVbwEO1sOsq7r64V5KYBRXBDb2I5mtGmt0+3p1jjPJrXx4/
    IPHFLalXAQkG7LhV5Dyc5jyQiGKVawBYwxxSZ3GqXJNv1aORZHJEuCd0zvSTBGGZ09vra5uCfxkpz8nwaTeiAT232LS2lZ6RJ4dsCz+
    GAj2eidpPYMldi2z6RVoxpZ5Zq68MvNzz8b15BS9T7bvdHkC2OpXFXu2jndhgGxPHvfO2zGwgYv4wwv2nQw4tuqMp8e+
    z0YP73Jg0T3jV8NYraXO951Rr5/9ZT8KPUqLEgPZxiSNkLnPC5dnmfTyswlofPGud+qmciYYr+cUZIvcFaYRG+Z6DM/HInX7w== monitor@eeadmin
    Fingerprint:
    fd:cf:9e:06:50:4d:8c:19:5a:c6:36:0f:0f:9b:ef:bb
    Bubblebabble fingerprint:
    xunep-misif-magug-maryp-hikig-hycun-hisob-mymim-riryv-ryvam-toxox
    Comment:
  4. Test access from the host.
    monitor@eeadmin:~$ ssh 10.61.64.150
    The authenticity of host '10.61.64.150 (10.61.64.150)' can't be established.
    DSA key fingerprint is d9:15:cf:4b:d1:7b:a9:67:4d:b0:a9:20:e4:fa:f4:69.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '10.61.64.150' (DSA) to the list of known hosts.
    netapp::>
