Friday 20 April 2018

NetApp Cluster interconnect error: csm.sessionFailed: Cluster interconnect session

If you see the following error message on a NetApp controller, it may be caused by a known bug.

 csm.sessionFailed: Cluster interconnect session (req=<node name> :dblade, rsp=<node name>:dblade, uniquifier=<ID>) failed with record state ACTIVE and error CSM_CONNABORTED.

The bug has been fixed in 8.3.2 and the 9.x releases. For details about the bug, refer to the BURT here.
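
If you want to rule out an actual cluster-network problem before attributing the messages to the bug, one generic sanity check (not from the original post) is the built-in cluster ping test, run from the advanced privilege level:

::> set advanced
::*> cluster ping-cluster -node <node name>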

Sunday 15 April 2018

NetApp SSH session timeout

When working with NetApp storage systems, it is best to configure an SSH session timeout so that idle SSH sessions to the controllers are closed automatically.

You can view and change the session timeout with the commands 'system timeout show' and 'system timeout modify <mins>'.
And if you are feeling lazy, you can set the session timeout to zero, which means idle sessions are never logged out.
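
For example, to check the current setting and then set a 10-minute timeout (the value here is just an illustration):

::> system timeout show
::> system timeout modify -timeout 10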

Thursday 12 April 2018

NetApp : how to list environment variables from cluster shell

Here are the commands that let you list the environment variables of a NetApp FAS controller (without having to reboot the node to the loader prompt and type printenv):

::> set diagnostic 
::*> debug kenv show -node <node name> 
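
When you are done, drop back down from the diagnostic privilege level:

::*> set admin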

Wednesday 11 April 2018

Accessing etc files of NetApp via HTTP (error 403 - auto indexing disabled)

If you are accessing files under /etc on the NetApp root volume over HTTP and you get 'error 403 - auto indexing disabled', you can enable auto indexing on the controller:

node> options httpd.autoindex.enable on
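
Running the same option without a value prints the current setting, so you can confirm the change took effect (this assumes a 7-Mode node shell, as in the command above):

node> options httpd.autoindex.enable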

NetApp Cluster HA is not working correctly | ALERT rdb.ha.mboxError: Bidirectional failover under the 'cluster HA' configuration is not currently functional due to problem with the on-disk mailboxes.

On one of my NetApp 8.3 clusters, the following errors were seen. Here are the steps that were taken to resolve the issue.

From Event logs 
ALERT         rdb.ha.mboxError: Bidirectional failover under the 'cluster HA' configuration is not currently functional due to problem with the on-disk mailboxes.

From command output
   ::*> cluster ha show

   High Availability Configured: true
   High Availability Backend Configured (MBX): false

   Warning: Cluster HA is not working correctly. Make sure that both nodes are healthy by using the "cluster show" command; then reconfigure cluster HA to correct the configuration. Check the output of "cluster ha show" following the reconfiguration to verify node
   health. If reconfiguring cluster HA does not resolve the issue, contact technical support for assistance.


First, verify that the RDB rings (VLDB and, especially, mgwd) are in sync; a quick check is shown below. Once the rings were verified, cluster HA was disabled and re-enabled.
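
You can list the status of each ring from the advanced privilege level; the command below is a generic check, not output captured from the affected cluster:

::*> cluster ring show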

::*> cluster ha modify -configured false

Warning: The High Availability (HA) configuration SFO mailbox data appears to be damaged or absent, preventing a normal exit from HA configuration.  In order to forcibly exit safely, it is required that all
         management services be online on both nodes. Please verify this before continuing. The system will exit HA without zeroing the mailbox data.
Do you want to continue? {y|n}: y

Notice: HA is disabled.

::*> cluster ha show
      High Availability Configured: false
      High Availability Backend Configured (MBX): false

Warning: Cluster HA has not been configured.  Cluster HA must be configured
         on a two-node cluster to ensure data access availability in the
         event of storage failover. Use the "cluster ha modify -configured
         true" command to configure cluster HA.

::*> cluster ha modify -configured true

Warning: High Availability (HA) configuration for cluster services requires that both SFO storage failover and SFO auto-giveback be enabled. These actions will be performed if necessary.
Do you want to continue? {y|n}: y

Notice: HA is configured in management.

::*> cluster ha show
      High Availability Configured: true
      High Availability Backend Configured (MBX): true --> YEAH!!!

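With HA reconfigured, it is also worth confirming that storage failover itself is healthy; this is a generic sanity check rather than part of the original fix:

::*> storage failover show
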
Tuesday 10 April 2018

CommVault VM backup job fails with Failed to Open Disk

If you have a backup copy job in CommVault that fails with the error "Failed to Open Disk", right-click the job and select View Logs.

Click 'View All' in the log file window and then search for 'Failed to Open Disk'.

In my case, the error occurred because there was an independent disk on the VM; independent disks are excluded from VMware snapshots, so snapshot-based backups cannot capture them. Click here for more information about independent disks.

If you would like the backup copy to complete with errors instead of failing the job, you can add the following additional setting on the media agent:
http://documentation.commvault.com/additionalsetting/details?name="IgnoreUnsupportedDisks"&id=1318


Updated: Dec-2020
Another reason a backup can run into 'Failed to open disk' is an uninitialized disk within the Windows VM. Check in the Disk Management UI whether any uninitialized disks are present; a command-line check is shown below.
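
If you prefer the command line, a quick way to spot uninitialized disks from an elevated PowerShell prompt inside the guest (assuming the built-in Storage module on Windows Server 2012 or later) is:

PS C:\> Get-Disk | Where-Object PartitionStyle -eq 'RAW'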

Commvault : DR backup to cloud fails to run

The Commvault DR backup to cloud (an option within the Control Panel of the Commvault console) was reporting failures. The CVCloudService.log repo...