Veritas NetBackup™ CloudPoint Install and Upgrade Guide
- Section I. CloudPoint installation and configuration
- Preparing for CloudPoint installation
- CloudPoint host sizing recommendations
- Deploying CloudPoint using container images
- Deploying CloudPoint extensions
- Installing the CloudPoint extension on AWS (EKS)
- CloudPoint cloud plug-ins
- CloudPoint storage array plug-ins
- NetApp plug-in configuration notes
- Nutanix Files plug-in configuration notes
- Dell EMC Unity array plug-in configuration notes
- FUJITSU AF/DX plug-in configuration notes
- NetApp NAS plug-in configuration notes
- Dell EMC PowerStore plug-in configuration notes
- Dell EMC PowerStore NAS plug-in configuration notes
- Dell EMC PowerFlex plug-in configuration notes
- Dell EMC XtremIO SAN plug-in configuration notes
- Pure Storage FlashArray plug-in configuration notes
- Pure Storage FlashBlade plug-in configuration notes
- IBM Storwize plug-in configuration notes
- HPE RMC plug-in configuration notes
- HPE XP plug-in configuration notes
- Hitachi plug-in configuration notes
- Hitachi (HDS VSP 5000) plug-in configuration notes
- InfiniBox plug-in configuration notes
- Dell EMC PowerScale (Isilon) plug-in configuration notes
- Dell EMC PowerMax and VMax plug-in configuration notes
- Qumulo plug-in configuration notes
- CloudPoint application agents and plug-ins
- Oracle plug-in configuration notes
- Additional steps required after a SQL Server snapshot restore
- Protecting assets with CloudPoint's agentless feature
- Volume Encryption in NetBackup CloudPoint
- CloudPoint security
- Section II. CloudPoint maintenance
- CloudPoint logging
- Upgrading CloudPoint
- Uninstalling CloudPoint
- Troubleshooting CloudPoint
Troubleshooting CloudPoint
Refer to the following troubleshooting scenarios:
CloudPoint agent fails to connect to the CloudPoint server if the agent host is restarted abruptly.
This issue may occur if the host where the CloudPoint agent is installed is shut down abruptly. Even after the host restarts successfully, the agent fails to establish a connection with the CloudPoint server and goes into an offline state.
The agent log file contains the following error:
Flexsnap-agent-onhost[4972] mainthread flexsnap.connectors.rabbitmq: error - channel 1 closed unexpectedly: (405) resource_locked - cannot obtain exclusive access to locked queue ' flexsnap-agent.a1f2ac945cd844e393c9876f347bd817' in vhost '/'
This issue occurs because the RabbitMQ connection between the agent and the CloudPoint server does not close even in case of an abrupt shutdown of the agent host. The CloudPoint server cannot detect the unavailability of the agent until the agent host misses the heartbeat poll. The RabbitMQ connection remains open until the next heartbeat cycle. If the agent host reboots before the next heartbeat poll is triggered, the agent tries to establish a new connection with the CloudPoint server. However, as the earlier RabbitMQ connection already exists, the new connection attempt fails with a resource locked error.
As a result of this connection failure, the agent goes offline and leads to a failure of all snapshot and restore operations performed on the host.
Workaround:
Restart the Veritas CloudPoint Agent service on the agent host.
On Linux hosts, run the following command:
# sudo systemctl restart flexsnap-agent.service
On Windows hosts:
Restart the Veritas CloudPoint™ Agent service from the Windows Services console.
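Alternatively, a quick sketch using PowerShell on the Windows host (this assumes the service display name is Veritas CloudPoint Agent as shown in the Services console; adjust it if the name differs in your installation):
# Restart the CloudPoint agent service by its display name (run from an elevated PowerShell prompt)
Restart-Service -DisplayName "Veritas CloudPoint Agent"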
CloudPoint agent registration on Windows hosts may time out or fail.
For protecting applications on Windows, you need to install and then register the CloudPoint agent on the Windows host. The agent registration may sometimes take longer than usual and may either time out or fail.
Workaround:
To resolve this issue, try the following steps:
Re-register the agent on the Windows host using a fresh token.
If the registration process fails again, restart the CloudPoint services on the CloudPoint server and then try registering the agent again.
Disaster recovery when DR package is lost or passphrase is lost.
This issue may occur if the DR package is lost or the passphrase is lost.
During a catalog backup, two backup packages are created:
DR package, which contains all the certificates
Catalog package, which contains the database
The DR package contains the NetBackup UUID certificates, and the catalog database also contains the UUID. When you perform disaster recovery using the DR package followed by catalog recovery, both the UUID certificate and the UUID are restored. This allows NetBackup to communicate with CloudPoint because the UUID is unchanged.
However, if the DR package or the passphrase is lost, the DR operation cannot be completed, and you can only recover the catalog (without the DR package) after you reinstall NetBackup. In this case, a new UUID is created for NetBackup, which CloudPoint does not recognize, and the one-to-one mapping between NetBackup and CloudPoint is lost.
Workaround:
To resolve this issue, you must update the new NetBackup UUID and version number after the NetBackup primary server is created.
The NetBackup administrator must be logged on to the NetBackup Web Management Service to perform this task. Use the following command to log on:
/usr/openv/netbackup/bin/bpnbat -login -loginType WEB
Execute the following command on the primary server to get the NBU UUID:
/usr/openv/netbackup/bin/admincmd/nbhostmgmt -list -host <primary server host name> | grep "Host ID"
Execute the following command to get the Version Number:
/usr/openv/netbackup/bin/admincmd/bpgetconfig -g <primary server host name> -L
After you get the NBU UUID and Version number, execute the following command on the CloudPoint host to update the mapping:
/cloudpoint/scripts/cp_update_nbuuid.sh -i <NBU UUID> -v <Version Number>
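For reference, a minimal end-to-end sketch of this workaround is shown below. The host name primary.example.com is an assumption for illustration; substitute your primary server host name and the values returned by the commands in your environment.
# On the NetBackup primary server: log on, then collect the host ID (NBU UUID) and version number
/usr/openv/netbackup/bin/bpnbat -login -loginType WEB
/usr/openv/netbackup/bin/admincmd/nbhostmgmt -list -host primary.example.com | grep "Host ID"
/usr/openv/netbackup/bin/admincmd/bpgetconfig -g primary.example.com -L
# On the CloudPoint host: update the mapping with the values collected above
/cloudpoint/scripts/cp_update_nbuuid.sh -i <NBU UUID> -v <Version Number>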
The snapshot job is successful but the backup job fails with the error "The CloudPoint server's certificate is not valid or doesn't exist.(9866)" when ECA_CRL_CHECK is disabled on the master server.
If ECA_CRL_CHECK is configured on the master server and is disabled, then it must also be configured with the same value in the bp.conf file on the CloudPoint setup. For example, consider a scenario of backup from snapshot where NetBackup is configured with an external certificate and the certificate is revoked. In this case, if ECA_CRL_CHECK is set to DISABLE on the master server, set the same value in the bp.conf file of the CloudPoint setup; otherwise, the snapshot operation succeeds but the backup operation fails with the certificate error.
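For reference, the corresponding entry in the bp.conf file on the CloudPoint setup would be a single line such as the following sketch (this assumes the master server has the option set to DISABLE; use whichever value is configured on the master server):
ECA_CRL_CHECK = DISABLE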
CloudPoint fails to establish connection using agentless to the Windows cloud instance
Error 1: <Instance_name>: network connection timed out.
Case 1: CloudPoint server log message:
WARNING - Cannot connect to the remote host. SMB Connection timeout <IP address> <user> … flexsnap.OperationFailed: Could not connect to the remote server <IP address>
Workaround
To resolve this issue, try the following steps:
Verify that SMB port 445 is added in the Network security group and is accessible from the CloudPoint server, for example using the connectivity check shown below.
Verify that SMB port 445 is allowed through the cloud instance firewall.
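A minimal connectivity check from the CloudPoint server is sketched below; it assumes bash is available on the CloudPoint host, and <instance-ip> is a placeholder for the instance address:
# Prints "open" if TCP port 445 on the instance is reachable from the CloudPoint server
timeout 5 bash -c '</dev/tcp/<instance-ip>/445' && echo open || echo blocked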
Case 2: CloudPoint Server log message:
WARNING - Cannot connect to the remote host. WMI Connection timeout <IP address> <user> … flexsnap.OperationFailed: Could not connect to the remote server <IP address>
Workaround:
To resolve this issue, try the following steps:
Verify that the DCOM port (135) is added in the Network security group and is accessible from the CloudPoint server.
Verify that port 135 is allowed through the cloud instance firewall.
Case 3: CloudPoint Server log message:
Exception while opening SMB connection, [Errno Connection error (<IP address>:445)] [Errno 113] No route to host.
Workaround: Verify that the cloud instance is up and running and is not in an inconsistent state.
Case 4: CloudPoint Server log message:
Error when closing dcom connection: 'Thread-xxxx'
Where, xxxx is the thread number.
Workaround:
To resolve this issue, try the following steps:
Verify that the WMI-IN dynamic port range, or the fixed port as configured, is added in the Network security group.
Verify that the WMI-IN port is enabled in the cloud instance firewall.
Error 2: <Instance_name>: Could not connect to the virtual machine.
CloudPoint server log message:
Error: Cannot connect to the remote host. <IP address> Access denied.
Workaround:
To resolve this issue, try the following steps:
Verify that the user has administrative rights.
Verify that UAC is disabled for the user.
CloudPoint cloud operations fail on a RHEL system if a firewall is disabled
The CloudPoint operations fail for all the supported cloud plug-ins on a RHEL system if a firewall is disabled on that system while the CloudPoint services are running. This is a network configuration issue that prevents CloudPoint from accessing the cloud provider REST API endpoints.
Workaround
Stop CloudPoint:
# docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /cloudpoint:/cloudpoint veritas/flexsnap-cloudpoint:<version> stop
Restart Docker:
# systemctl restart docker
Restart CloudPoint:
# docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /cloudpoint:/cloudpoint veritas/flexsnap-cloudpoint:<version> start
Backup from Snapshot and Indexing jobs fail with the following errors
Jun 10, 2021 2:17:48 PM - Error mqclient (pid=1054) SSL Connection failed with string, broker:<hostname>
Jun 10, 2021 2:17:48 PM - Error mqclient (pid=1054) Failed SSL handshake, broker:<hostname>
Jun 10, 2021 2:19:16 PM - Error nbcs (pid=29079) Invalid operation for asset: <asset_id>
Jun 10, 2021 2:19:16 PM - Error nbcs (pid=29079) Acknowledgement not received for datamover <datamover_id>
and/or
Jun 10, 2021 3:06:13 PM - Critical bpbrm (pid=32373) from client <asset_id>: FTL - Cannot retrieve the exported snapshot details for the disk with UUID:<disk_asset_id>
Jun 10, 2021 3:06:13 PM - Info bptm (pid=32582) waited for full buffer 1 times, delayed 220 times
Jun 10, 2021 3:06:13 PM - Critical bpbrm (pid=32373) from client <asset_id>: FTL - cleanup() failed, status 6
This can happen when inbound access to CloudPoint on ports 5671 and 443 is blocked at the OS firewall level (firewalld). As a result, communication to CloudPoint from the datamover container (used for the Backup from Snapshot and Indexing jobs) is blocked, and the datamover container cannot start the backup or indexing job.
Workaround
Modify the rules in the OS firewall to allow inbound connections on ports 5671 and 443, for example as shown below.
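A minimal sketch assuming firewalld is the active OS firewall (consistent with the firewall-cmd commands used elsewhere in this guide); add --permanent and reload if the rules must survive a reboot:
# firewall-cmd --add-port=5671/tcp
# firewall-cmd --add-port=443/tcp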
Agentless connection fails for a VM with an error message.
Agentless connection fails for a VM with the following error message when the user changes the authentication type from SSH key-based to password-based for a VM through the portal:
User does not have the required privileges to establish an agentless connection
This issue occurs when the entries in the sudoers file are incorrectly ordered for the user mentioned in the above error message.
Workaround:
Resolve the sudoers file issue for the user by providing the required permissions to perform passwordless sudo operations, for example as sketched below.
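As an illustration only, a passwordless sudo entry for the affected user could look like the following sketch; <user> is a placeholder, and the entry should be edited with visudo so that it is not overridden by later, more restrictive rules in the sudoers file:
<user> ALL=(ALL) NOPASSWD:ALL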
When CloudPoint is deployed in private subnet (without internet) CloudPoint function fails
This issue occurs when CloudPoint is deployed in a private network where a firewall is enabled or the public IP is disabled. The customer's information security team may not allow full internet access to the virtual machines.
Workaround
Enable the ports from the firewall command line using the following commands:
firewall-cmd --add-port=22/tcp
firewall-cmd --add-port=5671/tcp
firewall-cmd --add-port=443/tcp
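Note that firewall-cmd changes made without --permanent apply only to the runtime configuration and are lost on a firewall reload or host reboot. If the rules must persist, a sketch (assuming firewalld) is:
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=5671/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --reload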
Restoring asset from backup copy fails
In some scenarios, it is observed that the connection resets intermittently in the Docker container. Due to this, the server sends more TCP payload than the advertised client window. Sometimes the Docker container drops packets from a new TCP connection handshake. To allow these packets, use the nf_conntrack_tcp_be_liberal option.
If nf_conntrack_tcp_be_liberal = 1, then the following packets are allowed:
ACK is under the lower bound (possible overly delayed ACK)
ACK is over the upper bound (ACKed data not seen yet)
SEQ is under the lower bound (already ACKed data retransmitted)
SEQ is over the upper bound (over the window of the receiver)
If nf_conntrack_tcp_be_liberal = 0, then those packets are rejected as invalid.
Workaround
To resolve the issue of restore from backup copy, set nf_conntrack_tcp_be_liberal = 1 on the node where the datamover container is running. Use the following command to set the value of nf_conntrack_tcp_be_liberal:
sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=1
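The sysctl command above changes the running kernel only and does not persist across reboots. If the setting must persist, a minimal sketch is shown below; the file name 99-cloudpoint.conf is an illustrative assumption:
echo "net.netfilter.nf_conntrack_tcp_be_liberal = 1" >> /etc/sysctl.d/99-cloudpoint.conf
sysctl -p /etc/sysctl.d/99-cloudpoint.conf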
Some pods on Kubernetes extension progressed to completed state
Workaround
Disable Kubernetes extension.
Delete listener pod using the following command:
# kubectl delete pod flexsnap-listener-xxxxx -n <namespace>
Enable Kubernetes extension.
User is not able to customize a cloud protection plan
Workaround
Create a new protection plan with the desired configuration and assign it to the asset.
Podman container not starting or containers are not up after reboot
On the RHEL 8.x platform, when a container is restarted or the machine is rebooted, the container may display the following error message:
# podman restart flexsnap-coordinator
47ca97002e53de808cb8d0526ae033d4b317d5386ce085a8bce4cd434264afdf: "2022-02-05T04:53:42.265084989+00:00 Feb 05 04:53:42 flexsnap-coordinator flexsnap-coordinator[7] agent_container_health_check flexsnap.container_manager: INFO - Response: b'{"cause":"that name is already in use","message":"error creating container storage: the container name \"flexsnap-agent.15bd0aea11164f7ba29e944115001d69\" is already in use by \"30f031d586b1ab524511601aad521014380752fb127a9440de86a81b327b6777\". You have to remove that container to be able to reuse that name.: that name is already in use","response":500}\n'"
Workaround
Check if there is a file with an IP address entry mapping to the container that could not be started at the /var/lib/cni/networks/flexsnap-network/ file system location:
[ec2-user@ip-172-31-44-163 ~]$ ls -latr /var/lib/cni/networks/flexsnap-network/
total 16
-rwxr-x---. 1 root root 0 Jan 22 12:30 lock
drwxr-xr-x. 4 root root 44 Jan 22 12:30 ..
-rw-r--r--. 1 root root 70 Feb 4 14:47 10.89.0.150
-rw-r--r--. 1 root root 70 Feb 4 14:47 10.89.0.151
-rw-r--r--. 1 root root 70 Feb 4 14:47 10.89.0.152
-rw-r--r--. 1 root root 11 Feb 7 11:09 last_reserved_ip.0
drwxr-xr-x. 2 root root 101 Feb 7 11:13 .
[ec2-user@ip-172-31-44-163 ~]$
From the above directory, delete the duplicate IP address file and then perform the stop and start operations as follows:
Stop the container: # podman stop <container_name>
Start the container: # podman start <container_name>
After a start or stop of the CloudPoint services, the RabbitMQ and MongoDB containers are still in the starting state
It was observed that the flexsnap-mongodb and flexsnap-rabbitmq containers did not go into a healthy state. Below is the state of the flexsnap-mongodb container:
[ec2-user@ip-172-31-23-60 log]$ sudo podman container inspect --format='{{json .Config.Healthcheck}}' flexsnap-mongodb
{"Test":["CMD-SHELL","echo 'db.runCommand({ping: 1}).ok' | mongo --ssl --sslCAFile /cloudpoint/keys/cacert.pem --sslPEMKeyFile /cloudpoint/keys/mongodb.pem flexsnap-mongodb:27017/zenbrain --quiet"],"Interval":60,"Timeout":30000000000,"Retries":3}
[ec2-user@ip-172-31-23-60 log]$ sudo podman container inspect --format='{{json .State.Healthcheck}}' flexsnap-mongodb
{"Status":"starting","FailingStreak":0,"Log":null}
[ec2-user@ip-172-31-23-60 log]$
Workaround
Run the following podman commands:
[ec2-user@ip-172-31-23-60 log]$ sudo podman healthcheck run flexsnap-mongodb
[ec2-user@ip-172-31-23-60 log]$ sudo podman ps -a
CONTAINER ID  IMAGE                                               COMMAND  CREATED     STATUS                      PORTS                     NAMES
fe8cf001032b  localhost/veritas/flexsnap-fluentd:10.0.0.0.9817             2 days ago  Up 45 hours ago             0.0.0.0:24224->24224/tcp  flexsnap-fluentd
2c00500c1ac6  localhost/veritas/flexsnap-mongodb:10.0.0.0.9817             2 days ago  Up 45 hours ago (healthy)                             flexsnap-mongodb
7ab3e248024a  localhost/veritas/flexsnap-rabbitmq:10.0.0.0.9817            2 days ago  Up 45 hours ago (starting)                            flexsnap-rabbitmq
[ec2-user@ip-172-31-23-60 log]$ sudo podman healthcheck run flexsnap-rabbitmq
[ec2-user@ip-172-31-23-60 log]$ sudo podman ps -a
CONTAINER ID  IMAGE                                               COMMAND  CREATED     STATUS                      PORTS                     NAMES
fe8cf001032b  localhost/veritas/flexsnap-fluentd:10.0.0.0.9817             2 days ago  Up 45 hours ago             0.0.0.0:24224->24224/tcp  flexsnap-fluentd
2c00500c1ac6  localhost/veritas/flexsnap-mongodb:10.0.0.0.9817             2 days ago  Up 45 hours ago (healthy)                             flexsnap-mongodb
7ab3e248024a  localhost/veritas/flexsnap-rabbitmq:10.0.0.0.9817            2 days ago  Up 45 hours ago (healthy)                             flexsnap-rabbitmq
[ec2-user@ip-172-31-23-60 log]$ sudo podman container inspect --format='{{json .State.Healthcheck}}' flexsnap-mongodb
{"Status":"healthy","FailingStreak":0,"Log":[{"Start":"2022-02-14T07:32:13.051150432Z","End":"2022-02-14T07:32:13.444636429Z","ExitCode":0,"Output":""}]}
[ec2-user@ip-172-31-23-60 log]$ sudo podman container inspect --format='{{json .State.Healthcheck}}' flexsnap-rabbitmq
{"Status":"healthy","FailingStreak":0,"Log":[{"Start":"2022-02-14T07:32:46.537804403Z","End":"2022-02-14T07:32:47.293695744Z","ExitCode":0,"Output":""}]}
[ec2-user@ip-172-31-23-60 log]$
Certificate generation may fail while registering CloudPoint with NetBackup
Starting with CloudPoint release 9.1.2, NetBackup certificate generation happens synchronously with registration in the CloudPoint register API. Therefore, any failure in certificate generation causes the registration of CloudPoint with NetBackup to fail, that is, adding or editing the CloudPoint server entry from the Web UI fails. These certificates are used by the datamover that is launched for operations such as backup from snapshot, restore from backup, and indexing (VxMS based), so if certificate generation fails these jobs cannot be performed. For example, when CloudPoint on cloud VMs cannot connect to NetBackup on lab VMs, certificate generation and hence registration fails, and CloudPoint cannot be added to NetBackup.
Workaround
To add CloudPoint in such a scenario, skip certificate generation on CloudPoint by adding the following entry to the /cloudpoint/flexsnap.conf file:
[client_registration]
skip_certificate_generation = yes
The default timeout of 6 hours does not allow the restore of larger databases (size more than 300 GB)
Workaround
A configurable timeout parameter value can be set to restore larger databases. The timeout value can be specified in the /etc/flexsnap.conf file of the flexsnap-coordinator container. It does not require a restart of the coordinator container; the timeout value is picked up by the next database restore job. The user must specify the timeout value in seconds. For example, the following sets a timeout of 11 hours (39600 seconds):
docker exec -it flexsnap-coordinator bash
root@flexsnap-coordinator:/# cat /etc/flexsnap.conf
[global]
target = flexsnap-rabbitmq
grt_timeout = 39600
Plug-in information is duplicated if CloudPoint registration has failed in previous attempts
This occurs only when CloudPoint has been deployed using the MarketPlace Deployment Mechanism. This issue is observed when the plug-in information is added before the registration. This issue creates duplicate plug-in information in the plug-in configuration file.
Workaround
Manually delete the duplicated plug-in information from the file. For example, in the following file content the GCP plug-in configuration appears twice under CPServer2; the second occurrence is the duplicate that must be removed:
[
  {
    "CPServer1": [
      {
        "Plugin_ID": "test",
        "Plugin_Type": "aws",
        "Config_ID": "aws.8dda1bf5-5ead-4d05-912a-71bdc13f55c4",
        "Plugin_Category": "Cloud",
        "Disabled": false
      }
    ]
  },
  {
    "CPServer2": [
      {
        "Plugin_ID": "gcp.2080179d-c149-498a-bf1f-4c9d9a76d4dd",
        "Plugin_Type": "gcp",
        "Config_ID": "gcp.2080179d-c149-498a-bf1f-4c9d9a76d4dd",
        "Plugin_Category": "Cloud",
        "Disabled": false
      },
      {
        "Plugin_ID": "gcp.2080179d-c149-498a-bf1f-4c9d9a76d4dd",
        "Plugin_Type": "gcp",
        "Config_ID": "gcp.2080179d-c149-498a-bf1f-4c9d9a76d4dd",
        "Plugin_Category": "Cloud",
        "Disabled": false
      }
    ]
  }
]