NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Configurations
- Prerequisites
- Recommendations and Limitations
- Configurations
- Configuration of key parameters in Cloud Scale deployments
- Section II. Deployment
- Section III. Monitoring and Management
- Monitoring NetBackup
- Monitoring Snapshot Manager
- Monitoring MSDP Scaleout
- Managing NetBackup
- Managing the Load Balancer service
- Managing PostgreSQL DBaaS
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- Section IV. Maintenance
- MSDP Scaleout Maintenance
- PostgreSQL DBaaS Maintenance
- Patching mechanism for Primary and Media servers
- Upgrading
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- View the list of operator resources
- View the list of product resources
- View operator logs
- View primary logs
- Socket connection failure
- Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
- Resolving the issue where the NetBackup server pod is not scheduled for long time
- Resolving an issue where the Storage class does not exist
- Resolving an issue where the primary server or media server deployment does not proceed
- Resolving an issue of failed probes
- Resolving token issues
- Resolving an issue related to insufficient storage
- Resolving an issue related to invalid nodepool
- Resolving a token expiry issue
- Resolve an issue related to KMS database
- Resolve an issue related to pulling an image from the container registry
- Resolving an issue related to recovery of data
- Check primary server status
- Pod status field shows as pending
- Ensure that the container is running the patched image
- Getting EEB information from an image, a running container, or persistent data
- Resolving the certificate error issue in NetBackup operator pod logs
- Pod restart failure due to liveness probe time-out
- NetBackup messaging queue broker takes more time to start
- Host mapping conflict in NetBackup
- Issue with capacity licensing reporting which takes longer time
- Local connection is getting treated as insecure connection
- Primary pod is in pending state for a long duration
- Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
- Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
- Taint, Toleration, and Node affinity related issues in cpServer
- Operations performed on cpServer in environment.yaml file are not reflected
- Elastic media server related issues
- Failed to register Snapshot Manager with NetBackup
- Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackoff state or pods were unable to connect to flexsnap-rabbitmq
- Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
Upgrading Snapshot Manager
Ensure that all the steps mentioned in the following section are performed before upgrading the Snapshot Manager operator:
Preparing for NetBackup upgrade
Upgrading the Snapshot Manager operator
- Push the new operator image and the Snapshot Manager main image to the container registry with different tags.
- Update the new image name and tag in the images.cloudpointoperator section of the kustomization.yaml file in the operator folder available in the new package folder (see the sketch below).
- Update the node selector and tolerations in the operator_patch.yaml file in the operator/patches folder in the new package folder.
- To upgrade the operator, apply the new image changes using the following command:
kubectl apply -k <operator folder name>
After applying the changes, a new Snapshot Manager operator pod starts in the operator namespace and runs successfully.
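The exact contents of these files depend on the release package, but as a rough sketch (the registry path, image name, tag, node label, and taint values below are placeholders, not values from this guide, and the patch structure is an assumption), the images.cloudpointoperator entry in operator/kustomization.yaml and the node selector and tolerations in operator/patches/operator_patch.yaml might look similar to the following:
# operator/kustomization.yaml (sketch; only the images entry relevant to this step is shown)
images:
  - name: cloudpointoperator
    newName: <registry>/<snapshot-manager-operator-image>   # placeholder repository path
    newTag: "<new-snapshot-manager-version>"                # placeholder tag
# operator/patches/operator_patch.yaml (sketch; Deployment patch layout is assumed)
spec:
  template:
    spec:
      nodeSelector:
        agentpool: <nodepool-name>        # placeholder node label
      tolerations:
        - key: <taint-key>                # placeholder taint key
          operator: "Equal"
          value: <taint-value>
          effect: "NoSchedule"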
To upgrade Snapshot Manager using the environment CR, edit the corresponding field in the CR. A MODIFY event is then sent to the Snapshot Manager operator, which triggers the upgrade workflow.
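For illustration only, the cpServer portion of the environment CR might resemble the following sketch; the field layout varies by release, the name is a placeholder, and the tag field is the value updated in the procedure below:
cpServer:
  - name: <cpserver-name>       # placeholder name
    tag: <new-version-tag>      # editing this field sends the MODIFY event to the Snapshot Manager operator
    # the cpServer.credential section (not shown) is deleted before applying, as described in the procedure below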
To upgrade Snapshot Manager
- Update the variables appropriately:
NB_VERSION=10.4.0
OPERATOR_NAMESPACE="netbackup-operator-system"
ENVIRONMENT_NAMESPACE="ns-155"
NB_DIR=/home/azureuser/VRTSk8s-netbackup-${NB_VERSION}/
- Edit the operator/kustomization.yaml file as follows:
KUSTOMIZE_FILE=${NB_DIR}operator/kustomization.yaml
nano $KUSTOMIZE_FILE
Update the newName and newTag under cloudpointoperator.
- Upgrade the operator using the following command:
cd $NB_DIR
kubectl apply -k operator
sleep 20s
- Wait for the operator to be upgraded and verify that it is running:
kubectl describe pod $(kubectl get pods -n $OPERATOR_NAMESPACE | grep flexsnap-operator | awk '{printf $1" " }') | grep Image:
kubectl get all -n $OPERATOR_NAMESPACE
- Once the operator is upgraded successfully and is running, update the cpServer.tag in the environment.yaml file as follows:
nano ${NB_DIR}/environment.yaml
- Delete the cpServer.credential section from the environment.yaml file.
- Apply the environment.yaml file to start upgrading the Snapshot Manager services using the following command:
kubectl apply -f ${NB_DIR}/environment.yaml -n $ENVIRONMENT_NAMESPACE
- Check upgrade logs in flexsnap-operator using the following command:
kubectl logs -f $(kubectl get pods -n $OPERATOR_NAMESPACE | grep flexsnap-operator | awk '{printf $1" " }')
- Check Snapshot Manager status using the following command:
kubectl get cpserver -n $ENVIRONMENT_NAMESPACE
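As an optional follow-up check, assuming the Snapshot Manager service pods use the flexsnap- prefix seen elsewhere in this guide, a sketch like the following can be used to confirm that the services are running the new image tag:
# List the Snapshot Manager service pods in the environment namespace
kubectl get pods -n $ENVIRONMENT_NAMESPACE | grep flexsnap
# Show the image used by the first container of each pod
kubectl get pods -n $ENVIRONMENT_NAMESPACE -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}' | grep flexsnap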