NetBackup™ Status Codes Reference Guide
- NetBackup status codes
- NetBackup KMS status codes
- Media Manager status codes
- Device configuration status codes
- Device management status codes
- Robotic status codes
- Robotic error codes
- Security services status codes
- NetBackup alert notification status codes
NetBackup status code: 8453
Explanation: When the storage class name is changed in the environment.yaml file, the migration job runs after the configChecker and before the primary server deployment. This migration job has not been created in the cluster.
Recommended Action: Perform the following as appropriate:
Review the NetBackup operator logs for details using the following command:
kubectl logs <netbackup-operator-pod-name> netbackup-operator -n <netbackup-operator-namespace>
Verify that the RBAC permissions for the job are correct. Refer to the NetBackup Deployment for Azure Kubernetes Cluster (AKS) Administrator's Guide.
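One way to spot-check the RBAC permissions is with kubectl auth can-i. This is a sketch only; the service account name is an assumption for illustration and must be replaced with the one used by your deployment:

```shell
# Check whether the operator's service account may create jobs and patch
# PVCs, as the migration job requires. The service account and namespace
# placeholders below are assumptions -- substitute your deployment's values.
kubectl auth can-i create jobs \
  --as=system:serviceaccount:<netbackup-operator-namespace>:<operator-service-account> \
  -n <netbackup-environment-namespace>
kubectl auth can-i patch persistentvolumeclaims \
  --as=system:serviceaccount:<netbackup-operator-namespace>:<operator-service-account> \
  -n <netbackup-environment-namespace>
```

Each command prints yes or no; a no indicates the RBAC role or binding described in the Administrator's Guide is missing or incomplete.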
If the issue is with the primary server CR, perform the following:
Delete the environment CR using the command: kubectl delete -f <environment.yaml>
Redeploy the environment using the command: kubectl apply -f <environment.yaml>
If this issue occurs during data migration, perform the following:
Check the migration pod logs for details using the following command: kubectl logs <catalog-or-log-migration-job-name> -n <netbackup-environment-namespace>
If the NetBackup operator pod log contains one of the following messages:
Error while getting PVC for renaming.
Error while deleting old PVC.
Error while patching old PVC.
Error while renaming logs PVC.
Error while renaming catalog PVC.
Perform the following steps:
Manually copy the required files, or skip them and continue with the next steps.
Save the PVC's volume name and storage class as follows:
kubectl describe pvc <azure-disk-or-files-pvc-name> -n <netbackup-environment-namespace>
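Rather than copying the values out of the describe output by hand, the volume name and storage class can be captured directly with jsonpath. This is a convenience sketch, not part of the documented procedure; the placeholders come from the step above:

```shell
# Capture the bound volume name and storage class from the PVC spec.
SAVED_VOLUME_NAME=$(kubectl get pvc <azure-disk-or-files-pvc-name> \
  -n <netbackup-environment-namespace> -o jsonpath='{.spec.volumeName}')
SAVED_STORAGE_CLASS=$(kubectl get pvc <azure-disk-or-files-pvc-name> \
  -n <netbackup-environment-namespace> -o jsonpath='{.spec.storageClassName}')
echo "volume: $SAVED_VOLUME_NAME  storage class: $SAVED_STORAGE_CLASS"
```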
Delete the old Azure disk or files PVC, and rename the new Azure files PVC to the old Azure disk or files PVC name as follows:
kubectl delete pvc <azure-disk-or-files-pvc-name> -n <netbackup-environment-namespace>
kubectl patch pv <saved_volume_name> --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
Ensure that the PV is available after completing the previous steps.
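Whether the patch released the volume can be confirmed by checking its phase; once the claimRef is removed, the PV should report Available:

```shell
# The STATUS column should read "Available" after the claimRef is removed.
kubectl get pv <saved_volume_name>
# Or query the phase field directly:
kubectl get pv <saved_volume_name> -o jsonpath='{.status.phase}'
```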
Create a new Azure files PVC with the old PV as follows:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <old_pvc_name>
  namespace: <old_pvc_namespace>
spec:
  accessModes:
    - ReadWriteMany
  volumeName: <saved_volume_name>
  storageClassName: <saved_storage_class_name>
  resources:
    requests:
      storage: 100Gi # previous files size
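Saving the manifest above to a file (the file name pvc.yaml below is arbitrary) and applying it should rebind the claim to the retained volume:

```shell
# Apply the new PVC manifest and confirm it binds to the old PV.
kubectl apply -f pvc.yaml
kubectl get pvc <old_pvc_name> -n <netbackup-environment-namespace>
# STATUS should be "Bound" and VOLUME should match <saved_volume_name>.
```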
Enable the probes using the following command: /opt/veritas/vxapp-manage/nbu-health enable
Set the replica count to 1 using the following command, or reapply the environment.yaml file:
kubectl scale --replicas=1 <STS name> -n <netbackup-environment-namespace>
For technical notes and other information about this status code, see the Veritas Technical Support website.