A controller fails catastrophically. It turns out no one knows where the last backup is, or the only copy is two years old. The system is down for days while a programmer recreates everything from memory.
Solution
1. Establish a Backup Schedule
Daily: Automated backup of supervisor/central database to local storage
Weekly: Backup to external USB drive or cloud storage
Monthly: Archive one full backup to an off-site location (fireproof safe or cloud)
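The three-tier schedule above can be sketched as a small helper that decides which tiers a given backup date belongs to. This is an illustrative Python sketch, not vendor tooling; the function name and the Sunday/1st-of-month conventions are assumptions you would adapt to your own rotation.

```python
from datetime import date

# Hypothetical helper: classify which retention tiers a backup taken on
# date `d` belongs to, per the daily/weekly/monthly schedule above.
def backup_tiers(d: date) -> set[str]:
    tiers = {"daily"}          # every backup is at least a daily backup
    if d.weekday() == 6:       # Sunday -> also copy to USB/cloud (weekly)
        tiers.add("weekly")
    if d.day == 1:             # 1st of the month -> also archive off-site
        tiers.add("monthly")
    return tiers
```

A scheduler (cron, Windows Task Scheduler, or a provisioning job) would call this once per run to decide where copies go.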
2. Backup Critical Components
Supervisor database (Niagara, Metasys, etc.)
Each JACE or controller program (config.bog files)
Graphics and UI configurations
User accounts and permissions database
Network configuration files
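The component checklist above can be driven from a single manifest so nothing gets forgotten. A minimal Python sketch, assuming hypothetical source paths (substitute your site's actual supervisor, station, and config locations):

```python
import shutil
from datetime import date
from pathlib import Path

# Hypothetical paths -- replace with your site's actual locations.
CRITICAL = {
    "supervisor_db": Path("/opt/supervisor/db"),
    "jace_configs":  Path("/opt/stations"),        # config.bog files
    "graphics":      Path("/opt/supervisor/px"),
    "users":         Path("/opt/supervisor/users"),
    "network":       Path("/etc/bas/network"),
}

def backup_all(dest_root: Path) -> Path:
    """Copy every critical component into a dated backup folder."""
    dest = dest_root / date.today().isoformat()
    for name, src in CRITICAL.items():
        if src.exists():  # skip components not present on this machine
            shutil.copytree(src, dest / name, dirs_exist_ok=True)
    return dest
```

Keeping the manifest in one place also gives you a checklist to review whenever a new component (a new JACE, a new graphics folder) is added to the system.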
3. Automate Backups
Set up scheduled jobs to run at off-peak hours (2 AM, for example)
Niagara: Use built-in provisioning jobs to back up JACE stations to the Supervisor
Metasys: Use Database Manager to schedule backups
Cloud: Use automated sync tools (OneDrive, Google Drive, AWS S3) for off-site copies
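If you roll your own scheduler instead of cron or a vendor provisioning job, the "run at 2 AM" logic comes down to computing the delay until the next off-peak window. A small Python sketch (the function name is an assumption, not a library API):

```python
from datetime import datetime, timedelta

def seconds_until_next_run(now: datetime, hour: int = 2) -> float:
    """Seconds from `now` until the next occurrence of `hour`:00 (2 AM by default)."""
    nxt = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if nxt <= now:              # already past today's window -> schedule tomorrow
        nxt += timedelta(days=1)
    return (nxt - now).total_seconds()
```

A worker would sleep for this many seconds, run the backup, then recompute; in practice, prefer the OS scheduler so the job survives reboots.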
4. Test Backups Regularly (Monthly)
Don't assume backups work; test them
Procedure:
Restore backup to a test environment (VM or spare hardware)
Verify data integrity (can you read all points, histories, configurations?)
Document any issues
Too many organizations discover backups are corrupt only when they need them
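Part of that monthly test can be automated: hash each backed-up file at backup time, store the manifest, and verify the hashes before you ever need a restore. A generic Python sketch (not a vendor feature; the `verify` helper and manifest format are assumptions):

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Incremental SHA-256 so large database files don't load into memory at once."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(backup_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Return relative paths whose file is missing or whose hash mismatches."""
    bad = []
    for rel, expected in manifest.items():
        p = backup_dir / rel
        if not p.exists() or sha256(p) != expected:
            bad.append(rel)
    return bad
```

A clean checksum pass is not a full restore test (it proves the bytes are intact, not that the station actually boots), so it supplements the restore-to-a-VM procedure rather than replacing it.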
5. Version Control
Keep at least 4 weeks of daily backups (so you can still recover from a point 3+ weeks back if corruption creeps in slowly and goes unnoticed)
Tag important versions (post-upgrade, post-major-config-change) so you can find them later
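The retention rule above (keep 28 days, never prune tagged versions) can be expressed as a short pruning function. A Python sketch under the assumption that backups are folders named by ISO date and tags are stored alongside them:

```python
from datetime import date, timedelta

def prune(backups: dict[str, set[str]], today: date, keep_days: int = 28) -> list[str]:
    """Return backup names (ISO dates) safe to delete: older than the
    retention window AND carrying no tags like 'post-upgrade'."""
    cutoff = today - timedelta(days=keep_days)
    return sorted(
        name for name, tags in backups.items()
        if date.fromisoformat(name) < cutoff and not tags
    )
```

Returning the deletion list (rather than deleting in place) lets you log or review what the pruner would remove before it actually does.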
6. Document Recovery Procedure
Write down the exact steps to restore from backup
Include hardware requirements, software versions, credentials needed
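A skeleton for such a runbook might look like the following (headings are illustrative; fill in your site's specifics, and keep credentials in a password manager rather than in the document itself):

```
BAS RECOVERY RUNBOOK -- <site name>

1. Scope          What this restores (supervisor, JACE stations, graphics)
2. Hardware       Spare controller model(s), supervisor server/VM spec
3. Software       Exact platform versions and license files required
4. Backup source  Where the latest verified backup lives (local / USB / off-site)
5. Credentials    Where restore credentials are stored (reference, not the secrets)
6. Steps          Numbered restore procedure, one action per line
7. Verification   How to confirm points, histories, and graphics are intact
8. Contacts       Who to call (integrator, IT, vendor support) and when
```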