Database Maintenance with Cron: Complete Automation Guide
Master database automation with cron jobs. This comprehensive guide covers automated backups, cleanup, optimization, monitoring, and disaster recovery for MySQL, PostgreSQL, MongoDB, and more.
What You'll Learn
- Automated backup strategies for all major databases
- Database cleanup and optimization automation
- Performance monitoring and alerting
- Log management and rotation
- Disaster recovery automation
- Cloud database integration (AWS RDS, Azure, GCP)
PostgreSQL Automation
Automated PostgreSQL Backups
# PostgreSQL backup cron schedule
# Full backup daily at 2 AM
0 2 * * * /opt/scripts/pg_backup.sh full
# Incremental backup every 6 hours
0 */6 * * * /opt/scripts/pg_backup.sh incremental
# Ensure continuous WAL streaming is running (checked every 15 minutes)
*/15 * * * * /opt/scripts/pg_backup.sh wal
# Backup verification daily at 3 AM
0 3 * * * /opt/scripts/verify_pg_backup.sh
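The verification job referenced above, verify_pg_backup.sh, is not shown in full elsewhere in this guide. As a minimal sketch, it could re-read the newest dump's table of contents (the backup directory matches the main script below; everything else is an assumption):
#!/bin/bash
# /opt/scripts/verify_pg_backup.sh (sketch)
set -euo pipefail
BACKUP_DIR="/var/backups/postgresql"
# Pick the newest custom-format dump; "|| true" keeps set -e from aborting
# when no backups exist yet
LATEST=$(ls -t "$BACKUP_DIR"/full_backup_*.dump 2>/dev/null | head -n 1 || true)
if [ -z "$LATEST" ]; then
echo "No backups found in $BACKUP_DIR" >&2
exit 1
fi
# --list reads the archive's table of contents without restoring anything
if pg_restore --list "$LATEST" > /dev/null; then
echo "Backup OK: $LATEST"
else
echo "Backup verification FAILED: $LATEST" >&2
exit 1
fi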
Comprehensive PostgreSQL Backup Script
#!/bin/bash
# /opt/scripts/pg_backup.sh
set -e
BACKUP_TYPE=$1
PGHOST=${PGHOST:-localhost}
PGPORT=${PGPORT:-5432}
PGUSER=${PGUSER:-postgres}
PGDATABASE=${PGDATABASE:-myapp}
BACKUP_DIR="/var/backups/postgresql"
S3_BUCKET=${S3_BUCKET:-""}
RETENTION_DAYS=30
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/pg_backup.log"
log() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a $LOG_FILE
}
# Create backup directory
mkdir -p "$BACKUP_DIR"
case $BACKUP_TYPE in
"full")
log "Starting full PostgreSQL backup"
# Create compressed backup
# With set -e active, test the command directly so a failure reaches the else branch
if pg_dump -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" -d "$PGDATABASE" \
--verbose --clean --if-exists --create \
--format=custom --compress=9 \
--file="$BACKUP_DIR/full_backup_$TIMESTAMP.dump"; then
log "✅ Full backup completed: full_backup_$TIMESTAMP.dump"
# Upload to S3 if configured
if [ -n "$S3_BUCKET" ]; then
aws s3 cp "$BACKUP_DIR/full_backup_$TIMESTAMP.dump" \
"s3://$S3_BUCKET/postgresql/full/" \
--storage-class STANDARD_IA
log "๐ค Backup uploaded to S3"
fi
# Test backup integrity
if pg_restore --list "$BACKUP_DIR/full_backup_$TIMESTAMP.dump" > /dev/null; then
log "✅ Backup integrity verified"
else
log "❌ Backup integrity check failed"
exit 1
fi
else
log "โ Full backup failed"
exit 1
fi
;;
"incremental")
log "Starting incremental backup (WAL archive)"
# Archive WAL files
WAL_DIR="/var/lib/postgresql/data/pg_wal"
ARCHIVE_DIR="$BACKUP_DIR/wal_archive/$TIMESTAMP"
mkdir -p $ARCHIVE_DIR
# Seed the marker file on first run so -newer has a reference point
[ -f "$BACKUP_DIR/last_wal_backup" ] || touch -d "1970-01-01" "$BACKUP_DIR/last_wal_backup"
# Copy completed WAL files that haven't been archived yet (skip .partial files)
find "$WAL_DIR" -name "*.partial" -prune -o -type f \
-newer "$BACKUP_DIR/last_wal_backup" -print0 | \
xargs -0 -r -I {} cp {} "$ARCHIVE_DIR/"
# Update timestamp file
touch "$BACKUP_DIR/last_wal_backup"
# Compress archive
tar -czf "$BACKUP_DIR/wal_archive_$TIMESTAMP.tar.gz" -C $ARCHIVE_DIR .
rm -rf $ARCHIVE_DIR
# Upload to S3
if [ -n "$S3_BUCKET" ]; then
aws s3 cp "$BACKUP_DIR/wal_archive_$TIMESTAMP.tar.gz" \
"s3://$S3_BUCKET/postgresql/wal/"
fi
log "โ
WAL archive backup completed"
;;
"wal")
# Continuous WAL archiving
pg_receivewal -h $PGHOST -p $PGPORT -U $PGUSER \
-D "$BACKUP_DIR/wal_streaming" --synchronous
;;
esac
# Cleanup old backups
find "$BACKUP_DIR" -name "*.dump" -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +$RETENTION_DAYS -delete
log "🧹 Cleanup completed - removed backups older than $RETENTION_DAYS days"
PostgreSQL Maintenance Tasks
# PostgreSQL maintenance schedule
# VACUUM and ANALYZE weekly on Sunday at 3 AM
0 3 * * 0 /opt/scripts/pg_maintenance.sh vacuum
# REINDEX monthly on the first Sunday at 4 AM
# (cron ORs day-of-month and day-of-week when both are restricted, so "1-7 * 0"
# would also fire every Sunday; test the weekday in the command instead - note
# that % must be escaped inside crontab entries)
0 4 1-7 * * [ "$(date +\%u)" -eq 7 ] && /opt/scripts/pg_maintenance.sh reindex
# Update statistics daily at 1 AM
0 1 * * * /opt/scripts/pg_maintenance.sh analyze
# Check for bloated tables weekly
0 5 * * 0 /opt/scripts/pg_maintenance.sh bloat_check
PostgreSQL Maintenance Script
#!/bin/bash
# /opt/scripts/pg_maintenance.sh
MAINTENANCE_TYPE=$1
PGHOST=${PGHOST:-localhost}
PGPORT=${PGPORT:-5432}
PGUSER=${PGUSER:-postgres}
PGDATABASE=${PGDATABASE:-myapp}
LOG_FILE="/var/log/pg_maintenance.log"
log() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a $LOG_FILE
}
case $MAINTENANCE_TYPE in
"vacuum")
log "Starting VACUUM ANALYZE"
# Get list of tables to vacuum
TABLES=$(psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDATABASE -t -c \
"SELECT schemaname||'.'||tablename FROM pg_tables WHERE schemaname = 'public';")
for table in $TABLES; do
log "Vacuuming table: $table"
psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDATABASE -c \
"VACUUM ANALYZE $table;" >> $LOG_FILE 2>&1
done
log "โ
VACUUM ANALYZE completed"
;;
"reindex")
log "Starting REINDEX"
# Reindex all indexes (REINDEX DATABASE takes heavy locks; on PostgreSQL 12+,
# REINDEX DATABASE ... CONCURRENTLY avoids blocking writes at the cost of a slower rebuild)
psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDATABASE -c \
"REINDEX DATABASE $PGDATABASE;" >> $LOG_FILE 2>&1
log "โ
REINDEX completed"
;;
"analyze")
log "Updating table statistics"
psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDATABASE -c \
"ANALYZE;" >> $LOG_FILE 2>&1
log "โ
Statistics update completed"
;;
"bloat_check")
log "Checking for table bloat"
# Flag tables whose dead-to-live tuple ratio exceeds 50%
# (::numeric is required because ROUND(double precision, int) does not exist)
BLOATED_TABLES=$(psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDATABASE -t -c \
"SELECT pt.schemaname||'.'||pt.tablename,
pg_size_pretty(pg_total_relation_size(pt.schemaname||'.'||pt.tablename)) AS size,
ROUND((CASE WHEN pg_stat_get_live_tuples(c.oid) > 0
THEN pg_stat_get_dead_tuples(c.oid)::numeric / pg_stat_get_live_tuples(c.oid)
ELSE 0 END) * 100, 2) AS dead_tuple_ratio
FROM pg_tables pt
JOIN pg_namespace n ON n.nspname = pt.schemaname
JOIN pg_class c ON c.relname = pt.tablename AND c.relnamespace = n.oid
WHERE pt.schemaname = 'public'
AND (CASE WHEN pg_stat_get_live_tuples(c.oid) > 0
THEN pg_stat_get_dead_tuples(c.oid)::numeric / pg_stat_get_live_tuples(c.oid)
ELSE 0 END) > 0.5;")
if [ -n "$BLOATED_TABLES" ]; then
log "โ ๏ธ Bloated tables found:"
echo "$BLOATED_TABLES" | tee -a $LOG_FILE
# Send alert
echo "Bloated PostgreSQL tables detected on $(hostname)" | \
mail -s "Database Bloat Alert" dba@company.com
else
log "โ
No significant table bloat detected"
fi
;;
esac
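Before scheduling these jobs, a quick manual smoke test is worthwhile (assuming the scripts run as the postgres OS user):
sudo -u postgres /opt/scripts/pg_maintenance.sh vacuum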
MySQL Automation
MySQL Backup Automation
# MySQL backup schedule
# Full backup daily at 2 AM
0 2 * * * /opt/scripts/mysql_backup.sh full
# Incremental backup every 4 hours
0 */4 * * * /opt/scripts/mysql_backup.sh incremental
# Binary log backup every 30 minutes
*/30 * * * * /opt/scripts/mysql_backup.sh binlog
# Backup verification weekly
0 3 * * 0 /opt/scripts/verify_mysql_backup.sh
MySQL Backup Script
#!/bin/bash
# /opt/scripts/mysql_backup.sh
# ... (MySQL backup script logic) ...
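The full script is elided above. As a minimal sketch (not the exact script), a mysqldump-based full backup could look like the following; the backup directory and the defaults file holding credentials are assumptions:
#!/bin/bash
# Minimal mysqldump full-backup sketch - adjust paths and credentials
set -euo pipefail
BACKUP_DIR="/var/backups/mysql"          # assumed location
DEFAULTS_FILE="/etc/mysql/backup.cnf"    # assumed file with [client] user/password
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30
mkdir -p "$BACKUP_DIR"
# --single-transaction gives a consistent InnoDB snapshot without locking;
# to record the binlog position for point-in-time recovery, add
# --source-data=2 (MySQL 8.0.26+) or --master-data=2 on older versions
mysqldump --defaults-extra-file="$DEFAULTS_FILE" \
--single-transaction --routines --triggers --events \
--all-databases | gzip > "$BACKUP_DIR/full_backup_$TIMESTAMP.sql.gz"
# Remove backups older than the retention window
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete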
MySQL Optimization Tasks
# MySQL optimization schedule
# Optimize tables weekly on Saturday at 4 AM
0 4 * * 6 /opt/scripts/mysql_optimize.sh
# Analyze tables daily at 1 AM
0 1 * * * /opt/scripts/mysql_analyze.sh
# Check for fragmented tables weekly
0 5 * * 6 /opt/scripts/mysql_check_fragmentation.sh
# Purge binary logs daily
0 0 * * * /opt/scripts/mysql_purge_binlogs.sh
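mysql_purge_binlogs.sh is referenced above but not shown. A minimal sketch, assuming a 7-day window is safely behind all replicas (verify this before adopting it):
#!/bin/bash
# Purge binary logs older than 7 days - confirm replicas have consumed them first
set -euo pipefail
mysql --defaults-extra-file=/etc/mysql/backup.cnf \
-e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;"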
MongoDB Automation
MongoDB Backup Strategy
# MongoDB backup schedule
# Full backup daily at 2 AM
0 2 * * * /opt/scripts/mongodb_backup.sh
# Oplog backup every 10 minutes
*/10 * * * * /opt/scripts/mongodb_oplog_backup.sh
# Sharded cluster balancer check
0 * * * * /opt/scripts/mongodb_check_balancer.sh
MongoDB Backup Script
#!/bin/bash
# /opt/scripts/mongodb_backup.sh
# ... (MongoDB backup script logic) ...
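Again elided above; a minimal mongodump-based sketch, with the connection URI and paths as assumptions:
#!/bin/bash
# Minimal MongoDB backup sketch - adjust URI and paths
set -euo pipefail
BACKUP_DIR="/var/backups/mongodb"          # assumed location
MONGO_URI="mongodb://localhost:27017"      # assumed connection string
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30
mkdir -p "$BACKUP_DIR"
# --oplog captures oplog entries written during the dump for a consistent
# point-in-time snapshot (replica set members only)
mongodump --uri="$MONGO_URI" --oplog \
--gzip --archive="$BACKUP_DIR/mongodb_$TIMESTAMP.archive.gz"
find "$BACKUP_DIR" -name "*.archive.gz" -mtime +$RETENTION_DAYS -delete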
Database Performance Monitoring
Automated Performance Analysis
# Database monitoring schedule
# Check for slow queries every 15 minutes
*/15 * * * * /opt/scripts/db_monitor.sh slow_queries
# Monitor replication lag every 5 minutes
*/5 * * * * /opt/scripts/db_monitor.sh replication_lag
# Check for deadlocks every minute
* * * * * /opt/scripts/db_monitor.sh deadlocks
# Connection pool analysis hourly
0 * * * * /opt/scripts/db_monitor.sh connection_pool
Database Monitoring Script
#!/bin/bash
# /opt/scripts/db_monitor.sh
# ... (Database monitoring script logic) ...
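The monitoring script is elided above. As one hedged example, the replication_lag case for a PostgreSQL standby might look like this; the threshold and the standby topology are assumptions:
#!/bin/bash
# Sketch of the replication_lag case, run against a PostgreSQL standby
PGHOST=${PGHOST:-localhost}
PGPORT=${PGPORT:-5432}
PGUSER=${PGUSER:-postgres}
PGDATABASE=${PGDATABASE:-myapp}
LAG_THRESHOLD_SECONDS=300
# On a standby, lag = seconds since the last replayed transaction
# (NULL on a primary, coalesced to 0 here)
LAG=$(psql -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" -d "$PGDATABASE" -t -A -c \
"SELECT COALESCE(EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()), 0)::int;")
if [ "$LAG" -gt "$LAG_THRESHOLD_SECONDS" ]; then
echo "Replication lag is ${LAG}s on $(hostname)" | \
mail -s "Replication Lag Alert" dba@company.com
fi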
Disaster Recovery Automation
Automated Failover and Recovery
# Disaster recovery schedule
# Check primary database health every minute
* * * * * /opt/scripts/check_primary_db.sh
# Replicate backups to offsite location hourly
0 * * * * /opt/scripts/replicate_backups.sh
# Test recovery process monthly
0 5 1 * * /opt/scripts/test_db_recovery.sh
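check_primary_db.sh is not shown above. A conservative sketch that only alerts is below; actual failover is better left to dedicated HA tooling such as Patroni or repmgr:
#!/bin/bash
# Alert if the primary stops accepting connections (no automatic failover here)
PGHOST=${PGHOST:-localhost}
PGPORT=${PGPORT:-5432}
if ! pg_isready -h "$PGHOST" -p "$PGPORT" -q; then
echo "Primary database at $PGHOST:$PGPORT is not accepting connections" | \
mail -s "Primary DB Health Alert" dba@company.com
fi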
Conclusion
Automating database maintenance with cron jobs is a cornerstone of reliable and scalable system administration. By implementing these automation strategies, you can ensure data integrity, optimize performance, and be prepared for disaster recovery.
Remember to customize these scripts for your specific environment, implement robust logging and alerting, and always test your automation in a non-production environment first.
Ready to Automate Your Database?
Start building your database maintenance automation with our interactive cron expression generator.