Common Cron Job Examples

A list of practical, real-world cron job examples you can use in your projects.

Common Cron Job Examples: 50+ Real-World Use Cases

This comprehensive collection of practical cron job examples will help you automate system administration, web development, database maintenance, monitoring, and more.

💡 How to Use These Examples

1. Find an example that matches your use case

2. Replace paths, commands, and parameters with your values

3. Test the command manually before adding to cron

4. Add proper logging: command >> /var/log/job.log 2>&1

5. Use our cron generator to validate timing
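Putting those steps together, a complete crontab entry typically looks like the following (the script path and log file are placeholders, not part of any example below):

# Tested manually first; absolute path, output and errors captured for debugging
0 2 * * * /usr/local/bin/nightly-cleanup.sh >> /var/log/nightly-cleanup.log 2>&1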

⏰ Basic Timing Patterns

Start with these fundamental scheduling patterns:

Every minute: * * * * *

Use for testing and high-frequency monitoring

* * * * * /usr/bin/test-script.sh

Every 5 minutes: */5 * * * *

Perfect for API health checks and log rotation

*/5 * * * * curl -s https://api.example.com/health

Every hour: 0 * * * *

Ideal for cache clearing and data synchronization

0 * * * * redis-cli FLUSHDB

Daily at 2 AM: 0 2 * * *

Best for backups and maintenance during low traffic

0 2 * * * /backup/daily-backup.sh

Weekdays at 9 AM: 0 9 * * 1-5

Great for business hours tasks and reports

0 9 * * 1-5 python3 /reports/daily-sales.py

Monthly on the 1st: 0 0 1 * *

Perfect for monthly billing and reporting

0 0 1 * * /billing/generate-invoices.sh

Daily at midnight: 0 0 * * *

Common for daily reports and cleanup tasks

Business hours: 0 9-17 * * 1-5

Every hour during weekday work hours
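The last two patterns are shown without a sample command; hedged placeholders (the script paths are assumptions) might look like:

0 0 * * * /scripts/daily-report.sh >> /var/log/daily-report.log 2>&1

0 9-17 * * 1-5 /scripts/sync-dashboard.sh >> /var/log/dashboard-sync.log 2>&1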

🖥️ System Administration Examples

Log Management & Cleanup

Clean temporary files weekly

0 2 * * 0 find /tmp -type f -atime +7 -delete

Removes files older than 7 days from /tmp every Sunday at 2 AM

Rotate application logs

0 0 * * * /usr/sbin/logrotate /etc/logrotate.conf

Runs log rotation daily at midnight

Clean old log files

0 3 * * * find /var/log -name "*.log" -mtime +30 -delete

Delete log files older than 30 days at 3 AM daily

System Monitoring

Monitor disk space

*/15 * * * * df -h | awk '0+$5 > 80' | mail -s "Disk Alert" admin@example.com

Sends email alerts when disk usage exceeds 80%

Monitor system load

*/5 * * * * uptime >> /var/log/system-load.log

Logs system load every 5 minutes

Check running services

*/10 * * * * systemctl is-active nginx || systemctl restart nginx

Restart nginx if not running, check every 10 minutes

🌐 Web Development Examples

Application Maintenance

Clear application cache

0 1 * * * cd /var/www/html && php artisan cache:clear

Laravel cache clearing at 1 AM daily

Generate sitemap

0 2 * * * /usr/bin/php /var/www/generate-sitemap.php

Update website sitemap daily at 2 AM

Process queued jobs

* * * * * cd /var/www && php artisan queue:work --stop-when-empty

Process Laravel queue jobs every minute

Analytics & Reporting

Generate daily reports

0 7 * * * /usr/bin/python3 /scripts/daily-analytics.py

Run analytics script every morning at 7 AM

Weekly performance report

0 9 * * 1 /scripts/performance-report.sh | mail -s "Weekly Report" team@company.com

Email performance metrics every Monday at 9 AM

🗄️ Database Maintenance Examples

Database Backups

MySQL full backup

0 2 * * * mysqldump -u backup_user -p'password' --all-databases > /backups/mysql_$(date +\%Y\%m\%d).sql

Daily MySQL backup at 2 AM with date stamp

PostgreSQL backup

0 3 * * * pg_dumpall -U postgres > /backups/postgresql_$(date +\%Y\%m\%d).sql

Daily PostgreSQL backup at 3 AM

MongoDB backup

0 1 * * * mongodump --out /backups/mongodb_$(date +\%Y\%m\%d)

Daily MongoDB backup at 1 AM
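The dump commands above store uncompressed copies and accumulate files indefinitely. A minimal sketch of a wrapper script that compresses each dump and prunes old ones (the paths, credentials file, and 14-day retention are assumptions, not part of the original examples):

#!/bin/bash
# /scripts/mysql-backup.sh (hypothetical): dump, compress, and prune old backups
set -euo pipefail
BACKUP_DIR=/backups
STAMP=$(date +%Y%m%d)   # inside a script, % needs no escaping
mysqldump --defaults-extra-file=/root/.my.cnf --all-databases | gzip > "$BACKUP_DIR/mysql_$STAMP.sql.gz"
# Keep two weeks of dumps
find "$BACKUP_DIR" -name 'mysql_*.sql.gz' -mtime +14 -delete

It would then be scheduled like any other job: 0 2 * * * /scripts/mysql-backup.sh >> /var/log/mysql-backup.log 2>&1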

Database Optimization

MySQL table optimization

0 4 * * 0 mysqlcheck -u admin -p'password' --optimize --all-databases

Weekly MySQL optimization every Sunday at 4 AM

PostgreSQL vacuum

0 5 * * 0 vacuumdb -U postgres --all --analyze

Weekly PostgreSQL vacuum and analyze

📊 Monitoring & Alerting Examples

Infrastructure Monitoring

Monitor CPU usage

*/5 * * * * top -bn1 | awk 'NR==3{if ($2>80) system("echo High CPU | mail -s ALERT admin@example.com")}'

Alert when CPU usage exceeds 80%

Memory usage monitoring

*/10 * * * * free -m | awk 'NR==2 && $3/$2*100 > 90 {print "Memory usage above 90 percent: " $3 "MB used of " $2 "MB total"}' | mail -s "Memory Alert" admin@example.com

Check memory usage every 10 minutes

Service health checks

*/3 * * * * systemctl is-active nginx || systemctl restart nginx

Auto-restart nginx if it stops

🚀 DevOps & CI/CD Examples

Deployment & Automation

Auto-deploy from Git

*/30 * * * * cd /var/www && git pull origin main && npm install && npm run build

Check for updates and deploy every 30 minutes
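A build can easily run longer than the 30-minute interval, so it may be worth wrapping the same command in flock to prevent overlapping deploys (the lock and log paths here are assumptions):

*/30 * * * * flock -n /tmp/deploy.lock sh -c 'cd /var/www && git pull origin main && npm install && npm run build' >> /var/log/deploy.log 2>&1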

Docker container cleanup

0 1 * * * docker system prune -af --volumes

Clean up unused Docker resources daily

Kubernetes pod monitoring

*/5 * * * * kubectl get pods --all-namespaces --no-headers | grep -v Running | mail -s "Pod Issues" ops@example.com

Monitor for non-running pods every 5 minutes

🛡️ Security & Compliance Examples

Health Checks & Alerts

Website uptime check

*/5 * * * * curl -f https://example.com > /dev/null || echo "Site down" | mail -s "ALERT" admin@example.com

Check website every 5 minutes, send email if down

SSL certificate expiry check

0 9 * * * /scripts/ssl-check.sh example.com

Daily SSL certificate expiration check at 9 AM
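The ssl-check.sh script itself is not shown above; a minimal sketch using openssl (the 14-day threshold and alert address are assumptions) might look like this:

#!/bin/bash
# /scripts/ssl-check.sh <domain> (hypothetical): warn when a certificate is close to expiry
set -euo pipefail
DOMAIN=${1:?usage: ssl-check.sh <domain>}
THRESHOLD_DAYS=14
# Fetch the certificate and extract its notAfter date
expiry=$(echo | openssl s_client -servername "$DOMAIN" -connect "$DOMAIN:443" 2>/dev/null | openssl x509 -noout -enddate | cut -d= -f2)
days_left=$(( ($(date -d "$expiry" +%s) - $(date +%s)) / 86400 ))
if [ "$days_left" -lt "$THRESHOLD_DAYS" ]; then
  echo "Certificate for $DOMAIN expires in $days_left days" | mail -s "SSL expiry warning: $DOMAIN" admin@example.com
fi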

Security Monitoring

Scan for failed login attempts

*/30 * * * * grep "Failed password" /var/log/auth.log | tail -20 | mail -s "Failed Logins" security@example.com

Monitor and report failed SSH login attempts every 30 minutes

Update security definitions

0 3 * * * freshclam && systemctl reload clamav-daemon

Update ClamAV virus definitions daily at 3 AM

📈 Data Processing & Analytics

ETL & Data Pipelines

Daily ETL pipeline

0 1 * * * python3 /data/etl_pipeline.py --date=$(date -d "yesterday" +\%Y-\%m-\%d)

Process previous day's data at 1 AM

Data aggregation

0 */4 * * * spark-submit /analytics/aggregate_metrics.py

Run Spark aggregation job every 4 hours

Machine learning model training

0 3 * * 0 python3 /ml/train_model.py --model=recommendation --data=weekly

Weekly model retraining on Sunday at 3 AM

Report Generation

Generate daily analytics report

0 7 * * * python3 /reports/daily_analytics.py | mail -s "Daily Report" team@example.com

Send daily analytics email at 7 AM

Export CSV data

0 0 * * * mysql -e "SELECT * FROM metrics WHERE date = CURDATE() - INTERVAL 1 DAY" db > /exports/metrics_$(date +\%Y\%m\%d).csv

Export the previous day's metrics at midnight (note that mysql's batch output is tab-separated rather than true CSV)

✅ Best Practices & Tips

✨ Essential Best Practices

  • Always use absolute paths in cron jobs
  • Redirect output to log files for debugging
  • Set PATH variable at the top of crontab
  • Use flock to prevent overlapping executions
  • Test commands manually before adding to cron
  • Add email notifications for critical jobs
  • Document each cron job with comments

⚠️ Common Pitfalls to Avoid

  • Forgetting that cron runs with a minimal environment
  • Not handling errors and failures properly
  • Ignoring time zone differences
  • Running resource-intensive jobs simultaneously
  • Not monitoring cron job execution
  • Using relative paths instead of absolute paths
  • Forgetting to escape % characters (see the example after this list)
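To illustrate the last pitfall: cron treats an unescaped % as the end of the command (everything after it is passed as standard input), so date format strings must be escaped in crontab entries, though not inside scripts. A hypothetical tar backup line:

# Broken: cron cuts the command at the first %
0 2 * * * tar czf /backups/site_$(date +%F).tar.gz /var/www

# Works: % escaped with a backslash
0 2 * * * tar czf /backups/site_$(date +\%F).tar.gz /var/www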

🔧 Pro Tips

Set environment variables

PATH=/usr/local/bin:/usr/bin:/bin
SHELL=/bin/bash
MAILTO=admin@example.com

Use lock files

*/5 * * * * flock -n /tmp/job.lock /path/to/script.sh

Log with timestamps

0 * * * * echo "$(date): Starting job" >> /var/log/job.log && /path/to/script.sh >> /var/log/job.log 2>&1

📝 Documentation Template

# Job: Daily database backup
# Purpose: Backup production database
# Schedule: 2 AM daily
# Owner: devops@example.com
# Dependencies: MySQL, AWS CLI
0 2 * * * /scripts/backup_db.sh

Always document your cron jobs!

🔍 Debugging & Backup Examples

Rsync backup to remote server

0 23 * * * rsync -avz --delete /important/data/ user@backup-server:/backups/$(hostname)/

Daily backup at 11 PM using rsync

S3 backup with compression

0 2 * * 0 tar -czf - /var/www | aws s3 cp - s3://my-backups/www-backup-$(date +\%Y\%m\%d).tar.gz

Weekly compressed backup to AWS S3

Auto-deploy from Git

*/10 * * * * cd /var/www && git fetch origin main && git reset --hard origin/main

Pull latest changes every 10 minutes

Run automated tests

0 6 * * 1-5 cd /var/www && ./vendor/bin/phpunit

Run test suite every weekday morning at 6 AM

Docker container cleanup

0 3 * * 0 docker system prune -f && docker volume prune -f

Clean up unused Docker resources weekly

🔧 Best Practices for Production Use

Always Include:

  • Absolute paths for all commands
  • Proper output redirection with logging
  • Error handling and exit codes
  • Lock files to prevent overlapping jobs
  • Email notifications for critical failures (see the wrapper sketch after these lists)

Test Before Production:

  • Run commands manually first
  • Test with minimal cron environment
  • Verify file permissions and ownership
  • Check available disk space for outputs
  • Monitor initial runs carefully
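As a hedged illustration of these points, a generic wrapper script (the script name, lock directory, and alert address are assumptions) can centralize locking, logging, and failure notification for any job:

#!/bin/bash
# /scripts/cron-wrapper.sh <job-name> <command> [args...] (hypothetical)
set -euo pipefail
JOB=$1; shift
LOG=/var/log/cron-$JOB.log
exec 200>"/var/lock/cron-$JOB.lock"
flock -n 200 || exit 0                  # a previous run is still active
echo "$(date): starting $JOB" >> "$LOG"
if ! "$@" >> "$LOG" 2>&1; then
  tail -n 50 "$LOG" | mail -s "Cron job failed: $JOB" admin@example.com
  exit 1
fi
echo "$(date): finished $JOB" >> "$LOG"

A crontab entry then only needs the wrapper: 0 2 * * * /scripts/cron-wrapper.sh db-backup /scripts/backup_db.sh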


Remember to test all cron jobs manually before adding them to production systems.

Need help with syntax? Try our cron generator or learn about troubleshooting.

Ready to Create Your Cron Job?

Now that you understand the concepts, try our cron expression generator to create your own cron jobs!

Try Cron Generator