Day 10 - Building a Log Analyzer and Report Generator in Bash
For a system administrator managing multiple servers, handling large volumes of log data every day can quickly become overwhelming. Each server produces a log file full of valuable system information, error messages, and critical event alerts, but manually sifting through those logs is time-consuming and error-prone. This challenge streamlines the process by building a Bash script that automates log analysis and generates a structured summary report.
Task Overview
The goal is to create a Bash script that automates log file analysis, surfaces error occurrences and critical events, and turns the findings into a structured summary report.
Script Requirements
The script will accomplish the following tasks:

- Accept the path to a log file as a command-line argument and validate that the file exists.
- Count the total number of lines processed and the number of error entries (lines containing "ERROR" or "Failed").
- Identify the five most frequent error messages.
- Capture critical events (lines containing "CRITICAL") along with their line numbers.
- Generate a dated summary report file.
- Optionally archive the processed log file.
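Before writing the full script, the matching patterns can be sanity-checked interactively against a log file, for example:

# Count lines that look like errors, and locate critical events with line numbers
grep -cE 'ERROR|Failed' /path/to/sample_log.log
grep -n 'CRITICAL' /path/to/sample_log.log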
Implementing the Log Analyzer
Here is the complete script:
#!/bin/bash

# Check that a log file path was provided
if [[ -z "$1" ]]; then
    echo "Usage: $0 /path/to/logfile"
    exit 1
fi

logfile="$1"

# Ensure the log file exists before reading it
if [[ ! -f "$logfile" ]]; then
    echo "Log file does not exist."
    exit 1
fi

report="summary_report_$(date +%Y-%m-%d).txt"
date_of_analysis=$(date +%Y-%m-%d)
total_lines=$(wc -l < "$logfile")
error_count=0
line_number=0
declare -A error_messages
critical_events=""

# Analyze the log file line by line
while IFS= read -r line; do
    ((line_number++))

    # Count error lines and tally the error keyword (last field of the line)
    if [[ "$line" =~ ERROR || "$line" =~ Failed ]]; then
        ((error_count++))
        error_message=$(echo "$line" | awk '{print $NF}')
        ((error_messages["$error_message"]++))
    fi

    # Capture critical events together with their line numbers
    if [[ "$line" =~ CRITICAL ]]; then
        critical_events+="$line_number: $line"$'\n'
    fi
done < "$logfile"

# Sort error messages by frequency and keep the top 5
sorted_errors=$(for msg in "${!error_messages[@]}"; do
    echo "${error_messages[$msg]} $msg"
done | sort -nr | head -5)

# Generate the summary report
{
    echo "Date of Analysis: $date_of_analysis"
    echo "Log File: $logfile"
    echo "Total Lines Processed: $total_lines"
    echo "Total Error Count: $error_count"
    echo -e "\nTop 5 Error Messages:"
    echo "$sorted_errors"
    echo -e "\nCritical Events:"
    echo "$critical_events"
} > "$report"

# Optional: archive the processed log file
archive_dir="archive"
mkdir -p "$archive_dir"
mv "$logfile" "$archive_dir"

echo "Report generated at: $report"
Explanation of Script Components

- Input validation: the script exits with a usage message if no log file path is supplied, and again if the file does not exist.
- Counters and the associative array: error_count tracks the total number of error lines, while the error_messages array maps each error keyword (the last field of a matching line) to the number of times it appears.
- The read loop: each line is read exactly once; a line_number counter keeps track of the position in the file so critical events can be reported with their line numbers without rescanning the log.
- Top 5 errors: the associative array is flattened into "count message" pairs, sorted numerically in descending order, and trimmed to five entries with head -5.
- Report generation: a grouped command block writes all of the summary fields to a dated report file through a single redirection.
- Archiving: the processed log file is moved into an archive/ directory so the original log location stays tidy.
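To see the counting logic in isolation, here is a minimal, self-contained sketch of the same technique; the sample values are made up purely for illustration:

#!/bin/bash
# Tally occurrences with an associative array, then print the most frequent entries
declare -A counts
samples=(Timeout Disk_Failure Timeout Network_Drop Timeout Disk_Failure)

for item in "${samples[@]}"; do
    ((counts["$item"]++))
done

# Flatten to "count item" pairs, sort by count descending, keep the top 2
for item in "${!counts[@]}"; do
    echo "${counts[$item]} $item"
done | sort -nr | head -2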
Running the Script
To test the script, first make it executable:
chmod +x log_analyzer.sh
Run the script with a log file path:
./log_analyzer.sh /path/to/sample_log.log
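If no real log file is at hand, a small sample file can be generated to exercise the script; the log format below is only an assumption for illustration, so adjust the fields to match your own logs:

# Create a small, made-up log file for testing (hypothetical format)
cat > sample_log.log <<'EOF'
2023-07-30 10:01:12 INFO Service started
2023-07-30 10:02:45 ERROR Disk_Failure
2023-07-30 10:03:10 WARN Failed Timeout
2023-07-30 10:04:55 CRITICAL - Unauthorized Access
2023-07-30 10:05:20 ERROR Network_Drop
EOF

./log_analyzer.sh sample_log.log
cat summary_report_"$(date +%Y-%m-%d)".txt

The report shown in the next section comes from a much larger log, so the numbers there will differ.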
Sample Output
Generated Report (summary_report_2023-07-30.txt)
Date of Analysis: 2023-07-30
Log File: /path/to/sample_log.log
Total Lines Processed: 1500
Total Error Count: 50
Top 5 Error Messages:
10 Disk_Failure
8 Network_Drop
7 Timeout
5 Unauthorized_Access
4 Out_of_Memory
Critical Events:
123: CRITICAL - Disk Failure
678: CRITICAL - Unauthorized Access
1320: CRITICAL - Network Drop
Conclusion
This script provides a foundational tool for log analysis and reporting, offering insights into critical errors and system issues at a glance. By automating these tasks, system administrators can save time and maintain organized records for troubleshooting and audits. The optional archiving feature ensures clean log directories, making it easy to manage daily log files.
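As one possible next step, the analysis can be scheduled with cron so it runs unattended each day; in the sketch below, the script and log paths are placeholders to replace with your own:

# Append a daily 01:00 cron entry for the analyzer (paths are placeholders)
(crontab -l 2>/dev/null; echo "0 1 * * * /opt/scripts/log_analyzer.sh /var/log/myapp/app.log") | crontab -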