Day 10 of #90DaysOfDevOps: Log Analyzer and Report Generator

Hey there! It's day 10 of my DevOps learning journey. Today I'm diving into shell scripting and building a Log Analyzer and Report Generator. We'll analyze a log file, identify specific events, and generate a summary report. Exciting stuff!

Here we go…!

  1. Input Validation: The first step in the script ensures that the user provides a log file as an argument. Without the log file, the script will not proceed and will display usage instructions. This avoids issues arising from missing input.

  2. Defining Keywords and Preparing the Report

    The script focuses on two important log keywords:

    • ERROR: To track and count general error messages.

    • CRITICAL: To capture significant events in the logs.

  3. Analyzing the Log File: The script performs several key operations on the log file:

    1. Counting the Total Number of Lines: It calculates how many lines are present in the log file, providing a sense of its size.

    2. Counting the Occurrences of Errors: It searches for the keyword "ERROR" in the log file and counts the number of lines on which it appears.

    3. Listing Critical Events with Line Numbers: To help identify severe issues quickly, the script lists all log lines that contain the keyword "CRITICAL" along with their line numbers. This information can be extremely helpful for debugging.

  4. Identifying the Most Common Error Messages: One of the key features of this script is its ability to track unique error messages and count how often each one occurs. It does this with a Bash associative array that maps each message to its frequency.

  5. Archiving the Processed Log File: To ensure that logs are not processed repeatedly, the script moves the processed log file to a dedicated directory (e.g., processed_logs), keeping archived logs organized for future reference.

  6. Outputting the Summary Report: Finally, the script prints the summary report, which includes all the key statistics about the log file: total lines, total errors, critical events, and the top 5 most common error messages. It also confirms that the log file has been archived.

Those are the steps we follow. Here's the complete script:

#!/bin/bash

usage(){
    echo "Usage: $0 <log_file>"
    echo "Example: $0 /home/ubuntu/logs/day02.log"
    exit 1
}

# Check if the correct number of arguments is provided
if [ $# -ne 1 ]; then
    usage
fi

LOG_FILE=$1

# Check if the log file exists
if [ ! -f "$LOG_FILE" ]; then
    echo "Error: Log file $LOG_FILE does not exist."
    exit 1
fi

ERROR_KEYWORD="ERROR"
CRITICAL_KEYWORD="CRITICAL"
DATE=$(date +"%Y-%m-%d")
SUMMARY_REPORT="summary_report_$DATE.txt"
ARCHIVE_DIR="processed_logs"

# Initialize the summary report with date and log file name
{
    echo "Date of analysis: $DATE"
    echo "Log file name: $LOG_FILE"
} > "$SUMMARY_REPORT"

# Calculate total lines in the log file and append to the summary report
TOTAL_LINES=$(wc -l < "$LOG_FILE")
echo "Total lines processed: $TOTAL_LINES" >> "$SUMMARY_REPORT"

# Count total error occurrences and append to the summary report
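# (note: grep -c counts the number of matching lines, not individual occurrences)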
ERROR_COUNT=$(grep -c "$ERROR_KEYWORD" "$LOG_FILE")
echo "Total error count: $ERROR_COUNT" >> "$SUMMARY_REPORT"

# List critical events with line numbers and append to the summary report
echo "List of critical events with line numbers: " >> "$SUMMARY_REPORT"
grep -n "$CRITICAL_KEYWORD" "$LOG_FILE" >> "$SUMMARY_REPORT"

# Initialize an associative array to store error messages and their counts
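# (note: associative arrays require Bash 4 or newer)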
declare -A error_messages
while IFS= read -r line; do
    if [[ "$line" == *"$ERROR_KEYWORD"* ]]; then
        message=$(echo "$line" | awk -F"$ERROR_KEYWORD" '{print $2}')
        ((error_messages["$message"]++))
    fi
done < "$LOG_FILE"

# Append the top 5 error messages and their counts to the summary report
echo "Top 5 error messages with their occurrence count: " >> "$SUMMARY_REPORT"
for message in "${!error_messages[@]}"; do
    echo "${error_messages[$message]} $message"
done | sort -rn | head -n 5 >> "$SUMMARY_REPORT"

# Check if the archive directory exists, if not create it
if [ ! -d "$ARCHIVE_DIR" ]; then
    mkdir -p "$ARCHIVE_DIR"
fi

# Move the processed log file to the archive directory
mv "$LOG_FILE" "$ARCHIVE_DIR/"

# Notify that the log file has been moved and display the summary report
echo "Log file has been moved to $ARCHIVE_DIR."
cat "$SUMMARY_REPORT"

Let's take a look at the output.

The script creates a file named summary_report_2024-09-28.txt (dated with the day you run it) and stores the full summary of the log file there!
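
If you want to try it yourself, here's a minimal sketch of a test run. The script filename log_analyzer.sh and the sample log contents are my own placeholders for illustration, not part of the original setup:

# Create a small sample log (contents invented purely for illustration)
cat > /tmp/day02.log <<'EOF'
2024-09-28 10:01:05 ERROR Connection timed out
2024-09-28 10:02:11 ERROR Connection timed out
2024-09-28 10:03:42 CRITICAL Disk usage above 95%
2024-09-28 10:04:00 ERROR Failed to write to cache
2024-09-28 10:05:30 INFO Service healthy
EOF

# Run the analyzer (assuming you saved the script as log_analyzer.sh)
chmod +x log_analyzer.sh
./log_analyzer.sh /tmp/day02.log

With this sample log, the report would show 5 total lines, an error count of 3, one critical event at line 3, and "Connection timed out" as the most common error message, appearing twice.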


Stay tuned for more insights as I continue my #90DaysOfDevOps challenge. Over the next few weeks, I will be diving deeper into various DevOps practices, tools, and methodologies. I will share detailed tutorials, real-world examples, and practical tips to help you enhance your DevOps skills. If you have any questions or tips, feel free to share them in the comments. Your feedback and suggestions are invaluable and can help shape the content of these challenges. Let's keep learning and growing together! 🚀