Designing Advanced Logging Strategies for AI Systems with Loguru

In complex AI systems, traditional logging often falls short. When you have multiple tool calls, LLM interactions, and multi-step workflows, you need a logging system that is both powerful and easy to use. Enter loguru.

Why Loguru?

loguru is a library that aims to make logging in Python a pleasant experience. Here’s why it’s a great fit for AI agents:

1. Zero Setup: No more complex configuration dictionaries. Just from loguru import logger.

2. Rotation and Retention: Easily manage log file sizes and history.

3. Colorized Output: Makes it much easier to scan logs in the terminal.

4. Structured Logging: Easily add extra context to every log message.

Configuring Loguru for Production

In complex AI systems, logging isn’t just about printing messages: it’s about tracking the flow of your application, catching errors early, and keeping a persistent record for debugging. Loguru makes this straightforward with its clean syntax, powerful formatting, and built-in support for log rotation and retention. A well-configured setup combines colorized console output for real-time debugging with persistent file logging to keep your logs organized and manageable.

Logging AI-Specific Events

In an AI agent, you want to track more than just errors. It’s crucial to log tool calls, token usage, and user identity context. This allows you to understand exactly what your system is doing, how resources are being consumed, and which operations may be causing unexpected behavior. Properly structured logs make it easier to audit and optimize AI workflows over time.

Multi-Timezone Support with Patching

AI systems often operate across regions and timezones. Sometimes you need to display logs in multiple timezones simultaneously. By leveraging Loguru’s features, you can include custom timestamps for different zones, giving you clear context about when events occurred relative to users or servers. This is particularly useful for debugging multi-step workflows or analyzing the behavior of distributed AI agents.

import sys
from datetime import datetime, timedelta, timezone
from loguru import logger

# ----------------------------
# 1. Custom patcher for multi-timezone timestamps
# ----------------------------
def time_patcher(record):
    """
    Add fixed-offset timestamps in two timezones to each log record.
    Note: these offsets are static and do not account for daylight saving time.
    Example:
    - pkt_time: UTC+5 (Pakistan Standard Time)
    - pa_time: UTC-5 (e.g., US Eastern Standard Time)
    """
    now = datetime.now(timezone.utc)
    record["extra"]["pkt_time"] = (now + timedelta(hours=5)).strftime("%Y-%m-%d %H:%M:%S")
    record["extra"]["pa_time"] = (now - timedelta(hours=5)).strftime("%Y-%m-%d %H:%M:%S")


# ----------------------------
# 2. Logging setup
# ----------------------------
def setup_logging():
    """
    Configure Loguru for advanced AI logging.
    - Colorized console output for real-time debugging
    - File logs with rotation, retention, and compression
    - Supports multi-timezone timestamps
    """
    # Remove default handler
    logger.remove()

    # Configure patcher for timezone support
    logger.configure(patcher=time_patcher)

    # Console logging: colorized, INFO level
    logger.add(
        sys.stdout,      # Send logs to the console (standard output)
        level="INFO",    # Only show logs with level INFO and above (skip DEBUG)
        enqueue=True,    # Make logging safe for multi-threaded or async code
        backtrace=False, # Don't show full traceback for exceptions (keeps logs cleaner)
        diagnose=False,  # Don't show local variable values on errors (simpler output)
        colorize=True,   # Show colored logs in the console (INFO=green, WARNING=yellow, etc.)
        format="{time:YYYY-MM-DD HH:mm:ss} | "
               "{level} | "
               "{message} | "
               "PKT:{extra[pkt_time]} | PA:{extra[pa_time]}",
    )

    # File logging: persistent logs, rotation, retention, compression
    logger.add(
        "logs/app.log",
        rotation="500 MB",    # rotate when the file exceeds 500 MB (a time period like "1 day" also works)
        retention="10 days",  # keep logs for 10 days
        compression="zip",    # compress old logs
        format="{time:YYYY-MM-DD HH:mm:ss} | {level} | {message} | PKT:{extra[pkt_time]} | PA:{extra[pa_time]}",
        level="DEBUG"
    )


# ----------------------------
# 3. AI-specific logging utilities
# ----------------------------
def log_tool_call(tool_name, args, user=None):
    """
    Log a tool invocation in the AI system.
    """
    user_str = f" | User: {user}" if user else ""
    logger.info(f"🛠️ TOOL CALL | {tool_name} | Args: {args}{user_str}")


def log_token_usage(prompt_tokens, completion_tokens, user=None):
    """
    Log token usage for AI model calls.
    """
    total = prompt_tokens + completion_tokens
    user_str = f" | User: {user}" if user else ""
    logger.info(f"📊 TOKENS | Input: {prompt_tokens} | Output: {completion_tokens} | Total: {total}{user_str}")


def log_workflow_start(workflow_name, user=None):
    user_str = f" | User: {user}" if user else ""
    logger.info(f"🚀 WORKFLOW START | {workflow_name}{user_str}")


def log_workflow_end(workflow_name, duration_seconds, user=None):
    user_str = f" | User: {user}" if user else ""
    logger.info(f"✅ WORKFLOW END | {workflow_name} | Duration: {duration_seconds:.2f}s{user_str}")


# ----------------------------
# 4. Example usage
# ----------------------------
if __name__ == "__main__":
    # Initialize logging
    setup_logging()

    # Simulate AI operations
    log_workflow_start("DocumentProcessing", user="faizan")
    log_tool_call("TextGenerator", {"prompt": "Hello, world!"}, user="faizan")
    log_token_usage(prompt_tokens=15, completion_tokens=30, user="faizan")
    log_tool_call("FileAnalyzer", {"file": "report.pdf"}, user="faizan")
    log_token_usage(prompt_tokens=40, completion_tokens=60, user="faizan")
    log_workflow_end("DocumentProcessing", duration_seconds=12.3, user="faizan")

    # General system logs
    logger.warning("⚠️ API rate limit approaching for external service")
    logger.debug("Step 3 completed successfully")  # captured only by the file sink (console level is INFO)
    logger.info("Logs stored in 'logs/app.log' with rotation and compression applied")

Conclusion

Good logging turns your AI system from a black box into a transparent, maintainable, and debuggable platform. By thoughtfully tracking events, token usage, and multi-timezone timestamps, you can scale your logging architecture alongside your AI workflows and ensure that debugging, monitoring, and auditing are both efficient and insightful.