
High Cardinality Blog Series - Part 4

Navigating High Cardinality in Events, Logs, and Traces

By Jagannath Timma and Pralav Dessai

Published on Aug 8, 2024

In observability, metrics, events, logs, and traces are the common types of data that are collected and analyzed. In our last blog, we covered high cardinality in metrics. In this blog, we provide a comprehensive overview of managing high cardinality effectively across event, log, and trace data streams.

  1. Events 

    Events, in contrast to metrics, are individual occurrences that are logged or recorded as they happen, rather than being continuously structured into a time series. Therefore, the cardinality of events tends to be more manageable. 

    Each event typically contains various attributes or properties that describe it, such as a timestamp, user ID, action performed, and any relevant contextual information. For example, in a web application, each user interaction (like clicking a button or submitting a form) can be tracked as an event.

    High cardinality in events means that there are many distinct values for certain attributes (e.g., user ID) within the events. High cardinality in events can be useful because it allows for detailed analysis and segmentation of data, where you can drill down into specific subsets of events based on these attributes (e.g., request delay by user ID) to gain insights into user behavior, patterns, and anomalies.
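    To make this concrete, here is a minimal Python sketch (the event fields and values are made up for illustration) that measures the cardinality of each event attribute and then drills into a measurement by a high-cardinality attribute, in the spirit of the request-delay-by-user-ID example above:

    ```python
    from collections import defaultdict
    from statistics import mean

    # Hypothetical click-stream events; all field names and values are illustrative.
    events = [
        {"timestamp": "2024-08-08T10:00:01Z", "user_id": "u-1001", "action": "click", "delay_ms": 42},
        {"timestamp": "2024-08-08T10:00:02Z", "user_id": "u-1002", "action": "submit", "delay_ms": 310},
        {"timestamp": "2024-08-08T10:00:03Z", "user_id": "u-1001", "action": "click", "delay_ms": 55},
    ]

    # Cardinality of an attribute = number of distinct values observed for it.
    cardinality = {key: len({e[key] for e in events}) for key in events[0]}
    print(cardinality)  # {'timestamp': 3, 'user_id': 2, 'action': 2, 'delay_ms': 3}

    # Segment a measurement by a high-cardinality attribute:
    # average request delay per user ID.
    delays = defaultdict(list)
    for e in events:
        delays[e["user_id"]].append(e["delay_ms"])
    print({user: mean(ms) for user, ms in delays.items()})  # {'u-1001': 48.5, 'u-1002': 310}
    ```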

  2. Logs

    Log observability is crucial for monitoring and troubleshooting applications and systems. Similar to events, each log entry represents a discrete occurrence or message generated by an application or system component. High cardinality in logs arises when individual fields within log entries (or combinations of them) take on a large number of unique values.
    For example, consider a web server that logs each HTTP request to a website. Each log entry contains fields such as User-Agent, IP Address, and Request Path, all of which contribute to high cardinality data:

    • User-Agent: This field contains information about the browser or user making the request. There can be thousands of different User-Agent strings, each representing a unique combination of browser type, version, operating system, and so on.

    • IP Address: If each request also logs the IP address of the user making the request, it could lead to millions of unique IP addresses accessing the server.

    • Request Path: The URL path often includes query parameters specific to a user's request, such as the terms of a search. Because of these parameters, nearly every logged path can be unique, making this a high cardinality attribute.

    It's important to note that while individual logs are used to pinpoint the root causes of application or system failures, analyzing the cardinality of log entries over time can reveal patterns that signal anomalies or deviations from normal operations. For example, a sudden spike in requests from a specific IP address could indicate unusual or perhaps suspicious behavior. As log data accumulates over time, its growing cardinality poses challenges for storage and processing.
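    As a rough illustration of how these fields drive cardinality, the following sketch parses a few made-up access-log lines (the log format and values are purely illustrative, not any real server's), counts the distinct values per field, and surfaces the most active IP, the kind of per-field analysis that can flag the spike described above:

    ```python
    import re
    from collections import Counter

    # Simplified access-log lines; format and values are illustrative only.
    log_lines = [
        '203.0.113.7 "GET /search?q=shoes HTTP/1.1" "Mozilla/5.0 (X11; Linux x86_64)"',
        '203.0.113.7 "GET /search?q=boots HTTP/1.1" "Mozilla/5.0 (X11; Linux x86_64)"',
        '198.51.100.4 "GET /home HTTP/1.1" "Mozilla/5.0 (Macintosh)"',
    ]

    pattern = re.compile(
        r'(?P<ip>\S+) "(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+" "(?P<agent>[^"]+)"'
    )
    records = [m.groupdict() for line in log_lines if (m := pattern.match(line))]

    # Per-field cardinality: query parameters make nearly every request path unique.
    for field in ("ip", "path", "agent"):
        print(field, "->", len({r[field] for r in records}), "distinct values")

    # A disproportionate request count from one IP can signal unusual behavior.
    print(Counter(r["ip"] for r in records).most_common(1))  # [('203.0.113.7', 2)]
    ```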

  3. Traces

    Traces typically represent complex interactions across distributed systems or microservices. Each trace may include multiple spans or events, each with its own set of attributes. For example, traces often involve capturing unique identifiers such as transaction IDs, session IDs, or user IDs. If these identifiers have a wide range of distinct values (high cardinality), it results in a large number of unique trace entries. Managing and storing numerous unique traces can quickly escalate storage requirements and computational overhead.
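    The sketch below, loosely modeled on OpenTelemetry's span data model (the field names and workload are assumptions for illustration), shows how a handful of operations fans out into thousands of unique trace and session identifiers:

    ```python
    import uuid

    # A minimal span record; fields loosely follow OpenTelemetry conventions.
    def make_span(name: str, trace_id: str, user_id: str) -> dict:
        return {
            "span_id": uuid.uuid4().hex[:16],  # unique per span
            "trace_id": trace_id,              # unique per request
            "name": name,                      # low cardinality: one per operation
            "attributes": {"user.id": user_id, "session.id": uuid.uuid4().hex},
        }

    # One trace per request: three operations fan out into many unique spans.
    spans = []
    for request in range(1000):
        trace_id = uuid.uuid4().hex
        user_id = f"u-{request % 250}"  # 250 distinct users, illustrative
        for op in ("http.request", "db.query", "cache.lookup"):
            spans.append(make_span(op, trace_id, user_id))

    print("operations:", len({s["name"] for s in spans}))                    # 3
    print("users:", len({s["attributes"]["user.id"] for s in spans}))        # 250
    print("traces:", len({s["trace_id"] for s in spans}))                    # 1000
    print("sessions:", len({s["attributes"]["session.id"] for s in spans}))  # 3000
    ```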

Screenshot: Cardinality Analytics in Kloudfuse APM shows attributes with high cardinality in trace data, and identifies related spans

Managing high cardinality requires strategies that balance thorough data collection for diagnostics with efficient storage and processing. Depending on the data type and analysis requirements, approaches such as aggregation, reduction of unnecessary detail, and smart indexing can be applied.
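Two of these reduction techniques can be shown in a few lines. This sketch (the field formats are illustrative, and a production system would use a proper User-Agent parser) collapses cardinality by stripping query parameters from request paths and keeping only the product token of a User-Agent string:

```python
from urllib.parse import urlsplit

def normalize_path(path: str) -> str:
    # "/search?q=shoes" and "/search?q=boots" both collapse to "/search".
    return urlsplit(path).path

def user_agent_family(agent: str) -> str:
    # Keep only the leading product token; a real parser would extract
    # browser family and version more reliably.
    return agent.split("/", 1)[0]

print(normalize_path("/search?q=shoes"))                      # /search
print(user_agent_family("Mozilla/5.0 (X11; Linux x86_64)"))   # Mozilla
```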

In our upcoming blog posts, we'll explore effective strategies for optimizing observability data analysis and storage.

Stay tuned!

Observe. Analyze. Automate