In today's fast-paced software development landscape, efficient log management is crucial for maintaining system reliability and performance. As infrastructure and application components grow in number and complexity, the responsibilities of operations and development teams keep expanding. This blog post outlines best practices for effective log management, addressing the challenges of growing data volumes, complex infrastructures, and the need for quick problem resolution.
Understanding Logs and Their Importance
Logs are records of events occurring within your infrastructure, typically including a timestamp, a message detailing the event, and metadata identifying the source. They are invaluable for diagnosing issues, providing early warnings, and speeding up problem resolution. Logs are often the primary signal that developers enable, offering significant detail for debugging, performance analysis, security, and compliance management.
The Logging Journey
The logging journey involves three basic steps: collection and ingestion, processing and enrichment, and analysis and rationalization. Let's explore each step in detail, along with best practices for each.
1. Log Collection and Ingestion
Collect Everything Relevant and Actionable
The first step is to collect all logs into a central location. This involves identifying all your applications and systems and collecting their logs. Comprehensive data collection ensures no critical information is missed, providing a complete picture of your system's behavior. In the event of an incident, having all logs in one place can significantly reduce the time to resolution. It's generally better to collect more data than you need: you can always filter out irrelevant information later, and shorten retention for logs that turn out not to be needed.
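To make the collection step concrete, here is a minimal sketch that reads a local log file and bulk-indexes each line into a central Elasticsearch data stream with the official Python client. The endpoint, API key, file path, and data stream name (logs-myapp-default) are placeholder assumptions; a shipper such as Elastic Agent or Filebeat would normally do this for you.

from datetime import datetime, timezone
from elasticsearch import Elasticsearch, helpers

# Placeholder connection details: substitute your own endpoint and API key.
es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

def log_actions(path, data_stream="logs-myapp-default"):
    # Wrap every raw line in a small document with a timestamp and source metadata.
    with open(path) as f:
        for line in f:
            yield {
                "_op_type": "create",      # data streams only accept "create"
                "_index": data_stream,
                "@timestamp": datetime.now(timezone.utc).isoformat(),
                "message": line.rstrip("\n"),
                "log": {"file": {"path": path}},
            }

# Send the documents to the central cluster in batches.
helpers.bulk(es, log_actions("/var/log/myapp/app.log"))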
Leverage Integrations
Elastic provides over 300 integrations that simplify data onboarding. These integrations not only collect data but also come with dashboards, saved searches, and pipelines to parse the data. Utilizing these integrations can significantly reduce manual effort and ensure data consistency.
Consider Ingestion Capacity and Costs
An important aspect of log collection is ensuring you have sufficient ingestion capacity at a manageable cost. When assessing solutions, be cautious about those that charge significantly more for high cardinality data, as this can lead to unexpectedly high costs in observability solutions. We'll talk more about cost-effective log management later in this post.
Use Kafka for Large Projects
For larger organizations, implementing Kafka can improve log data management. Kafka acts as a buffer, making the system more reliable and easier to manage. It allows different teams to send data to a centralized location, which can then be ingested into Elastic.
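As a hedged sketch of the buffering idea, the snippet below uses the kafka-python library to publish JSON log events to a shared topic, which a downstream consumer (for example Logstash or Elastic Agent) would then drain into Elastic. The broker address, topic name, and event fields are assumptions.

import json
from datetime import datetime, timezone
from kafka import KafkaProducer

# Placeholder broker and topic: adjust for your environment.
producer = KafkaProducer(
    bootstrap_servers=["kafka-broker:9092"],
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

event = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "service": {"name": "checkout"},
    "message": "order 1234 completed",
}

# Each team publishes to the shared topic; the ingest pipeline consumes from it.
producer.send("logs-raw", value=event)
producer.flush()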
2. Processing and Enrichment
Adopt Elastic Common Schema (ECS)
One key aspect of log collection is normalizing data as much as possible across all of your applications and infrastructure. Having a common semantic schema is crucial. Elastic contributed Elastic Common Schema (ECS) to OpenTelemetry (OTel), helping accelerate the adoption of OTel-based observability and security. This move towards a more normalized way to define and ingest logs (as well as metrics and traces) is beneficial for the industry.
Using ECS helps standardize field names and data structures, making data analysis and correlation easier. This common schema ensures your data is organized predictably, facilitating more efficient querying and reporting. Learn more about ECS here.
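To make the schema concrete, here is a hedged sketch of a single log document that follows ECS field names such as @timestamp, log.level, service.name, and http.response.status_code; the values, index name, and connection details are invented for illustration.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder

# An ECS-shaped document: standard field names make cross-source queries predictable.
doc = {
    "@timestamp": "2024-06-01T12:00:00Z",
    "log": {"level": "error"},
    "service": {"name": "payment-api", "environment": "production"},
    "host": {"name": "web-01"},
    "http": {"request": {"method": "GET"}, "response": {"status_code": 500}},
    "message": "upstream timed out while charging card",
}

es.index(index="logs-payment-api-default", document=doc)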
Optimize Mappings for High Volume Data
For high-cardinality fields or fields you rarely query, consider optimizing how they are mapped or excluding them from indexing altogether. This can improve performance by reducing the amount of data that needs to be indexed and searched. Our documentation has sections on tuning your setup for disk usage, search speed, and indexing speed.
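A hedged sketch of that kind of tuning: the index template below disables search indexing and doc values on a rarely-queried field and switches off an object field we never query at all. The template name, index pattern, and field names are assumptions.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder

es.indices.put_index_template(
    name="logs-custom-app-mappings",      # hypothetical template name
    index_patterns=["logs-custom-app-*"],
    priority=200,
    template={
        "mappings": {
            "properties": {
                # Keep the value retrievable but skip building search structures for it.
                "session.id": {"type": "keyword", "index": False, "doc_values": False},
                # A verbose internal payload we never search or aggregate on.
                "debug.payload": {"type": "object", "enabled": False},
            }
        }
    },
)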
Managing Structured vs. Unstructured Logs
Structured logs are generally preferable as they offer more value and are easier to work with. They have a predefined format and fields, simplifying information extraction and analysis. For custom logs without pre-built integrations, you may need to define your own parsing rules.
For unstructured logs, full-text search capabilities can help mitigate limitations. By indexing logs, full-text search allows users to search for specific keywords or phrases efficiently, even within large volumes of unstructured data. This is one of the main differentiators of Elastic's observability solution. You can simply search for any keyword or phrase and get results in real-time, without needing to write complex regular expressions or parsing rules at query time.
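For custom formats, one common approach is an ingest pipeline that parses each line at write time. Below is a hedged sketch using a dissect processor; the pipeline name and the assumed log layout (timestamp, level, service, free-text message) are illustrations, not a prescribed format.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder

# Parses lines like: 2024-06-01T12:00:00Z ERROR payment-api upstream timed out
es.ingest.put_pipeline(
    id="parse-myapp-logs",                # hypothetical pipeline name
    description="Split custom app logs into structured fields",
    processors=[
        {
            "dissect": {
                "field": "message",
                "pattern": "%{@timestamp} %{log.level} %{service.name} %{message}",
            }
        },
        {"lowercase": {"field": "log.level"}},
    ],
)

# Reference the pipeline when indexing, or set it as the index's default_pipeline.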
Schema-on-Read vs. Schema-on-Write
There are two main approaches to processing log data:
- Schema-on-read: Some observability dashboarding capabilities can perform runtime transformations to extract fields from non-parsed sources on the fly (see the runtime-field sketch after this list). This is helpful when dealing with legacy systems or custom applications that may not log data in a standardized format. However, runtime parsing can be time-consuming and resource-intensive, especially for large volumes of data.
- Schema-on-write: This approach offers better performance and more control over the data. The schema is defined upfront, and the data is structured and validated at the time of writing. This allows for faster processing and analysis of the data, which is beneficial for enrichment.
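As a hedged illustration of schema-on-read, the search below defines a runtime field at query time that pulls an HTTP status code out of an unparsed message with a dissect pattern. The index pattern, message layout, and field names are assumptions.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder

resp = es.search(
    index="logs-legacy-*",                # hypothetical index pattern
    runtime_mappings={
        "http.status": {
            "type": "keyword",
            # Extracts the third token of lines like: GET /cart 500 12ms
            "script": {
                "source": (
                    "String status = dissect('%{method} %{path} %{status} %{took}')"
                    ".extract(params._source.message)?.status;"
                    " if (status != null) emit(status);"
                )
            },
        }
    },
    query={"term": {"http.status": "500"}},
    fields=["http.status"],
)

print(resp["hits"]["total"])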
3. Analysis and Rationalization
Full-Text Search
Elastic's full-text search capabilities, powered by Elasticsearch, allow you to quickly find relevant logs. The Kibana Query Language (KQL) enhances search efficiency, enabling you to filter and drill down into the data to identify issues rapidly.
Here are a few examples of KQL queries:
// Filter documents where a field exists
http.request.method: *
// Filter documents that match a specific value
http.request.method: GET
// Search all fields for a specific value
Hello
// Filter documents where a text field contains specific terms
http.request.body.content: "null pointer"
// Filter documents within a range
http.response.bytes < 10000
// Combine range queries
http.response.bytes > 10000 and http.response.bytes <= 20000
// Use wildcards to match patterns
http.response.status_code: 4*
// Negate a query
not http.request.method: GET
// Combine multiple queries with AND/OR
http.request.method: GET and http.response.status_code: 400
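KQL is evaluated in Kibana's search bar; when you query from code, the same filters are usually expressed in Elasticsearch Query DSL. As a hedged sketch, the bool query below mirrors the last KQL example above; the index pattern and page size are assumptions.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder

resp = es.search(
    index="logs-*",
    query={
        "bool": {
            "filter": [
                {"term": {"http.request.method": "GET"}},
                {"term": {"http.response.status_code": 400}},
            ]
        }
    },
    size=20,
    sort=[{"@timestamp": "desc"}],
)

for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("message"))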
Machine Learning Integration
Machine learning can automate the detection of anomalies and patterns within your log data. Elastic offers features like log rate analysis that automatically identify deviations from normal behavior. By leveraging machine learning, you can proactively address potential issues before they escalate.
Organizations should use a diverse set of machine learning algorithms and techniques to effectively uncover unknown-unknowns in log files. Unsupervised machine learning algorithms should be employed for anomaly detection on real-time data, with rate-controlled alerting based on severity.
By automatically identifying influencers, users can gain valuable context for automated root cause analysis (RCA). Log pattern analysis brings categorization to unstructured logs, while log rate analysis and change point detection help identify the root causes of spikes in log data.
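As a hedged sketch of one such technique, the snippet below creates a simple anomaly detection job that watches the log rate per service with a count detector. The job id, bucket span, and field names are assumptions, and many teams set this up through the Kibana UI (and a datafeed) rather than the API.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder

es.ml.put_job(
    job_id="logs-rate-anomalies",         # hypothetical job id
    analysis_config={
        "bucket_span": "15m",
        "detectors": [
            # Flag unusual spikes or drops in log volume for each service.
            {"function": "count", "partition_field_name": "service.name"}
        ],
        "influencers": ["service.name", "host.name"],
    },
    data_description={"time_field": "@timestamp"},
)

# A datafeed pointing at your logs indices then streams data into the job.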
Take a look at the documentation to get started with machine learning in Elastic.
Dashboarding and Alerting
Building dashboards and setting up alerting helps you monitor your logs in real-time. Dashboards provide a visual representation of your logs, making it easier to identify patterns and anomalies. Alerting can notify you when specific events occur, allowing you to take action quickly.
Cost-Effective Log Management
Use Data Tiers
Implementing index lifecycle management to move data across hot, warm, cold, and frozen tiers can significantly reduce storage costs. This approach ensures that only the most frequently accessed data is stored on expensive, high-performance storage, while older data is moved to more cost-effective storage solutions.
Our documentation explains how to set up Index Lifecycle Management.
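A hedged sketch of such a policy using the Python client: hot data rolls over daily or at 50 GB per shard, moves to the cold tier after a week, and is deleted after 30 days. The policy name and thresholds are assumptions to adapt to your own retention requirements.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder

es.ilm.put_lifecycle(
    name="logs-default-retention",        # hypothetical policy name
    policy={
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {"max_age": "1d", "max_primary_shard_size": "50gb"}
                }
            },
            "cold": {
                "min_age": "7d",
                "actions": {"set_priority": {"priority": 0}},
            },
            "delete": {
                "min_age": "30d",
                "actions": {"delete": {}},
            },
        }
    },
)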
Compression and Index Sorting
Applying best-compression settings and using index sorting can further reduce the data footprint. Optimizing the way data is stored on disk can lead to substantial savings in storage costs and improve retrieval performance. As of 8.15, Elasticsearch provides an index mode called "logsdb", a highly optimized way of storing log data that uses 2.5 times less disk space than the default mode. You can read more about it here. This mode automatically applies the best combination of settings for compression, index sorting, and other optimizations that weren't previously accessible to users.
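If you want to try it, here is a hedged sketch: logsdb is switched on per index through the index.mode setting, typically via an index template. The template name and index pattern are assumptions.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder

es.indices.put_index_template(
    name="logs-myapp-logsdb",             # hypothetical template name
    index_patterns=["logs-myapp-*"],
    data_stream={},                       # back the pattern with a data stream
    priority=300,
    template={
        "settings": {
            # New backing indices use the optimized logsdb index mode.
            "index.mode": "logsdb"
        }
    },
)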
Snapshot Lifecycle Management (SLM)
SLM allows you to back up your data and delete it from the main cluster, freeing up resources. If needed, data can be restored quickly for analysis, ensuring that you maintain the ability to investigate historical events without incurring high storage costs.
Learn more about SLM in the documentation.
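A hedged sketch of an SLM policy with the Python client: nightly snapshots of logs indices into a snapshot repository that must already be registered, kept for 90 days. The policy id, repository name, and schedule are assumptions.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder

es.slm.put_lifecycle(
    policy_id="nightly-logs-snapshots",   # hypothetical policy id
    name="<logs-snapshot-{now/d}>",       # snapshot name with date math
    schedule="0 30 1 * * ?",              # every night at 01:30
    repository="my-snapshot-repo",        # assumed to be registered already
    config={"indices": ["logs-*"]},
    retention={"expire_after": "90d", "min_count": 5, "max_count": 100},
)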
Dealing with Large Amounts of Log Data
Managing large volumes of log data can be challenging. Here are some strategies to optimize log management:
- Develop a logs deletion policy. Evaluate what data to collect and when to delete it.
- Consider discarding DEBUG logs, or even INFO logs, earlier, and delete dev and staging environment logs sooner (see the sketch after this list for dropping DEBUG events at ingest time).
- Aggregate short windows of identical log lines, which is especially useful for TCP security event logging.
- For applications and code you control, consider moving some logs into traces to reduce log volume while maintaining detailed information.
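As a hedged sketch of dropping DEBUG logs at ingest time (from the list above), the pipeline below discards debug-level events before they are indexed so they never consume storage. The pipeline name and the log.level field are assumptions about how your events are structured.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder

es.ingest.put_pipeline(
    id="drop-debug-logs",                 # hypothetical pipeline name
    description="Discard DEBUG events before indexing",
    processors=[
        {
            "drop": {
                # Null-safe check so documents without a level are kept.
                "if": "ctx.log?.level != null && ctx.log.level.toUpperCase() == 'DEBUG'"
            }
        }
    ],
)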
Centralized vs. Decentralized Log Storage
Data locality is an important consideration when managing log data. The costs of ingressing and egressing large amounts of log data can be prohibitively high, especially when dealing with cloud providers.
In the absence of regional redundancy requirements, your organization may not need to send all log data to a central location. Consider keeping log data local to the datacenter where it was generated to reduce ingress and egress costs.
Cross-cluster search functionality enables users to search across multiple logging clusters simultaneously, reducing the amount of data that needs to be transferred over the network.
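As a hedged sketch, a cross-cluster search simply prefixes the index pattern with the name of each remote cluster; the cluster aliases (emea, apac) and index pattern are assumptions and must be configured as remote clusters first.

from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder

# Query the local cluster plus two remote logging clusters in one request.
resp = es.search(
    index="logs-*,emea:logs-*,apac:logs-*",
    query={"match": {"message": "timeout"}},
    size=10,
)

print(resp["hits"]["total"])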
Cross-cluster replication is useful for maintaining business continuity in the event of a disaster, ensuring data availability even during an outage in one datacenter.
Monitoring and Performance
Monitor Your Log Management System
Using a dedicated monitoring cluster can help you track the performance of your Elastic deployment. Stack monitoring provides metrics on search and indexing activity, helping you identify and resolve performance bottlenecks.
Adjust Bulk Size and Refresh Interval
Optimizing these settings can balance performance and resource usage. Increasing bulk size and refresh interval can improve indexing efficiency, especially for high-throughput environments.
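A hedged sketch of both knobs: relaxing the refresh interval on a write-heavy index pattern and sending larger bulk batches from the Python client. The index pattern, interval, and chunk size are assumptions to benchmark against your own hardware.

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")  # placeholder

# Refresh less often so more indexing work happens between refreshes.
es.indices.put_settings(
    index="logs-myapp-*",
    settings={"index": {"refresh_interval": "30s"}},
)

def actions():
    for i in range(100_000):
        yield {
            "_index": "logs-myapp-default",
            "_op_type": "create",
            "message": f"synthetic event {i}",
        }

# Larger chunks mean fewer, bigger bulk requests; watch memory and bulk rejections.
helpers.bulk(es, actions(), chunk_size=5_000)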
Logging Best Practices
Adjust Log Levels
Ensure that log levels are appropriately set for all applications. Customize log formats to facilitate easier ingestion and analysis. Properly configured log levels can reduce noise and make it easier to identify critical issues.
Use Modern Logging Frameworks
Implement logging frameworks that support structured logging. Adding metadata to logs enhances their usefulness for analysis. Structured logging formats, such as JSON, allow logs to be easily parsed and queried, improving the efficiency of log analysis. If you fully control the application and are already using structured logging, consider using Elastic's version of these libraries, which can automatically parse logs into ECS fields.
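For Python services, a hedged sketch using Elastic's ecs-logging library (pip install ecs-logging): the standard logging module emits ECS-formatted JSON, so the logs arrive already structured. The logger name and the extra fields are assumptions.

import logging

import ecs_logging

# Route standard-library logging through the ECS JSON formatter.
logger = logging.getLogger("checkout")    # hypothetical service logger
handler = logging.StreamHandler()
handler.setFormatter(ecs_logging.StdlibFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Extra keys become structured, ECS-compatible JSON fields.
logger.info("order completed", extra={"order.id": "1234", "http.response.status_code": 200})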
Leverage APM and Metrics
For custom-built applications, Application Performance Monitoring (APM) provides deeper insights into application performance, complementing traditional logging. APM tracks transactions across services, helping you understand dependencies and identify performance bottlenecks.
Consider collecting metrics alongside logs. Metrics can provide insights into your system's performance, such as CPU usage, memory usage, and network traffic. If you're already collecting logs from your systems, adding metrics collection is usually a quick process.
Traces can provide even deeper insights into specific transactions or request paths, especially in cloud-native environments. They offer more contextual information and excel at tracking dependencies across services. However, implementing tracing is only possible for applications you own, and not all developers have fully embraced it yet.
A combined logging and tracing strategy is recommended, where traces provide coverage for newer instrumented apps, and logging supports legacy applications and systems you don't own the source code for.
Conclusion
Effective log management is essential for maintaining system reliability and performance in today's complex software environments. By following these best practices, you can optimize your log management process, reduce costs, and improve problem resolution times.
Key takeaways include:
- Ensure comprehensive log collection with a focus on normalization and common schemas.
- Use appropriate processing and enrichment techniques, balancing between structured and unstructured logs.
- Leverage full-text search and machine learning for efficient log analysis.
- Implement cost-effective storage strategies and smart data retention policies.
- Enhance your logging strategy with APM, metrics, and traces for a complete observability solution.
Continuously evaluate and adjust your strategies to keep pace with the growing volume and complexity of log data, and you'll be well-equipped to ensure the reliability, performance, and security of your applications and infrastructure.
Check out our other blogs:
- Build better Service Level Objectives (SLOs) from logs and metrics
- AWS VPC Flow log analysis with GenAI in Elastic
- Migrating 1 billion log lines from OpenSearch to Elasticsearch
- Pruning incoming log volumes with Elastic
Ready to get started? Use Elastic Observability on Elastic Cloud — the hosted Elasticsearch service that includes all of the latest features.
The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.