Build Telemetry for Distributed Services: Jaeger

GitHub: https://github.com/jaegertracing/jaeger

Website: https://www.jaegertracing.io/

 

Jaeger: open source, end-to-end distributed tracing

Monitor and troubleshoot transactions in complex distributed systems

Jaeger is a Cloud Native Computing Foundation incubating project.

 

Uber published a blog post, Evolving Distributed Tracing at Uber, where they explain the history and reasons for the architectural choices made in Jaeger. Yuri Shkuro, creator of Jaeger, also published a book Mastering Distributed Tracing that covers in-depth many aspects of Jaeger design and operation, as well as distributed tracing in general.

 

Why Jaeger?

As on-the-ground microservice practitioners are quickly realizing, the majority of operational problems that arise when moving to a distributed architecture are ultimately grounded in two areas: networking and observability. It is simply an orders-of-magnitude larger problem to network and debug a set of intertwined distributed services versus a single monolithic application.

 

Problems that Jaeger addresses

Jaeger is used for monitoring and troubleshooting microservices-based distributed systems, including:

  • Distributed context propagation
  • Distributed transaction monitoring
  • Root cause analysis
  • Service dependency analysis
  • Performance / latency optimization

 

Kubernetes and OpenShift

Jaeger runs well on Kubernetes and OpenShift: the project publishes Kubernetes deployment templates and an Operator, as well as OpenShift templates, for running the backend components.

 

Features

  • Discover the architecture of the whole system via a data-driven dependency diagram.
  • View request timeline and errors; understand how the app works.
  • Find sources of latency and lack of concurrency.
  • Highly contextualized logging.
  • Use baggage propagation to:

    • Diagnose inter-request contention (queueing).
    • Attribute time spent in a service.
  • Use open source libraries with OpenTracing integration to get vendor-neutral instrumentation for free.

  • OpenTracing compatible data model and instrumentation libraries
  • Uses consistent upfront sampling with individual per service/endpoint probabilities
  • Multiple storage backends: Cassandra, Elasticsearch, memory.
  • Adaptive sampling (coming soon)
  • Post-collection data processing pipeline (coming soon)

 

Technical Specs

 

Span

A span represents a logical unit of work in Jaeger that has an operation name, the start time of the operation, and the duration. Spans may be nested and ordered to model causal relationships.
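
To make this concrete, here is a minimal sketch using the OpenTracing Go API, which the Jaeger client libraries implement. The operation names ("format-report", "render-section") are invented for illustration, and the global tracer is a no-op unless a real one has been installed:

package main

import (
    "github.com/opentracing/opentracing-go"
)

func formatReport(tracer opentracing.Tracer) {
    // A span: an operation name plus a start time and duration,
    // recorded when Finish() is called.
    parent := tracer.StartSpan("format-report")
    defer parent.Finish()

    // A nested span; ChildOf records the causal relationship.
    child := tracer.StartSpan("render-section",
        opentracing.ChildOf(parent.Context()))
    child.Finish()
}

func main() {
    // GlobalTracer() returns a no-op tracer until a real one is set.
    formatReport(opentracing.GlobalTracer())
}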

 

Trace

A trace is a data/execution path through the system, and can be thought of as a directed acyclic graph of spans.
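
For example, a request that fans out to two downstream services yields a single trace of three spans (all service and operation names below are hypothetical):

frontend      GET /dispatch    0ms ............................ 300ms
 ├─ driver-svc  FindNearest      10ms ........ 150ms
 └─ route-svc   GetETA                         160ms ........ 290ms

The root span covers the whole request, while the child spans show where that time is actually spent.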

 

Query

Query is a service that retrieves traces from storage and hosts a UI to display them.

Reference: the OpenTracing standard (https://opentracing.io/)

 

Components

Jaeger can be deployed either as an all-in-one binary, where all Jaeger backend components run in a single process, or as a scalable distributed system, discussed below. There are two main deployment options:

  1. Collectors write directly to storage.
  2. Collectors write to Kafka as a preliminary buffer.

 Illustration of direct-to-storage architecture

 

Illustration of architecture with Kafka as intermediate buffer

This section details the constituent parts of Jaeger and how they relate to each other. It is arranged by the order in which spans from your application interact with them.

 

Jaeger client libraries

Jaeger clients are language-specific implementations of the OpenTracing API. They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Flask, Dropwizard, gRPC, and many more, that are already integrated with OpenTracing.
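
As a minimal sketch (not the only way to do it), initializing a tracer with the Go client, jaeger-client-go, looks roughly like this; the service name and agent address are placeholders:

package main

import (
    "io"
    "log"

    "github.com/opentracing/opentracing-go"
    "github.com/uber/jaeger-client-go"
    jaegercfg "github.com/uber/jaeger-client-go/config"
)

func initTracer() (opentracing.Tracer, io.Closer) {
    cfg := jaegercfg.Configuration{
        ServiceName: "my-service", // placeholder
        Sampler: &jaegercfg.SamplerConfig{
            Type:  jaeger.SamplerTypeConst, // sample everything (dev only)
            Param: 1,
        },
        Reporter: &jaegercfg.ReporterConfig{
            LocalAgentHostPort: "localhost:6831", // UDP port of jaeger-agent
            LogSpans:           true,
        },
    }
    tracer, closer, err := cfg.NewTracer()
    if err != nil {
        log.Fatalf("cannot initialize Jaeger tracer: %v", err)
    }
    opentracing.SetGlobalTracer(tracer)
    return tracer, closer
}

func main() {
    _, closer := initTracer()
    defer closer.Close()
}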

An instrumented service creates spans when receiving new requests and attaches context information (trace id, span id, and baggage) to outgoing requests. Only ids and baggage are propagated with requests; all other information that compose a span like operation name, logs, etc. are not propagated. Instead sampled spans are transmitted out of process asynchronously, in the background, to Jaeger Agents.
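
A sketch of what this looks like with the Go client: Inject copies only the trace id, span id, and baggage into the outgoing HTTP headers, while everything else about the span is reported asynchronously. The downstream URL and baggage key here are hypothetical:

package main

import (
    "net/http"

    "github.com/opentracing/opentracing-go"
)

func callDownstream(span opentracing.Span) error {
    req, err := http.NewRequest("GET", "http://downstream:8080/work", nil)
    if err != nil {
        return err
    }

    // Baggage: key/value pairs that travel with the trace context.
    span.SetBaggageItem("tenant", "acme")

    // Inject writes only the trace id, span id, and baggage into the
    // request headers; operation name, logs, tags, etc. stay local and
    // are reported to the agent asynchronously.
    tracer := opentracing.GlobalTracer()
    if err := tracer.Inject(
        span.Context(),
        opentracing.HTTPHeaders,
        opentracing.HTTPHeadersCarrier(req.Header),
    ); err != nil {
        return err
    }

    _, err = http.DefaultClient.Do(req)
    return err
}

func main() {
    span := opentracing.GlobalTracer().StartSpan("call-downstream")
    defer span.Finish()
    _ = callDownstream(span)
}

On the receiving side, the downstream service calls tracer.Extract with the same format and carrier to continue the trace.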

The instrumentation has very little overhead, and is designed to be always enabled in production.

Note that while all traces are generated, only a few are sampled. Sampling a trace marks it for further processing and storage. By default, the Jaeger client samples 0.1% of traces (1 in 1,000), and it can also retrieve sampling strategies from the agent.
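
As a sketch, both behaviors map onto the SamplerConfig from the initialization example above; the 0.001 rate mirrors the 1-in-1,000 default, and the remote type pulls strategies from the agent's sampling endpoint (served on port 5778 by default):

package tracing

import (
    "github.com/uber/jaeger-client-go"
    jaegercfg "github.com/uber/jaeger-client-go/config"
)

// probabilisticSampler spells out the documented default rate explicitly.
func probabilisticSampler() *jaegercfg.SamplerConfig {
    return &jaegercfg.SamplerConfig{
        Type:  jaeger.SamplerTypeProbabilistic, // "probabilistic"
        Param: 0.001,                           // sample 1 trace in 1,000
    }
}

// remoteSampler lets the client fetch sampling strategies from the agent.
func remoteSampler() *jaegercfg.SamplerConfig {
    return &jaegercfg.SamplerConfig{
        Type:              jaeger.SamplerTypeRemote, // "remote"
        SamplingServerURL: "http://localhost:5778/sampling",
    }
}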

 

Agent

The Jaeger agent is a network daemon that listens for spans sent over UDP, which it batches and sends to the collector. It is designed to be deployed to all hosts as an infrastructure component. The agent abstracts the routing and discovery of the collectors away from the client.

 

Collector

 

The Jaeger collector receives traces from Jaeger agents and runs them through a processing pipeline. Currently our pipeline validates traces, indexes them, performs any transformations, and finally stores them.

Jaeger’s storage is a pluggable component which currently supports Cassandra, Elasticsearch, and Kafka (as a buffer).

 

Ingester

Ingester is a service that reads from a Kafka topic and writes to a storage backend (Cassandra or Elasticsearch).

 

Monitoring Jaeger

Jaeger itself is a distributed, microservices-based system. If you run it in production, you will likely want to set up adequate monitoring for the different components, e.g. to ensure that the backend is not saturated by too much tracing data.

Metrics

By default, Jaeger microservices expose metrics in Prometheus format. This is controlled by the following command-line options:

  • --metrics-backend controls how the measurements are exposed. The default value is prometheus; another option is expvar, the Go standard mechanism for exposing process-level statistics.
  • --metrics-http-route specifies the name of the HTTP endpoint used to scrape the metrics (/metrics by default).

Each Jaeger component exposes the metrics scraping endpoint on one of the HTTP ports it already serves:

Component          Port
jaeger-agent       14271
jaeger-collector   14269
jaeger-query       16687
jaeger-ingester    14270

Logging

Jaeger components log only to standard out, using the structured logging library go.uber.org/zap, configured to write log lines as JSON-encoded strings, for example:

{"level":"info","ts":1517621222.261759,"caller":"healthcheck/handler.go:99","msg":"Health Check server started","http-port":14269,"status":"unavailable"}

The log level can be adjusted via the --log-level command-line switch; the default level is info.
