Installing Prometheus and Grafana with Docker

1. Pull the images

docker pull prom/prometheus
docker pull prom/pushgateway
docker pull grafana/grafana

2. Deploy Prometheus

2.1 Create prometheus.yml

global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:9090']

2.2 Start Prometheus

docker run -d \
    -p 9090:9090 \
    -v /Users/wangdongxing/docker/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
    -v /Users/wangdongxing/docker/prometheus/data:/prometheus \
    prom/prometheus

After it starts, open http://localhost:9090 in a browser to view the web UI.
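
Prometheus also exposes liveness and readiness endpoints that are handy for checking the deployment from a script; a minimal sketch:

```shell
# Health-check endpoints exposed by the Prometheus server itself:
PROM_URL="http://localhost:9090"
curl -s "$PROM_URL/-/healthy"   # liveness
curl -s "$PROM_URL/-/ready"     # readiness
```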

3. Start the Pushgateway

3.1 Start the container

docker run -d -p 9091:9091 prom/pushgateway

After it starts, open http://localhost:9091 in a browser to view the web UI.

3.2 Test data

Push a metric to the Pushgateway:

echo "word_count 1" | curl --data-binary @- http://localhost:9091/metrics/job/wdx_job
echo "word_count 1" | curl --data-binary @- http://localhost:9091/metrics/job/wdx_job/instance/wdx_instance1
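
A quick way to confirm the push landed is to read the series back. The commands below assume the Pushgateway from 3.1 on localhost:9091 and the Prometheus server on localhost:9090 (the second query only returns data once the scrape job from section 3.3 is in place):

```shell
PGW_URL="http://localhost:9091"
PROM_URL="http://localhost:9090"
# Read the pushed series straight back from the Pushgateway:
curl -s "$PGW_URL/metrics" | grep word_count
# After the scrape job from section 3.3 is configured, the same series is
# also queryable through the Prometheus HTTP API:
curl -s "$PROM_URL/api/v1/query?query=word_count"
```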

Push a more complex payload:

cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/wdx_job/instance/wdx_instance1
# TYPE word_count counter
word_count{label="tag1"} 42
# TYPE another_metric gauge
# HELP another_metric Just an example.
another_metric 2398.283
EOF

Delete pushed metrics:

curl -X DELETE http://localhost:9091/metrics/job/wdx_job
curl -X DELETE http://localhost:9091/metrics/job/wdx_job/instance/wdx_instance1
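
Each DELETE removes only the metrics under its exact grouping key: the first request clears the group {job="wdx_job"}, the second clears {job="wdx_job", instance="wdx_instance1"}. A quick check that the series is gone:

```shell
PGW_URL="http://localhost:9091"
# grep finds nothing once both groups have been deleted:
curl -s "$PGW_URL/metrics" | grep word_count || echo "word_count removed"
```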

3.3 Add a job to Prometheus

Edit prometheus.yml and add the new job:

global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# Scrape configurations: Prometheus itself, plus the Pushgateway job added below.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'wdx_job'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    # Keep the job/instance labels attached at push time instead of letting the
    # scrape overwrite them; recommended when scraping a Pushgateway.
    honor_labels: true

    static_configs:
      # Containers can reach the host via host.docker.internal on Docker Desktop;
      # in production, use a reachable IP + port instead.
      - targets: ['host.docker.internal:9091']
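
Prometheus reads prometheus.yml only at startup, so the new job is not picked up until the config is reloaded. A sketch of both options (the container name prometheus is an assumption; the run command in 2.2 did not set --name, so substitute the ID shown by docker ps):

```shell
# Option 1: restart the container so it re-reads the mounted prometheus.yml.
docker restart prometheus

# Option 2: if Prometheus was started with the --web.enable-lifecycle flag,
# the config can be reloaded without a restart:
PROM_URL="http://localhost:9090"
curl -s -X POST "$PROM_URL/-/reload"
```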

4. Start Grafana

docker run -d --name=grafana -p 3000:3000 grafana/grafana

After it starts, open http://localhost:3000 in a browser to view the web UI (default login: admin/admin).

For Grafana usage, see: https://blog.csdn.net/weixin_52270081/article/details/125845193
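
Besides the UI walkthrough in the linked post, the Prometheus data source can also be created through Grafana's HTTP API; a sketch assuming the default admin/admin credentials and the address scheme from section 3.3:

```shell
GRAFANA_URL="http://localhost:3000"
# Create a Prometheus data source; "proxy" access means Grafana's backend
# queries Prometheus, so host.docker.internal is resolved inside the
# Grafana container:
curl -s -u admin:admin -X POST "$GRAFANA_URL/api/datasources" \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "Prometheus",
        "type": "prometheus",
        "url": "http://host.docker.internal:9090",
        "access": "proxy"
      }'
```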

5. Caveats

With this Docker-based deployment, the three containers cannot reach each other via localhost; they must go through the host, i.e. host address + port:

host.docker.internal:<port>

Note that host.docker.internal is built into Docker Desktop (macOS/Windows); on Linux, start containers with --add-host=host.docker.internal:host-gateway, or use a reachable host IP instead.
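
An alternative that avoids going through the host entirely: put the containers on a user-defined Docker network, where they can reach each other by container name. A sketch (the network and container names are illustrative):

```shell
NET="monitoring"
docker network create "$NET"
docker run -d --name prometheus  --network "$NET" -p 9090:9090 \
    -v /Users/wangdongxing/docker/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus
docker run -d --name pushgateway --network "$NET" -p 9091:9091 prom/pushgateway
# prometheus.yml can then target the container by name:
#   - targets: ['pushgateway:9091']
```
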
posted on 2023-03-15 14:36  王冬冬冬不烦恼