Monitoring Kubernetes Clusters with Prometheus Operator

[TOC]


Introduction to Kubernetes Operators

An Operator, a pattern developed by CoreOS, extends the Kubernetes API with an application-specific controller that creates, configures, and manages complex stateful applications such as databases, caches, and monitoring systems. Operators build on Kubernetes' concepts of resources and controllers, but also encode application-specific operational knowledge: an Operator that manages a database, for example, must understand in depth how that database is actually operated. The key to building an Operator is the design of its CRDs (custom resources).

A CRD is an extension of the Kubernetes API. Every resource in Kubernetes is a collection of API objects; the spec sections we write in YAML files are definitions of those resource objects, and custom resources can be operated on with kubectl just like Kubernetes' built-in resources.

An Operator codifies the operational knowledge a human operator has about a piece of software, and uses Kubernetes' powerful abstractions to manage that software at scale. CoreOS officially provides several Operator implementations, including today's protagonist: the Prometheus Operator. The core of an Operator rests on two Kubernetes concepts:

  • Resource: the definition of an object's desired state
  • Controller: observe, analyze, and act, reconciling resources toward the desired state

Of course, if we have a matching need we can implement an Operator of our own. Next, let's walk through how to use the Prometheus Operator in detail.

Introduction to the Prometheus Operator

The Prometheus Operator for Kubernetes provides easy monitoring definitions for Kubernetes services and for deploying and managing Prometheus instances.

Once installed, the Prometheus Operator provides the following features:

  • Create/destroy: easily launch a Prometheus instance in a Kubernetes namespace for a specific application or team.
  • Simple configuration: configure the fundamentals of Prometheus, such as versions, persistence, retention policies, and replicas, as native Kubernetes resources.
  • Target services via labels: automatically generate monitoring target configurations from familiar Kubernetes label queries; no need to learn a Prometheus-specific configuration language.

The Prometheus Operator architecture is shown below:

(Figure: Prometheus Operator architecture diagram)

The components in this architecture run in the Kubernetes cluster as different kinds of resources, and each plays its own role:

  • Operator: the Operator deploys and manages Prometheus Server according to custom resources (Custom Resource Definitions / CRDs), and watches events on those custom resources so it can react to changes; it is the control center of the whole system.
  • Prometheus: the Prometheus resource declaratively describes the desired state of a Prometheus deployment.
  • Prometheus Server: the Prometheus Server cluster the Operator deploys according to the Prometheus custom resource; the custom resource can be seen as a handle for managing the StatefulSets behind the Prometheus Server cluster.
  • ServiceMonitor: also a custom resource, describing the list of targets Prometheus should monitor. It selects the endpoints of Services by labels, and Prometheus Server scrapes metrics through the selected Services.
  • Service: the Service resource fronts the metrics-exposing Pods in the cluster and is what a ServiceMonitor selects so Prometheus Server can scrape them. Simply put, these are the objects Prometheus monitors, e.g. a Node Exporter Service, a MySQL Exporter Service, and so on.
  • Alertmanager: also a custom resource type; the Operator deploys an Alertmanager cluster according to its description.

Why prometheus-operator is needed

Prometheus actively pulls its targets, and in Kubernetes pod IPs change whenever pods are rescheduled, which nobody can keep up with by hand. DNS-based service discovery exists, but adding new targets is still somewhat cumbersome.

Prometheus-operator itself is just a set of user-defined CRDs plus the controller that implements them. The prometheus-operator controller, running with RBAC permissions, watches these custom resources for change events and, following their definitions, automates management of the Prometheus Server itself and of its configuration.

In Kubernetes we use Deployment, DaemonSet, and StatefulSet to manage application workloads, Service and Ingress to manage access to applications, and ConfigMap and Secret to manage application configuration. Every create, update, and delete we perform on these resources is turned into an event (Event), and Kubernetes' Controller Manager watches these events and triggers the tasks needed to satisfy the user's desired state. This style is called declarative: users only care about the application's final state and Kubernetes takes care of the rest, which greatly reduces the complexity of configuration management.

Besides these native resources, Kubernetes also lets users add their own custom resources (Custom Resources) and extend Kubernetes by implementing custom controllers, adding new features and objects to Kubernetes without having to fork it.

Because a svc load-balances, the smallest monitoring unit in Kubernetes is effectively the group of pods behind a svc, so prometheus-operator created the corresponding CRD, kind: ServiceMonitor. In a ServiceMonitor you just declare the labels of the svcs to be selected, the URL path of their metrics, and the namespaces to search.

What are metrics?

For example, to look at etcd's metrics, first check etcd's runtime flags for the relevant values. Here all my flags live in one yml file; if yours don't, check the systemd unit or the process arguments for the relevant flags and values.

[root@k8s-m1 ~]# ps aux | grep -P '/etc[d] '
root 13531 2.8 0.8 10631072 140788 ? Ssl 2018 472:58 /usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
[root@k8s-m1 ~]# cat /etc/etcd/etcd.config.yml
...
listen-client-urls: 'https://172.16.0.2:2379'
...
client-transport-security:
  ca-file: '/etc/etcd/ssl/etcd-ca.pem'
  cert-file: '/etc/etcd/ssl/etcd.pem'
  key-file: '/etc/etcd/ssl/etcd-key.pem'
...

We need two pieces of information:

  • the https URL from listen-client-urls, here https://172.16.0.2:2379
  • the client certificates it accepts

Then run curl against that https URL with the corresponding certificate paths:

curl --cacert /etc/etcd/ssl/etcd-ca.pem --cert /etc/etcd/ssl/etcd.pem --key /etc/etcd/ssl/etcd-key.pem https://172.16.0.2:2379/metrics

Alternatively, etcd can be started with the flag --listen-metrics-urls http://interface_IP:port to serve metrics on a non-https port that requires no certificates.
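For instance, a sketch assuming etcd was started with --listen-metrics-urls=http://0.0.0.0:2381 (the port here is only an example):

# the plain-http metrics port needs no client certificates
curl http://172.16.0.2:2381/metrics

Either way, we will see etcd's metrics output like the following: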

....
grpc_server_started_total{grpc_method="RoleList",grpc_service="etcdserverpb.Auth",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="RoleRevokePermission",grpc_service="etcdserverpb.Auth",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="Snapshot",grpc_service="etcdserverpb.Maintenance",grpc_type="server_stream"} 0
grpc_server_started_total{grpc_method="Status",grpc_service="etcdserverpb.Maintenance",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="Txn",grpc_service="etcdserverpb.KV",grpc_type="unary"} 259160
grpc_server_started_total{grpc_method="UserAdd",grpc_service="etcdserverpb.Auth",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="UserChangePassword",grpc_service="etcdserverpb.Auth",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="UserDelete",grpc_service="etcdserverpb.Auth",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="UserGet",grpc_service="etcdserverpb.Auth",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="UserGrantRole",grpc_service="etcdserverpb.Auth",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="UserList",grpc_service="etcdserverpb.Auth",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="UserRevokeRole",grpc_service="etcdserverpb.Auth",grpc_type="unary"} 0
grpc_server_started_total{grpc_method="Watch",grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"} 86
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 28145.45
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 65536
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 121
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.46509824e+08
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.54557786888e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.0886217728e+10

Similarly, kube-apiserver exposes metrics too:

$ kubectl get --raw /metrics
...
rest_client_request_latency_seconds_bucket{url="https://[::1]:6443/apis?timeout=32s",verb="GET",le="0.512"} 39423
rest_client_request_latency_seconds_bucket{url="https://[::1]:6443/apis?timeout=32s",verb="GET",le="+Inf"} 39423
rest_client_request_latency_seconds_sum{url="https://[::1]:6443/apis?timeout=32s",verb="GET"} 24.781942557999795
rest_client_request_latency_seconds_count{url="https://[::1]:6443/apis?timeout=32s",verb="GET"} 39423
# HELP rest_client_requests_total Number of HTTP requests, partitioned by status code, method, and host.
# TYPE rest_client_requests_total counter
rest_client_requests_total{code="200",host="[::1]:6443",method="GET"} 2.032031e+06
rest_client_requests_total{code="200",host="[::1]:6443",method="PUT"} 1.106921e+06
rest_client_requests_total{code="201",host="[::1]:6443",method="POST"} 38
rest_client_requests_total{code="401",host="[::1]:6443",method="GET"} 17378
rest_client_requests_total{code="404",host="[::1]:6443",method="GET"} 3.546509e+06
rest_client_requests_total{code="409",host="[::1]:6443",method="POST"} 29
rest_client_requests_total{code="409",host="[::1]:6443",method="PUT"} 20
rest_client_requests_total{code="422",host="[::1]:6443",method="POST"} 1
rest_client_requests_total{code="503",host="[::1]:6443",method="GET"} 5
# HELP ssh_tunnel_open_count Counter of ssh tunnel total open attempts
# TYPE ssh_tunnel_open_count counter
ssh_tunnel_open_count 0
# HELP ssh_tunnel_open_fail_count Counter of ssh tunnel failed open attempts
# TYPE ssh_tunnel_open_fail_count counter
ssh_tunnel_open_fail_count 0

This is the metrics format defined by prometheus; by default it is served at the /metrics path of an http(s) URL. The metrics are either produced by the program itself (a built-in module or custom instrumentation) or by one of the official exporters (node-exporter, mysqld-exporter, memcached_exporter, ...), which gathers the information to be monitored, occupies a web port, and outputs it in the metrics format. The prometheus server then scrapes each target's metrics and stores them (in its TSDB). Users can query the data on prometheus's web UI with PromQL (prometheus's query language) or via the API (which is also where Grafana gets its data). You can also have targets push to a pushgateway as a central collection point and let prometheus scrape the pushgateway (so the pushgateway plays a role similar to Zabbix's proxy).
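As a quick illustration of the API route, here is a hedged example of evaluating a PromQL expression over Prometheus's HTTP API (the server address is an assumption; adjust it for your deployment):

# Returns a JSON document with the current value of process_open_fds
# for every scraped target; this is the same API Grafana queries.
curl 'http://localhost:9090/api/v1/query?query=process_open_fds'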

Deployment

Installing with helm

Fetch the chart:

helm search prometheus-operator
helm fetch stable/prometheus-operator

After unpacking, edit the values file (see the official GitHub repo), then install:

helm install --name prometheus-operator --namespace monitoring -f values.yaml ./

Deploying from source

First grab the relevant files; the rest of this walkthrough follows them. Just pull with a git client; the repo is around 30 MB, so without a proxy it may be nearly impossible to pull.

git clone https://github.com/coreos/prometheus-operator.git

If you can't pull it: the machines in any katacoda course's web terminal have a docker client. git clone there, build the files into an alpine image, push it to dockerhub, then on your own machine docker run that image and docker cp the files out to the host.

The custom resources introduced by the Prometheus Operator include:

  • Prometheus
  • ServiceMonitor
  • Alertmanager

Once the prometheus-operator (that is, the controller watching events on the three CRDs above) is created, users can declaratively create resources like kind: Prometheus. Let's deploy a simple example to learn prometheus-operator.

Creating the prometheus-operator pod

With the files in hand, first create prometheus-operator:

$ cd prometheus-operator
$ kubectl apply -f bundle.yaml
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
serviceaccount/prometheus-operator created

Confirm the pod is running; we can also see that the operator pod, with RBAC in place, has created an APIService in the cluster:

$ kubectl get pod
NAME READY STATUS RESTARTS AGE
prometheus-operator-6db8dbb7dd-djj6s 1/1 Running 0 1m
$ kubectl get APIService | grep monitor
v1.monitoring.coreos.com 2018-10-09T10:49:47Z

Inspect this APIService:

$ kubectl get --raw /apis/monitoring.coreos.com/v1
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "monitoring.coreos.com/v1",
  "resources": [
    {
      "name": "alertmanagers",
      "singularName": "alertmanager",
      "namespaced": true,
      "kind": "Alertmanager",
      "verbs": [
        "delete",
        "deletecollection",
        "get",
        "list",
        "patch",
        "create",
        "update",
        "watch"
      ]
    },
    {
      "name": "prometheuses",
      "singularName": "prometheus",
      "namespaced": true,
      "kind": "Prometheus",
      "verbs": [
        "delete",
        "deletecollection",
        "get",
        "list",
        "patch",
        "create",
        "update",
        "watch"
      ]
    },
    {
      "name": "servicemonitors",
      "singularName": "servicemonitor",
      "namespaced": true,
      "kind": "ServiceMonitor",
      "verbs": [
        "delete",
        "deletecollection",
        "get",
        "list",
        "patch",
        "create",
        "update",
        "watch"
      ]
    },
    {
      "name": "prometheusrules",
      "singularName": "prometheusrule",
      "namespaced": true,
      "kind": "PrometheusRule",
      "verbs": [
        "delete",
        "deletecollection",
        "get",
        "list",
        "patch",
        "create",
        "update",
        "watch"
      ]
    }
  ]
}

This is because bundle.yaml contains the following ClusterRole, with a matching ClusterRoleBinding, giving prometheus-operator permission for all operations on these CRDs in the monitoring.coreos.com apiGroup:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-operator
rules:
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - alertmanagers
  - prometheuses
  - prometheuses/finalizers
  - alertmanagers/finalizers
  - servicemonitors
  - prometheusrules
  verbs:
  - '*'

And the pod's log shows the operator also created the corresponding CRDs in the cluster:

$ kubectl logs prometheus-operator-6db8dbb7dd-dkhxc
ts=2018-10-09T11:21:09.389340424Z caller=main.go:165 msg="Starting Prometheus Operator version '0.26.0'."
level=info ts=2018-10-09T11:21:09.491464524Z caller=operator.go:377 component=prometheusoperator msg="connection established" cluster-version=v1.11.3
level=info ts=2018-10-09T11:21:09.492679498Z caller=operator.go:209 component=alertmanageroperator msg="connection established" cluster-version=v1.11.3
level=info ts=2018-10-09T11:21:12.085147219Z caller=operator.go:624 component=alertmanageroperator msg="CRD created" crd=Alertmanager
level=info ts=2018-10-09T11:21:12.085265548Z caller=operator.go:1420 component=prometheusoperator msg="CRD created" crd=Prometheus
level=info ts=2018-10-09T11:21:12.099210714Z caller=operator.go:1420 component=prometheusoperator msg="CRD created" crd=ServiceMonitor
level=info ts=2018-10-09T11:21:12.118721976Z caller=operator.go:1420 component=prometheusoperator msg="CRD created" crd=PrometheusRule
level=info ts=2018-10-09T11:21:15.182780757Z caller=operator.go:225 component=alertmanageroperator msg="CRD API endpoints ready"
level=info ts=2018-10-09T11:21:15.383456425Z caller=operator.go:180 component=alertmanageroperator msg="successfully synced all caches"
$ kubectl get crd
NAME CREATED AT
alertmanagers.monitoring.coreos.com 2018-10-09T11:21:11Z
prometheuses.monitoring.coreos.com 2018-10-09T11:21:11Z
prometheusrules.monitoring.coreos.com 2018-10-09T11:21:12Z
servicemonitors.monitoring.coreos.com 2018-10-09T11:21:12Z

The related CRDs

The four CRDs serve the following purposes:

  • Prometheus: the Prometheus Server cluster the Operator deploys according to a kind: Prometheus custom resource; think of this custom resource as a special handle for the StatefulSets that manage Prometheus Server.
  • ServiceMonitor: a Kubernetes custom resource (a CRD, like kind: Prometheus) that describes the target list for a Prometheus Server. The Operator watches it for changes, dynamically updates the Prometheus Server's scrape targets, and has the prometheus server reload its configuration (prometheus exposes a reload HTTP endpoint, /-/reload). The resource mainly selects the endpoints of Services by labels via its Selector, and Prometheus Server pulls metrics through those Services; the target must serve metrics-format data at an HTTP URL, and a ServiceMonitor can also set the target's metrics URL path.
  • Alertmanager: the Prometheus Operator manages not only Prometheus Server but also Alertmanager, likewise described by a kind: Alertmanager custom resource from which the Operator deploys an Alertmanager cluster.
  • PrometheusRule: with vanilla Prometheus we create alert rule files by hand and load them declaratively in the Prometheus configuration. Under the Prometheus Operator model, alert rules also become resources created declaratively through the Kubernetes API. After a rule is created, associate it in the Prometheus resource with a ruleSelector that matches the PrometheusRule's labels, just as serviceMonitorSelector does for ServiceMonitors.

Deploying kind: Prometheus

Now that we have the prometheus CRD, deploying a prometheus server takes nothing more than the following declaration:

$ cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  serviceAccountName: prometheus
  resources:
    requests:
      memory: 400Mi
EOF

Because of load balancing, the group of pods behind one svc is the smallest monitoring unit; to monitor a svc's metrics, just declare a ServiceMonitor for it.

Deploying a set of pods and their svc

First we deploy a simple Deployment whose image exposes metrics: the main process in the image serves metrics information on port 8080.

$ cat<<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: zhangguanzhang/instrumented_app
        ports:
        - name: web
          containerPort: 8080
EOF

Create the matching svc:

$ cat<<EOF | kubectl apply -f -
kind: Service
apiVersion: v1
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  selector:
    app: example-app
  ports:
  - name: web
    port: 8080
EOF

Deploying kind: ServiceMonitor

Now create a ServiceMonitor that tells the prometheus server to monitor the metrics of the group of pods behind svcs carrying the label app: example-app.

$ cat<<EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
EOF

By default a ServiceMonitor and the objects it monitors must live in the same namespace. To associate svcs in a different ns, set the following:

spec:
  namespaceSelector:
    matchNames:
    - target_ns_name

If you want the ServiceMonitor to match labels in any namespace, define:

spec:
  namespaceSelector:
    any: true

If the target to be monitored has BasicAuth enabled, the ServiceMonitor can define basicAuth in its endpoints as shown below; the password and username values come from a Secret named basic-auth in the same ns:

spec:
  endpoints:
  - basicAuth:
      password:
        name: basic-auth
        key: password
      username:
        name: basic-auth
        key: user
    port: web
---
apiVersion: v1
kind: Secret
metadata:
  name: basic-auth
type: Opaque
data:
  user: dXNlcgo= # base64-encoded username
  password: cGFzc3dkCg== # base64-encoded password
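
For reference, a Secret like this can also be created without hand-encoding base64; a sketch with the same hypothetical credentials, where kubectl does the encoding for you (the trailing newline the literal values above happen to include aside):

kubectl create secret generic basic-auth --from-literal=user=user --from-literal=password=passwd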

Note that when I created the prometheus server above, it included these values:

serviceMonitorSelector:
  matchLabels:
    team: frontend

As the wording suggests, this tells the prometheus server which ServiceMonitors to select, the same idea as a svc selecting pods; a cluster might run many prometheus servers, each monitoring the ServiceMonitors it selects. If you want a single prometheus server to monitor everything, leave spec.serviceMonitorSelector: {} empty, and do the same for the namespace scope with spec.serviceMonitorNamespaceSelector: {}; we will see both values set in the official prometheus example later.
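As a sketch, a Prometheus resource meant to select every ServiceMonitor in every namespace would leave both selectors empty (the name here is hypothetical):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus-all
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector: {}          # empty selector: all ServiceMonitors...
  serviceMonitorNamespaceSelector: {} # ...in all namespaces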

Grant the prometheus server the relevant RBAC permissions:

$ cat<<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: default
EOF

Create a NodePort svc so we can conveniently reach prometheus's web UI; NodePort is not recommended in production:

$ cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  type: NodePort
  ports:
  - name: web
    nodePort: 30900
    port: 9090
    protocol: TCP
    targetPort: web
  selector:
    prometheus: prometheus
EOF

Open a browser at ip:30900 and go to the Targets page: the target is now being scraped, and the corresponding configuration has been generated and loaded in the config page.

Clean up everything above first; then we'll deploy prometheus-operator properly, using the official full set of yaml files:

kubectl delete svc prometheus example-app
kubectl delete ClusterRoleBinding prometheus
kubectl delete ClusterRole prometheus
kubectl delete ServiceMonitor example-app
kubectl delete deploy example-app
kubectl delete sa prometheus
kubectl delete prometheus prometheus
kubectl delete -f bundle.yaml

Deploying the official prometheus-operator

Upstream puts all the files in one directory; here I sort them into categories first.

cd contrib/kube-prometheus/manifests/
mkdir -p operator node-exporter alertmanager grafana kube-state-metrics prometheus serviceMonitor adapter
mv *-serviceMonitor* serviceMonitor/
mv 0prometheus-operator* operator/
mv grafana-* grafana/
mv kube-state-metrics-* kube-state-metrics/
mv alertmanager-* alertmanager/
mv node-exporter-* node-exporter/
mv prometheus-adapter* adapter/
mv prometheus-* prometheus/
$ ll
total 40
drwxr-xr-x 9 root root 4096 Jan 6 14:19 ./
drwxr-xr-x 9 root root 4096 Jan 6 14:15 ../
-rw-r--r-- 1 root root 60 Jan 6 14:15 00namespace-namespace.yaml
drwxr-xr-x 3 root root 4096 Jan 6 14:19 adapter/
drwxr-xr-x 3 root root 4096 Jan 6 14:19 alertmanager/
drwxr-xr-x 2 root root 4096 Jan 6 14:17 grafana/
drwxr-xr-x 2 root root 4096 Jan 6 14:17 kube-state-metrics/
drwxr-xr-x 2 root root 4096 Jan 6 14:18 node-exporter/
drwxr-xr-x 2 root root 4096 Jan 6 14:17 operator/
drwxr-xr-x 2 root root 4096 Jan 6 14:19 prometheus/
drwxr-xr-x 2 root root 4096 Jan 6 14:17 serviceMonitor/

Deploying the operator

First create the ns and the operator. Pulls from quay.io are slow; you can use my script to pre-pull (other images can be fetched the same way, but only before apply; once docker takes over the pull, you can only wait it out):

kubectl apply -f .
curl -s https://zhangguanzhang.github.io/bash/pull.sh | bash -s -- quay.io/coreos/prometheus-operator:v0.26.0
kubectl apply -f operator/

Confirm the state is Running before moving on; the image comes from the quay.io registry and may be very slow, so wait patiently or change it to one you can pull.

$ kubectl -n monitoring get pod
NAME READY STATUS RESTARTS AGE
prometheus-operator-56954c76b5-qm9ww 1/1 Running 0 24s

Deploying the full set of CRDs

Create the related resources; pulling these images may also take a long time:

kubectl apply -f adapter/
kubectl apply -f alertmanager/
kubectl apply -f node-exporter/
kubectl apply -f kube-state-metrics/
kubectl apply -f grafana/
kubectl apply -f prometheus/
kubectl apply -f serviceMonitor/

You can check the overall state with get. Because of the images this will take quite a while, so let's first look at a few of the pitfalls:

kubectl -n monitoring get all

Pitfalls

Pitfall 1

Watch out for a pitfall here. Whether the control plane is deployed from binaries or with a recent kubeadm, the prometheus server's Targets page shows kube-controller-manager and kube-scheduler as 0/0. This is because a serviceMonitor selects svcs by label, and we can see the corresponding serviceMonitors restrict their ns scope to kube-system:

$ grep -2 selector serviceMonitor/prometheus-serviceMonitorKube*
serviceMonitor/prometheus-serviceMonitorKubeControllerManager.yaml- matchNames:
serviceMonitor/prometheus-serviceMonitorKubeControllerManager.yaml- - kube-system
serviceMonitor/prometheus-serviceMonitorKubeControllerManager.yaml: selector:
serviceMonitor/prometheus-serviceMonitorKubeControllerManager.yaml- matchLabels:
serviceMonitor/prometheus-serviceMonitorKubeControllerManager.yaml- k8s-app: kube-controller-manager
--
serviceMonitor/prometheus-serviceMonitorKubelet.yaml- matchNames:
serviceMonitor/prometheus-serviceMonitorKubelet.yaml- - kube-system
serviceMonitor/prometheus-serviceMonitorKubelet.yaml: selector:
serviceMonitor/prometheus-serviceMonitorKubelet.yaml- matchLabels:
serviceMonitor/prometheus-serviceMonitorKubelet.yaml- k8s-app: kubelet
--
serviceMonitor/prometheus-serviceMonitorKubeScheduler.yaml- matchNames:
serviceMonitor/prometheus-serviceMonitorKubeScheduler.yaml- - kube-system
serviceMonitor/prometheus-serviceMonitorKubeScheduler.yaml: selector:
serviceMonitor/prometheus-serviceMonitorKubeScheduler.yaml- matchLabels:
serviceMonitor/prometheus-serviceMonitorKubeScheduler.yaml- k8s-app: kube-scheduler

But kube-system by default only has these two svcs, and neither carries the labels above:

$ kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 139m
kubelet ClusterIP None <none> 10250/TCP 103m

Corresponding eps (without any labels) do get created, though; no idea what upstream is thinking here. Note also there is no kubelet ep here; the binary deployment from my blog would have one:

$ kubectl get ep -n kube-system
NAME ENDPOINTS AGE
kube-controller-manager <none> 139m
kube-dns 10.244.1.2:53,10.244.8.10:53,10.244.1.2:53 + 1 more... 139m
kube-scheduler <none> 139m

Solution

So we create svcs for the two control-plane components. The names don't matter; what matters is that the svc labels can be selected by the serviceMonitors. The svcs' selector below matches the labels kubeadm puts on its static pods; for a binary deployment, drop the selector part of both svcs:

apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: kube-controller-manager
  labels:
    k8s-app: kube-controller-manager
spec:
  selector:
    component: kube-controller-manager
  type: ClusterIP
  clusterIP: None
  ports:
  - name: http-metrics
    port: 10252
    targetPort: 10252
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: kube-scheduler
  labels:
    k8s-app: kube-scheduler
spec:
  selector:
    component: kube-scheduler
  type: ClusterIP
  clusterIP: None
  ports:
  - name: http-metrics
    port: 10251
    targetPort: 10251
    protocol: TCP

For a binary deployment we have to fill in the svcs' corresponding ep attributes by hand. My cluster is HA, so there are three addresses; treat this as a reference only, don't copy it blindly. Also, the ep's name and attributes must line up with the svc above:

apiVersion: v1
kind: Endpoints
metadata:
  labels:
    k8s-app: kube-controller-manager
  name: kube-controller-manager
  namespace: kube-system
subsets:
- addresses:
  - ip: 172.16.0.2
  - ip: 172.16.0.7
  - ip: 172.16.0.8
  ports:
  - name: http-metrics
    port: 10252
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    k8s-app: kube-scheduler
  name: kube-scheduler
  namespace: kube-system
subsets:
- addresses:
  - ip: 172.16.0.2
  - ip: 172.16.0.7
  - ip: 172.16.0.8
  ports:
  - name: http-metrics
    port: 10251
    protocol: TCP

I don't know why kubeadm deployments lack the kubelet ep; my blog's binary deployment does get a kubelet ep (apparently created by metrics-server). The following is for reference only; fill in the IPs to match your environment. Also, under kubeadm the kubelet's read-only metrics port (default 10255) is not open, so you can delete that part of the ep's ports:

apiVersion: v1
kind: Endpoints
metadata:
  labels:
    k8s-app: kubelet
  name: kubelet
  namespace: kube-system
subsets:
- addresses:
  - ip: 172.16.0.14
    targetRef:
      kind: Node
      name: k8s-n2
  - ip: 172.16.0.18
    targetRef:
      kind: Node
      name: k8s-n3
  - ip: 172.16.0.2
    targetRef:
      kind: Node
      name: k8s-m1
  - ip: 172.16.0.20
    targetRef:
      kind: Node
      name: k8s-n4
  - ip: 172.16.0.21
    targetRef:
      kind: Node
      name: k8s-n5
  ports:
  - name: http-metrics
    port: 10255
    protocol: TCP
  - name: cadvisor
    port: 4194
    protocol: TCP
  - name: https-metrics
    port: 10250
    protocol: TCP

As for reaching the prometheus server service, stop using the inefficient NodePort and put up an ingress controller; for deployment, see the IngressController post on my blog:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ing
  namespace: monitoring
spec:
  rules:
  - host: prometheus.monitoring.k8s.local
    http:
      paths:
      - backend:
          serviceName: prometheus-k8s
          servicePort: 9090
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ing
  namespace: monitoring
spec:
  rules:
  - host: grafana.monitoring.k8s.local
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alertmanager-ing
  namespace: monitoring
spec:
  rules:
  - host: alertmanager.monitoring.k8s.local
    http:
      paths:
      - backend:
          serviceName: alertmanager-main
          servicePort: 9093

Pitfall 2

Visiting the prometheus server web UI, we find that even with the svcs created and the corresponding ep information injected, the Targets page shows prometheus server's requests being refused.

On the host we find only 127.0.0.1 answers; the NIC IP doesn't (this was captured in a different environment, hence the 192 instead of the earlier 172 addresses):

$ hostname -i
192.168.15.223
$ curl -I http://192.168.15.223:10251/metrics
curl: (7) Failed connect to 192.168.15.223:10251; Connection refused
$ curl -I http://127.0.0.1:10251/metrics
HTTP/1.1 200 OK
Content-Length: 30349
Content-Type: text/plain; version=0.0.4
Date: Mon, 07 Jan 2019 13:33:50 GMT

Solution

Change the bind address of the control-plane components.

For a cluster launched with kubeadm, the init config.yml can include these options:

controllerManagerExtraArgs:
  address: 0.0.0.0
schedulerExtraArgs:
  address: 0.0.0.0

For an already-running cluster, the following change triggers a rolling update:

sed -ri '/--address/s#=.+#=0.0.0.0#' /etc/kubernetes/manifests/kube-*

For a binary deployment, check whether the components bind 0.0.0.0 and change it if not. With multiple NICs, if you only want to bind one NIC, use that host NIC's IP; 0.0.0.0 listens on the port on every NIC.
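For example, for a binary deployment the relevant flag would sit in the systemd units, roughly like this (a sketch; the unit path and the remaining flags depend on your setup):

# /usr/lib/systemd/system/kube-scheduler.service (excerpt)
ExecStart=/usr/local/bin/kube-scheduler \
  --address=0.0.0.0 \
  ...

followed by systemctl daemon-reload and a restart of the component.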

Monitoring MySQL

Before

Imagine monitoring a mysql service the traditional way. First install mysql-exporter to gather mysql metrics and expose them on a port, waiting for the prometheus service to pull them. Then go to Prometheus Server's prometheus.yaml and add a job for mysql-exporter under scrape_configs, with the exporter's address, port, and so on. And then restart the Prometheus service. That's what adding a single mysql monitoring task takes.
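Concretely, the traditional way means a hand-maintained job in prometheus.yaml plus a restart or reload; a minimal sketch (the host and port are hypothetical):

scrape_configs:
  - job_name: 'mysql-exporter'
    static_configs:
      - targets: ['192.168.0.1:9104']  # the mysql-exporter address, edited by hand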

Now

Now deploy Prometheus the Prometheus-Operator way and consider what adding a mysql monitor looks like. The first step is the same as before: deploy a mysql-exporter to gather the mysql metrics. Then write a ServiceMonitor that selects the just-deployed mysql-exporter through its labelSelector. Because the Operator deployed Prometheus with a default ServiceMonitor selector of prometheus: kube-prometheus, simply putting the label prometheus: kube-prometheus on the ServiceMonitor lets Prometheus select it. With those two steps, mysql monitoring is done: no Prometheus config file changes and no Prometheus restarts. Convenient, isn't it? When the Operator observes a ServiceMonitor change, it dynamically regenerates the Prometheus configuration file and guarantees the configuration takes effect in real time.

Installing mysql_exporter

Fetch the chart:

helm fetch stable/prometheus-mysql-exporter

Unpack it:

tar xf prometheus-mysql-exporter-0.5.2.tgz

Enter the unpacked directory and edit the values file:

# Default values for prometheus-mysql-exporter.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: "prom/mysqld-exporter"
  tag: "v0.11.0"
  pullPolicy: "IfNotPresent"

service:
  name: mysql-exporter
  type: ClusterIP
  externalPort: 9104
  internalPort: 9104

serviceMonitor:
  # enabled should be set to true to enable prometheus-operator discovery of this service
  enabled: true
  # interval is the interval at which metrics should be scraped
  # interval: 30s
  # scrapeTimeout is the timeout after which the scrape is ended
  # scrapeTimeout: 10s
  # additionalLabels is the set of additional labels to add to the ServiceMonitor
  additionalLabels: {}
  # release: prometheus
  jobLabel: ""
  targetLabels: []
  podTargetLabels: []

resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 400m
    memory: 500Mi
  requests:
    cpu: 300m
    memory: 200Mi

nodeSelector: {}

tolerations: []

affinity: {}

podLabels: {}

annotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics"
  prometheus.io/port: "9104"

collectors: {}
  # auto_increment.columns: false
  # binlog_size: false
  # engine_innodb_status: false
  # engine_tokudb_status: false
  # global_status: true
  # global_variables: true
  # info_schema.clientstats: false
  # info_schema.innodb_metrics: false
  # info_schema.innodb_tablespaces: false
  # info_schema.innodb_cmp: false
  # info_schema.innodb_cmpmem: false
  # info_schema.processlist: false
  # info_schema.processlist.min_time: 0
  # info_schema.query_response_time: false
  # info_schema.tables: true
  # info_schema.tables.databases: '*'
  # info_schema.tablestats: false
  # info_schema.schemastats: false
  # info_schema.userstats: false
  # perf_schema.eventsstatements: false
  # perf_schema.eventsstatements.digest_text_limit: 120
  # perf_schema.eventsstatements.limit: false
  # perf_schema.eventsstatements.timelimit: 86400
  # perf_schema.eventswaits: false
  # perf_schema.file_events: false
  # perf_schema.file_instances: false
  # perf_schema.indexiowaits: false
  # perf_schema.tableiowaits: false
  # perf_schema.tablelocks: false
  # perf_schema.replication_group_member_stats: false
  # slave_status: true
  # slave_hosts: false
  # heartbeat: false
  # heartbeat.database: heartbeat
  # heartbeat.table: heartbeat

# mysql connection params which build the DATA_SOURCE_NAME env var of the docker container
mysql:
  db: ""
  host: "192.168.0.1"
  param: ""
  pass: "*****"
  port: 3306
  protocol: ""
  user: "exporter"
  existingSecret: false

# cloudsqlproxy https://cloud.google.com/sql/docs/mysql/sql-proxy
cloudsqlproxy:
  enabled: false
  image:
    repo: "gcr.io/cloudsql-docker/gce-proxy"
    tag: "1.14"
    pullPolicy: "IfNotPresent"
  instanceConnectionName: "project:us-central1:dbname"
  port: "3306"
  credentials: '{
    "type": "service_account",
    "project_id": "project",
    "private_key_id": "KEYID1",
    "private_key": "-----BEGIN PRIVATE KEY-----\sdajsdnasd\n-----END PRIVATE KEY-----\n",
    "client_email": "user@project.iam.gserviceaccount.com",
    "client_id": "111111111",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/user%40project.iam.gserviceaccount.com"
  }'

Install:

helm install --name me-release -f values.yaml .

If serviceMonitor.enabled in the values file is false, we need to create the ServiceMonitor ourselves.

Creating the monitor

Next, let's see how to write servicemonitor.yaml ourselves. Create the file with vim servicemonitor.yaml:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor  # the resource type is ServiceMonitor
metadata:
  labels:
    prometheus: kube-prometheus  # prometheus discovers ServiceMonitors via prometheus: kube-prometheus by default, so with this label the prometheus service will find this ServiceMonitor
  name: prometheus-exporter-mysql
spec:
  jobLabel: app  # the value of the label named here becomes the job_name under scrape_config in the prometheus config, i.e. the Target; if omitted, it defaults to the service's name
  selector:
    matchLabels:  # the labels of the Services this ServiceMonitor matches; with matchLabels, a service is matched only when all labels below match; with matchExpressions, any service matching at least one label is selected
      app: prometheus-mysql-exporter  # the mysql-exporter service's labels include app: prometheus-mysql-exporter, so writing it here matches that service
  namespaceSelector:
    any: true  # match in all namespaces; to select services only in certain namespaces, use matchNames: []
    # matchNames: []
  endpoints:
  - port: mysql-exporter  # the mysql-exporter service exposes the mysql metrics on Port: mysql-exporter 9104/TCP, so we put mysql-exporter here
    interval: 30s  # scrape every 30s
    # path: /metrics  # HTTP path to scrape for metrics; defaults to /metrics
    honorLabels: true

Save and exit, then run kubectl create -f servicemonitor.yaml. After it is created, run kubectl get serviceMonitor to check for the ServiceMonitor we just created:

root@k8s1:~/mysql-exporter# kubectl create -f servicemonitor.yaml
servicemonitor.monitoring.coreos.com/prometheus-exporter-mysql created
root@k8s1:~/mysql-exporter# kubectl get serviceMonitor
NAME AGE
grafana 4d
kafka-release 5h
kafka-release-exporter 5h
kube-prometheus 23h
kube-prometheus-alertmanager 23h
kube-prometheus-exporter-coredns 23h
kube-prometheus-exporter-kube-controller-manager 23h
kube-prometheus-exporter-kube-etcd 23h
kube-prometheus-exporter-kube-scheduler 23h
kube-prometheus-exporter-kube-state 23h
kube-prometheus-exporter-kubelets 23h
kube-prometheus-exporter-kubernetes 23h
kube-prometheus-exporter-node 23h
prometheus-exporter-mysql 13s
prometheus-operator 4d
root@k8s1:~/mysql-exporter#

prometheus-exporter-mysql is there, so creation succeeded. After a minute or so, check Targets in the prometheus UI and you'll see the mysql monitor has been added.

As mentioned above, prometheus selects ServiceMonitors via the label prometheus: kube-prometheus; that is where this configuration lives. You can of course configure serviceMonitorsSelector in values.yaml to pick ServiceMonitors by your own rules; how to configure serviceMonitorsSelector is covered in the configuration section later.

Dynamically adding alert rules

When we dynamically add a monitored object, we generally also want alert rules for it. Under the prometheus-operator architecture, alert rules can be configured dynamically with another custom resource (CRD), PrometheusRule. PrometheusRule and ServiceMonitor are both custom resources: ServiceMonitor dynamically adds monitored instances, while PrometheusRule dynamically adds alert rules. Below we again use mysql's alert rules as the example of how to use the PrometheusRule resource.

Run vim mysql-rule.yaml and enter the following:

apiVersion: monitoring.coreos.com/v1  # same as for a ServiceMonitor
kind: PrometheusRule  # the resource type is PrometheusRule, also a custom resource (CRD)
metadata:
  labels:
    app: "prometheus-rule-mysql"
    prometheus: kube-prometheus  # as with ServiceMonitor, the ruleSelector by default selects PrometheusRule resources labeled prometheus: kube-prometheus
  name: prometheus-rule-mysql
spec:
  groups:  # the alert rules; same syntax as plain prometheus alerting rules
  - name: mysql.rules
    rules:
    - alert: TooManyErrorFromMysql
      expr: sum(irate(mysql_global_status_connection_errors_total[1m])) > 10
      labels:
        severity: critical
      annotations:
        description: mysql is producing too many errors.
        summary: TooManyErrorFromMysql
    - alert: TooManySlowQueriesFromMysql
      expr: increase(mysql_global_status_slow_queries[1m]) > 10
      labels:
        severity: critical
      annotations:
        description: mysql produced {{ $value }} slow-query log entries in the last minute.
        summary: TooManySlowQueriesFromMysql

Prometheus selects PrometheusRule resources with ruleSelector, by default also via the label prometheus: kube-prometheus. As you can see, both ruleSelector and serviceMonitorsSelector are configurable; how to configure them is covered in the unified configuration section later.

Save the file and run kubectl create -f mysql-rule.yaml. Once created, kubectl get prometheusRule shows the PrometheusRule resource prometheus-rule-mysql we just created:

root@k8s1:~/mysql-exporter# kubectl create -f mysql-rule.yaml
prometheusrule.monitoring.coreos.com/prometheus-rule-mysql created
root@k8s1:~/mysql-exporter# kubectl get prometheusRule
NAME AGE
kube-prometheus 1h
kube-prometheus-alertmanager 1h
kube-prometheus-exporter-kube-controller-manager 1h
kube-prometheus-exporter-kube-etcd 1h
kube-prometheus-exporter-kube-scheduler 1h
kube-prometheus-exporter-kube-state 1h
kube-prometheus-exporter-kubelets 1h
kube-prometheus-exporter-kubernetes 1h
kube-prometheus-exporter-node 1h
kube-prometheus-rules 1h
prometheus-rule-mysql 8s

After a minute or so, the contents of the mysql.rules we just added can be found in the prometheus web UI.
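To double-check from the command line, the built-in ALERTS series can be queried through the HTTP API (a sketch; the server address is an assumption):

# Lists any pending/firing alerts produced by the new rule:
curl -g 'http://localhost:9090/api/v1/query?query=ALERTS{alertname="TooManySlowQueriesFromMysql"}'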

Dynamically updating the Alertmanager configuration

How it works

When the Operator deploys Alertmanager, it generates a statefulset-type object; kubectl get statefulset --all-namespaces finds this statefulset, named alertmanager-kube-prometheus:

root@k8s1:~/prometheus-operator/helm/alertmanager/templates# kubectl get statefulset --all-namespaces
NAMESPACE NAME DESIRED CURRENT AGE
elk elastic-release-elasticsearch-data 2 2 8d
elk elastic-release-elasticsearch-master 3 3 8d
elk logstash-release 1 1 9d
kafka kafka-release 3 3 1d
kafka zookeeper-release 3 3 12d
kube-system mongodb-release-arbiter 1 1 16d
kube-system mongodb-release-primary 1 1 16d
kube-system mongodb-release-secondary 1 1 16d
kube-system my-release-mysqlha 3 3 15d
monitoring alertmanager-kube-prometheus 1 1 9h
monitoring prometheus-kube-prometheus 1 1 9h

Then kubectl describe statefulset alertmanager-kube-prometheus -n monitoring shows the statefulset's details:

root@k8s1:~/prometheus-operator/helm/alertmanager/templates# kubectl describe statefulset alertmanager-kube-prometheus -n monitoring
Name:               alertmanager-kube-prometheus
Namespace:          monitoring
CreationTimestamp:  Wed, 05 Sep 2018 09:46:08 +0800
Selector:           alertmanager=kube-prometheus,app=alertmanager
Labels:             alertmanager=kube-prometheus
                    app=alertmanager
                    chart=alertmanager-0.1.6
                    heritage=Tiller
                    release=kube-prometheus
Annotations:        <none>
Replicas:           1 desired | 1 total
Update Strategy:    RollingUpdate
Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  alertmanager=kube-prometheus
           app=alertmanager
  Containers:
   alertmanager:
    Image:       intellif.io/prometheus-operator/alertmanager:v0.15.1
    Ports:       9093/TCP, 6783/TCP
    Host Ports:  0/TCP, 0/TCP
    Args:
      --config.file=/etc/alertmanager/config/alertmanager.yaml
      --cluster.listen-address=$(POD_IP):6783
      --storage.path=/alertmanager
      --web.listen-address=:9093
      --web.external-url=http://192.168.11.178:30903
      --web.route-prefix=/
      --cluster.peer=alertmanager-kube-prometheus-0.alertmanager-operated.monitoring.svc:6783
    Requests:
      memory:    200Mi
    Liveness:    http-get http://:web/api/v1/status delay=0s timeout=3s period=10s #success=1 #failure=10
    Readiness:   http-get http://:web/api/v1/status delay=3s timeout=3s period=5s #success=1 #failure=10
    Environment:
      POD_IP:  (v1:status.podIP)
    Mounts:
      /alertmanager from alertmanager-kube-prometheus-db (rw)
      /etc/alertmanager/config from config-volume (rw)  # the mounted secret directory
   config-reloader:
    Image:      intellif.io/prometheus-operator/configmap-reload:v0.0.1
    Port:       <none>
    Host Port:  <none>
    Args:
      -webhook-url=http://localhost:9093/-/reload
      -volume-dir=/etc/alertmanager/config
    Limits:
      cpu:          5m
      memory:       10Mi
    Environment:    <none>
    Mounts:
      /etc/alertmanager/config from config-volume (ro)
  Volumes:
   config-volume:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  alertmanager-kube-prometheus
    Optional:    false
   alertmanager-kube-prometheus-db:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
Volume Claims:  <none>
Events:         <none>

The statefulset mounts a secret named alertmanager-kube-prometheus at /etc/alertmanager/config/alertmanager.yaml inside the alertmanager container. Under Volumes: above, the config-volume: entry shows Type: Secret, meaning a secret is mounted, and the secret's name is alertmanager-kube-prometheus. Inspect it with: kubectl describe secrets alertmanager-kube-prometheus -n monitoring

root@k8s1:~# kubectl describe secrets alertmanager-kube-prometheus -n monitoring
Name:         alertmanager-kube-prometheus
Namespace:    monitoring
Labels:       alertmanager=kube-prometheus
              app=alertmanager
              chart=alertmanager-0.1.6
              heritage=Tiller
              release=kube-prometheus
Annotations:  <none>

Type:  Opaque

Data
====
alertmanager.yaml:  567 bytes

The secret's Data section contains a key alertmanager.yaml whose value is 567 bytes; that value is precisely the content of /etc/alertmanager/config/alertmanager.yaml in the alertmanager container, since the statefulset mounts /etc/alertmanager/config from the secret. Run kubectl edit secrets -n monitoring alertmanager-kube-prometheus to see the secret's content:

apiVersion: v1
data:
  alertmanager.yaml: Z2xvYmFsOgogIHJlc29sdmVfdGltZW91dDogNW0KICBzbXRwX2F1dGhfcGFzc3dvcmQ6ICoqKioKICBzbXRwX2F1dGhfdXNlcm5hbWU6IGlub3JpX3lpbmprQDE2My5jb20KICBzbXRwX2Zyb206IGlub3JpX3lpbmprMUAxNjMuY29tCiAgc210cF9yZXF1aXJlX3RsczogZmFsc2UKICBzbXRwX3NtYXJ0aG9zdDogc210cC4xNjMuY29tOjI1CnJlY2VpdmVyczoKLSBlbWFpbF9jb25maWdzOgogIC0gaGVhZGVyczoKICAgICAgU3ViamVjdDogJ1tFUlJPUl0gcHJvbWV0aGV1cy4uLi4uLi4uLi4uLicKICAgIHRvOiB4eHh4QHFxLmNvbQogIG5hbWU6IHRlYW0tWC1tYWlscwotIG5hbWU6ICJudWxsIgpyb3V0ZToKICBncm91cF9ieToKICAtIGFsZXJ0bmFtZQogIC0gY2x1c3RlcgogIC0gc2VydmljZQogIGdyb3VwX2ludGVydmFsOiA1bQogIGdyb3VwX3dhaXQ6IDYwcwogIHJlY2VpdmVyOiB0ZWFtLVgtbWFpbHMKICByZXBlYXRfaW50ZXJ2YWw6IDI0aAogIHJvdXRlczoKICAtIG1hdGNoOgogICAgICBhbGVydG5hbWU6IERlYWRNYW5zU3dpdGNoCiAgICByZWNlaXZlcjogIm51bGwi
kind: Secret
metadata:
  creationTimestamp: 2018-09-05T01:46:08Z
  labels:
    alertmanager: kube-prometheus
    app: alertmanager
    chart: alertmanager-0.1.6
    heritage: Tiller
    release: kube-prometheus
  name: alertmanager-kube-prometheus
  namespace: monitoring
  resourceVersion: "5820063"
  selfLink: /api/v1/namespaces/monitoring/secrets/alertmanager-kube-prometheus
  uid: 75a589e8-b0ad-11e8-8746-005056bf1d6e
type: Opaque

The value of the alertmanager.yaml key under data: is a base64-encoded string.
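Rather than copying it by hand, the value can be extracted and decoded in one line (same secret name and namespace as above):

kubectl -n monitoring get secret alertmanager-kube-prometheus \
  -o jsonpath='{.data.alertmanager\.yaml}' | base64 -d

Decoded, the content is: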

global:
  resolve_timeout: 5m
  smtp_auth_password: xxxxxx
  smtp_auth_username: inori_yinjk@163.com
  smtp_from: inori_yinjk@163.com
  smtp_require_tls: false
  smtp_smarthost: smtp.163.com:25
receivers:
- email_configs:
  - headers:
      Subject: '[ERROR] prometheus............'
    to: 1121562648@qq.com
  name: team-X-mails
- name: "null"
route:
  group_by:
  - alertname
  - cluster
  - service
  group_interval: 5m
  group_wait: 60s
  receiver: team-X-mails
  repeat_interval: 24h
  routes:
  - match:
      alertname: DeadMansSwitch
    receiver: "null"

This is simply alertmanager's config. As noted above, the content is mounted into the alertmanager container at /etc/alertmanager/config/alertmanager.yaml. Enter the container with kubectl exec -it alertmanager-kube-prometheus-0 -n monitoring sh (your pod name may differ; list all pods with kubectl get pods --all-namespaces), cd into /etc/alertmanager/config, and ls shows a file named alertmanager.yaml whose content is exactly the base64-decoded text above. Modifying the alertmanager.yaml key under the data of the secret named alertmanager-kube-prometheus is therefore equivalent to modifying that file, so the problem becomes simple. The alertmanager pod also runs another container called config-reloader, which watches the /etc/alertmanager/config directory; when a file there changes, config-reloader sends a POST to http://localhost:9093/-/reload and alertmanager reloads the configuration files in that directory, which gives us dynamic configuration updates.
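The same reload endpoint can also be hit by hand, for example through a port-forward (a sketch; the pod name matches this environment):

kubectl -n monitoring port-forward alertmanager-kube-prometheus-0 9093 &
curl -X POST http://localhost:9093/-/reload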

How to do it

With the principle of alertmanager's dynamic configuration understood, the task is clear: to reconfigure alertmanager dynamically we only need to update the alertmanager.yaml key under the data of the secret named alertmanager-kube-prometheus (your secret's name may differ, but it follows the alertmanager-{*} pattern). There are two ways to update a secret: kubectl edit secret, or kubectl patch secret. Both require the base64-encoded string, which we can produce with linux's base64 command:

First edit the base64-decoded file from above, for example changing smtp_from to send from a different mailbox. Save the file, then run base64 file > test.txt to base64-encode the configuration into test.txt, and copy the encoded string from that file. For the first way of updating the secret, run

kubectl edit secrets -n monitoring alertmanager-kube-prometheus

then set the value of alertmanager.yaml under data to the string you just copied, save, and exit. For the second way, run

kubectl patch secret alertmanager-kube-prometheus -n monitoring -p '{"data":{"alertmanager.yaml":"<paste the base64-encoded configuration string here>"}}'

and the update is complete. The -p argument of that command takes a JSON string; put the base64-encoded configuration string you copied into the right spot.

Once updated, visit the alertmanager UI at http://192.168.11.178:30903/#/status and you'll see the configuration has taken effect.

With the operations above we have dynamic discovery of monitored objects, dynamic addition of alert rules, and dynamic alerting (mail) configuration; essentially, all configuration is now dynamic.

Prometheus-Operator configuration

Configuring Prometheus usually only takes editing values.yaml under kube-prometheus, which configures both alertmanager and prometheus. The options are generally self-explanatory from the comments on them; here we focus on the options mentioned above plus a few other common ones. To keep the length down, simple or rarely-used options are replaced with ellipses:

...

alertmanager:  # all of alertmanager's configuration lives under this key
  config:  # alertmanager's config, identical to a traditional alertmanager config
    global:
      resolve_timeout: 5m
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      receiver: 'null'
      routes:
      - match:
          alertname: DeadMansSwitch
        receiver: 'null'
    receivers:
    - name: 'null'

  ## External URL; alert mails can link back to the Alertmanager UI through this URL
  externalUrl: "http://192.168.11.178:30903"

  ...
  ## Node labels for Alertmanager pod assignment; this selector decides which node alertmanager is deployed on
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}

  ...

  ## List of Secrets in the same namespace as the AlertManager
  ## object, which shall be mounted into the AlertManager Pods.
  ## Ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#alertmanagerspec
  ##
  secrets: []

  service:
    ...
    ## Port to expose on each node
    ## Only used if service.type is 'NodePort'
    ##
    nodePort: 30903  # the exposed port

    ## Service type
    ##
    type: NodePort  # deploy alertmanager's Service as NodePort, so it can be reached via a node IP

prometheus:  # all of prometheus's configuration is written here
  ## Alertmanagers to which alerts will be sent
  ## Ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#alertmanagerendpoints
  ##
  alertingEndpoints: []
  # alertmanager addresses; if unset, the alertmanager deployed alongside is used; see line 40 of helm/prometheus/templates/prometheus.yaml,
  # on GitHub: https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus/templates/prometheus.yaml#L40
  # - name: ""
  #   namespace: ""
  #   port: 9093
  #   scheme: http

  ...

  ## External URL at which Prometheus will be reachable
  ## As above, the external access URL
  externalUrl: ""

  ## List of Secrets in the same namespace as the Prometheus
  ## object, which shall be mounted into the Prometheus Pods.
  ## Ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
  ##
  secrets: []

  ## How long to retain metrics
  ## Retention of prometheus's time-series database
  retention: 24h

  ## Namespaces to be selected for PrometheusRules discovery.
  ## If unspecified, only the same namespace as the Prometheus object is in is used.
  ## Selects the namespaces searched for PrometheusRule; with any: true, PrometheusRule resources are searched for in all namespaces
  ruleNamespaceSelector: {}
  ## any: true
  ## or
  ##

  ## Rules PrometheusRule CRD selector
  ## Ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/design.md
  ##
  ## 1. If `matchLabels` is used, `rules.additionalLabels` must contain all the labels from
  ##    `matchLabels` in order to be matched by Prometheus
  ## 2. If `matchExpressions` is used `rules.additionalLabels` must contain at least one label
  ##    from `matchExpressions` in order to be matched by Prometheus
  ## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
  ## The ruleSelector mentioned above; if unset it defaults to prometheus: {{.Values.prometheusLabelValue}},
  ## where prometheusLabel is defined in helm/prometheus/values.yaml and defaults to .Release.Name, i.e. the release name (kube-prometheus).
  ## We can change the default label by editing prometheusLabel in helm/prometheus/values.yaml, or define our own labels here;
  ## once a label is defined here the default no longer applies: defining e.g. comment: prometheus here disables the default
  ## prometheus: kube-prometheus, and prometheus will select PrometheusRule resources by the newly defined label instead
  rulesSelector: {}
  # rulesSelector: {
  #   matchExpressions: [{key: prometheus, operator: In, values: [example-rules, example-rules-2]}]
  # }
  ### OR
  # rulesSelector: {
  #   matchLabels: [{role: example-rules}]
  # }

  ## Prometheus alerting & recording rules
  ## Ref: https://prometheus.io/docs/querying/rules/
  ## Ref: https://prometheus.io/docs/alerting/rules/
  ##
  rules:  # PrometheusRule resources could be configured here, but usually are not; writing standalone PrometheusRules keeps the configuration dynamic
    specifiedInValues: true
    ## What additional rules to be added to the PrometheusRule CRD
    ## You can use this together with `rulesSelector`
    additionalLabels: {}
    # prometheus: example-rules
    # application: etcd
    value: {}

  service:

    ## Port to expose on each node
    ## Only used if service.type is 'NodePort'
    ##
    nodePort: 30900  # as above, the exposed port

    ## Service type
    ##
    type: NodePort  # as above, expose the Prometheus Service outside the cluster

  ## Service monitors selector
  ## Ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/design.md
  ## As above, prometheus by default selects ServiceMonitor resources labeled prometheus: kube-prometheus; custom labels can be configured here,
  ## and once configured the default no longer applies: prometheus selects ServiceMonitors by the labels defined in serviceMonitorsSelector
  serviceMonitorsSelector: {}
  # matchLabels:
  # - comment: prometheus
  # - release: kube-prometheus

  ## ServiceMonitor CRDs to create & be scraped by the Prometheus instance.
  ## Ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/service-monitor.md
  ##
  serviceMonitors: []  # ServiceMonitor resources could be configured here, but usually are not; see the section on writing a ServiceMonitor