This adds a service discovery configuration for promtail so that it
also collects logs from Gerrit installations in Kubernetes. The
installations are discovered by namespace and a given label.
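A minimal sketch of such a promtail scrape config, assuming the Gerrit
pods run in a namespace called gerrit and carry an app: gerrit label
(both names are placeholders, not necessarily the ones used here):
  scrape_configs:
    - job_name: gerrit-kubernetes
      kubernetes_sd_configs:
        - role: pod
          namespaces:
            names:
              - gerrit                    # placeholder namespace
      relabel_configs:
        # keep only pods carrying the expected label
        - source_labels: [__meta_kubernetes_pod_label_app]
          regex: gerrit                   # placeholder label value
          action: keep
        # read the container log files written to stdout on the node
        - source_labels: [__meta_kubernetes_pod_uid, __meta_kubernetes_pod_container_name]
          separator: /
          replacement: /var/log/pods/*$1/*.log
          target_label: __path__
        # use the pod name to tell the log streams apart
        - source_labels: [__meta_kubernetes_pod_name]
          target_label: pod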
Change-Id: I894e47f37428add9b44df6596950d314ee2a3ed0
This adds the promtail chart to the installation, which allows
collecting the logs that the applications in the cluster write to
the stdout of their containers.
This only collects logs from pods in the same namespace as the
monitoring setup. A later change will add collection of logs from
Gerrit instances in Kubernetes as well.
Change-Id: I86c5c5470eaa31191fb5ac339ee21dee85106097
So far it was only possible to monitor single-instance Gerrit
servers. This was due to the fact that the scrape URL had to point
to a dedicated instance; if multiple replicas were behind that URL,
the metrics of a random replica would be scraped instead of the
metrics of all replicas.
Prometheus provides service discovery for workloads running in
Kubernetes. This is now used when monitoring a Gerrit instance in
Kubernetes. This allows running a variable number of replicas,
which are discovered automatically by Prometheus.
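Roughly, such a Prometheus scrape config could look like the sketch
below (namespace and label are assumptions; the metrics path is
assumed to be the endpoint of the metrics-reporter-prometheus plugin):
  scrape_configs:
    - job_name: gerrit
      metrics_path: /plugins/metrics-reporter-prometheus/metrics   # assumed plugin endpoint
      kubernetes_sd_configs:
        - role: pod
          namespaces:
            names:
              - gerrit                    # placeholder namespace
      relabel_configs:
        # keep only the Gerrit pods
        - source_labels: [__meta_kubernetes_pod_label_app]
          regex: gerrit                   # placeholder label value
          action: keep
        # expose the pod name so the dashboards can select a single replica
        - source_labels: [__meta_kubernetes_pod_name]
          target_label: replica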
The dashboards were adapted accordingly and now allow selecting the
replica to be observed. For now, the dashboards cannot display a
summary over all replicas, but that feature is planned to be added
in the future.
Change-Id: I96efc63a192cd90f5e3e91a53dace8e1ae83132e
This replaces the hacky graph showing the Gerrit version with a table
showing the current Gerrit version information.
Change-Id: Idfbdc85e376953aead40fea06544e5c84fb777e7
Add graphs for the following latency metrics
- receive-commit
- query total
- query changes
- REST total
- REST change list comments
- REST change list robot comments
- REST change post review
- REST get change detail
- REST get change diff
- REST get change
- REST get commit
- REST get change revision actions
Change-Id: Id782e12335ae76820cac4e4e8c80450671bf8216
The installation failed if TLS verification was disabled and no CA
certificate was given in the configuration. This happened because
the installation script always expected a CA certificate.
The installation now expects the certificate only if TLS
verification is enabled.
Change-Id: I5429fc1ee0d230c74cc0689607cf2736d6520030
This adds the promtail version used in the setup to a file and adds
an installation step that downloads promtail, if the installation is
not run in `dryrun` mode.
Change-Id: I1127220a57b2610b5c4458ce2205077706a860e6
So far, the install script could only create a single promtail
config. Since the monitoring setup is able to monitor multiple
Gerrit servers, a promtail config had to be created manually for
each Gerrit server.
Now ytt creates a configuration for each Gerrit host configured in
the config.yaml. Since ytt can only emit these into a single file,
csplit is used to split that file into separate files that can then
be used to configure promtail on the respective hosts. The config
files can then be found under
$OUTPUT/promtail/promtail-$GERRIT_HOSTNAME.yaml.
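For illustration, the combined ytt output is a multi-document YAML
stream roughly like the sketch below (hostnames, URL and paths are
made up); csplit cuts it at the document separators:
  ---
  # promtail config for gerrit-01.example.com (placeholder host)
  clients:
    - url: https://loki.example.com/loki/api/v1/push
  scrape_configs:
    - job_name: gerrit_logs
      static_configs:
        - targets:
            - localhost
          labels:
            host: gerrit-01.example.com
            __path__: /var/gerrit/logs/*log
  ---
  # promtail config for gerrit-02.example.com (placeholder host)
  clients:
    - url: https://loki.example.com/loki/api/v1/push
  scrape_configs:
    - job_name: gerrit_logs
      static_configs:
        - targets:
            - localhost
          labels:
            host: gerrit-02.example.com
            __path__: /var/gerrit/logs/*log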
Change-Id: Ib09fba83d8a8fbd45b42e9e5388a85a37ab1a952
The scripts were written in bash, which became quite unwieldy.
Python naturally deals well with YAML and is thus better suited to
working with the YAML-based configuration files. This change
rewrites the original scripts, staying as close as possible to the
originals.
Right now, the Python scripts call subprocesses a lot to run the
tools that were already used before. At least for YAML templating
there may be better tools with a Python integration, which could be
used in the future.
Change-Id: Ida16318445a05dcfdada9c7a56a391e4827f02e7
* changes:
Relabel the instance label for prometheus and loki metrics
Add dashboard for Loki metrics
Add dashboard to monitor Prometheus data
Only show Gerrit instances in the instance dropdowns
Create a configmap per dashboard
The instance label for Prometheus had the value localhost:9090,
which was misleading.
Now the label is relabeled to prometheus-<namespace> or
loki-<namespace>.
This is still not ideal for cases where multiple replicas are
deployed, but it is already a slight improvement until that is
addressed.
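A sketch of the relabeling for the Prometheus self-scrape job,
assuming a namespace called monitoring (the real namespace comes
from the configuration):
  scrape_configs:
    - job_name: prometheus
      static_configs:
        - targets:
            - localhost:9090
      relabel_configs:
        # replace the misleading localhost:9090 value with prometheus-<namespace>
        - target_label: instance
          replacement: prometheus-monitoring    # placeholder namespace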
Change-Id: I1efdc49071b1d3bf99d21315ca03821e9d58c906
A variable was used to select the Gerrit instance to observe in the
dashboards. Since the instance label is set for all targets that
Prometheus scrapes, the variable would also contain e.g. the
Prometheus instance itself.
Now only Gerrit instances are displayed, by additionally filtering
on a metric that is specific to Gerrit.
Change-Id: I392b2ddf53a0ea49db25018dc5d37d269365812a
If the dashboard files got too large (>2Mb), Kubernetes rejected
the configmap.
Now each dashboard is installed in its own configmap. A sidecar
container is used to register these dashboards with Grafana.
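Sketch of one such configmap, assuming the Grafana chart's default
sidecar label grafana_dashboard (name and dashboard content are
placeholders):
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: grafana-dashboard-gerrit-process   # placeholder name
    labels:
      grafana_dashboard: "1"                 # label the sidecar container watches for
  data:
    gerrit-process.json: |
      {"title": "Gerrit - Process", "panels": []}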
Change-Id: I84062d6e2ac7dc2669945b54575bf239a25900a4
The default maximum number of log lines shown in Grafana is 1000.
This barely covers a few minutes of the httpd logs.
A value of 10,000 can still be handled by the browser. More log
entries would put too much load on the browser as long as Grafana
does not provide pagination, which is planned for the future.
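Raising the limit via datasource provisioning looks roughly like
this (the URL is a placeholder; maxLines is the Loki datasource
option in question):
  apiVersion: 1
  datasources:
    - name: Loki
      type: loki
      url: http://loki:3100        # placeholder URL
      jsonData:
        maxLines: 10000            # default is 1000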
Change-Id: Ife84d161cd022300ff6f440920021e4176b770b9
The most interesting new features are:
- proper limits for queried logs
- query history for logs (still a beta feature)
Change-Id: Ibd8b76b0e1e16d4bd3c74382fa3fd5a24c1bba45
The chunks created by Loki were stored in a persistent volume. This
does not scale well, since volumes cannot easily be resized in
Kubernetes. Also, at least the ext4 filesystem had issues when large
numbers of logs were saved. These issues are due to the dir_index,
as discussed in [1].
An object store provides a more scalable and cheaper solution. Loki
supports S3 as an object store, as well as other object stores that
understand the S3 API, such as Ceph or OpenStack Swift.
[1] https://github.com/grafana/loki/issues/1502
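A minimal sketch of such a storage configuration (bucket,
credentials, endpoint and schema date are placeholders; an
S3-compatible endpoint works for Ceph or Swift as well):
  schema_config:
    configs:
      - from: 2020-05-01           # placeholder date
        store: boltdb
        object_store: aws
        schema: v11
        index:
          prefix: index_
          period: 168h
  storage_config:
    boltdb:
      directory: /data/loki/index
    aws:
      s3: s3://ACCESS_KEY:SECRET_KEY@s3.example.com/loki-chunks   # placeholder credentials and bucket
      s3forcepathstyle: true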
Change-Id: Id55095c3b6659f40708712c1a494753dbcab7686
Promtail was configured to create labels for nearly every key in
the logs. This was done to support easier label-based querying.
Loki, however, is not optimized for labels with high cardinality.
This led to failures in Loki when it had to handle a large number of
logs. In addition, the high number of labels led to a huge number of
chunks being created, most of them containing just a single log
entry, which made querying and storage very inefficient.
This change removes all custom labels except for the gerrit_version
label. Logs should instead be queried using the grep-like filter
syntax of LogQL, for which Loki is optimized.
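For illustration only, assuming JSON-formatted Gerrit logs with a
gerrit_version field (the actual promtail pipeline in this
repository may differ), extracting just that one label looks like:
  pipeline_stages:
    - json:
        expressions:
          gerrit_version: gerrit_version   # assumed field name in the log line
    - labels:
        gerrit_version:                    # only this key becomes a label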
Change-Id: I70e2a3ff4f640bc6f5d08d50212958a7bca2eae1
This increases the time a chunk is given to fill up before being
flushed. With shorter times, chunks might not fill completely before
being flushed during periods of low traffic. This would lead to
small chunk objects, which is inefficient.
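In the Loki configuration this is the ingester's chunk_idle_period;
the value below is illustrative, not necessarily the one chosen
here:
  ingester:
    chunk_idle_period: 1h   # flush a chunk only after it received no new entries for this long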
Change-Id: I74b2af1a053c8d4298b9e9d7ffca04cb9d8926bd
So far, there were no limits on the resources the Loki pod was
allowed to use. This change sets limits that, from my observations
so far, seem to work. As more and more logs are handled, these
limits will probably have to be increased.
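In the chart values this is a standard Kubernetes resources block;
the numbers below are placeholders rather than the values chosen in
this change:
  resources:
    requests:
      cpu: 300m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi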
Change-Id: I7313488a60da8a1fff28666870549f748400735a
The default limit on requests accepted by Loki from a single host
was 10000, which is not enough for a large Gerrit instance to push
all of its httpd/sshd logs to Loki.
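The per-tenant ingestion limits live in Loki's limits_config; as a
hedged sketch only (option names and suitable values depend on the
Loki version and may not match exactly what this change sets):
  limits_config:
    ingestion_rate_mb: 16          # illustrative value
    ingestion_burst_size_mb: 32    # illustrative value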
Change-Id: I94cb56e00102170ae4ed10e90123a8885e3aad00
- Rearrange the other panels so that the left column shows system
  load above CPU usage above threads.
- Reduce the height of the memory panel a bit
Change-Id: Icaada525f87d0df503f67cf688b94d15a4119034
This change adds the current status of a project, developed
internally at SAP, that aims to create a simple monitoring setup for
Gerrit servers.
The project provides an opinionated and basic configuration for Helm
charts that can be used to install Loki, Prometheus and Grafana on a
Kubernetes cluster. Scripts to easily apply the configuration and
install the whole setup are provided as well.
The contributions so far were done by (with number of commits)
80 Thomas Draebing
11 Matthias Sohn
2 Saša Živkov
Change-Id: I8045780446edfb3c0dc8287b8f494505e338e066