cc-metric-collector

A node agent for measuring, processing and forwarding node level metrics. It is part of the ClusterCockpit ecosystem.

The metric collector sends (and receives) metrics in the InfluxDB line protocol, as it is flexible while keeping a clear separation between tags (comparable to index columns in relational databases) and fields (comparable to data columns).
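
For illustration, a hypothetical node metric in line protocol could look like this (measurement name, tags and values are made up), with tags before the space and fields after it:

cpu_load,hostname=node01,type=node value=1.5 1680000000000000000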

There is a single timer loop that triggers all collectors serially, gathers their data and sends the metrics to the sink. This is done so that all data is submitted with a single timestamp. The sinks currently use mostly blocking APIs.

The receiver runs as a goroutine side-by-side with the timer loop and asynchronously forwards received metrics to the sink.
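
The following Go sketch illustrates this split between the serial timer loop and the receiver goroutine; all types and names are made up for illustration and do not reflect the project's internal API.

package sketch

import "time"

// Metric is a simplified stand-in for a line-protocol message.
type Metric struct {
    Name      string
    Value     float64
    Timestamp time.Time
}

// Collector and Sink are illustrative interfaces, not the real ones.
type Collector interface {
    Read(duration time.Duration) []Metric
}

type Sink interface {
    Write(m Metric)
}

// Run starts the receiver goroutine and the serial timer loop.
func Run(collectors []Collector, sink Sink, received <-chan Metric, interval, duration time.Duration) {
    // Receiver: forwards incoming metrics asynchronously, side-by-side with the timer loop.
    go func() {
        for m := range received {
            sink.Write(m)
        }
    }()

    // Timer loop: every interval, trigger all collectors serially and
    // submit their metrics with one common timestamp.
    ticker := time.NewTicker(interval)
    defer ticker.Stop()
    for t := range ticker.C {
        for _, c := range collectors {
            for _, m := range c.Read(duration) {
                m.Timestamp = t
                sink.Write(m)
            }
        }
    }
}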

Configuration

Configuration is implemented using a single JSON document that is distributed over the network and may be persisted as a file. Supported metrics are documented here.

There is a main configuration file with basic settings that point to the other configuration files for the different components.

{
  "sinks": "sinks.json",
  "collectors" : "collectors.json",
  "receivers" : "receivers.json",
  "router" : "router.json",
  "interval": "10s",
  "duration": "1s"
}

The interval defines how often the metrics should be read and sent to the sink. The duration tells collectors how long a single measurement should take. This is important for some collectors, like the LIKWID collector. For more information, see here.
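
Both values are Go duration strings; a minimal sketch of what is accepted, assuming the standard time.ParseDuration rules apply:

package main

import (
    "fmt"
    "time"
)

func main() {
    // The values from config.json above; "500ms" or "1m" would be valid as well.
    interval, err := time.ParseDuration("10s")
    if err != nil {
        panic(err)
    }
    duration, _ := time.ParseDuration("1s")
    fmt.Println(interval, duration) // prints: 10s 1s
}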

See the component READMEs for their configuration: collectors, sinks, receivers, router.
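
As a rough orientation, a collectors.json maps collector names to their option objects, many of which can stay empty. The entries below are only an illustration; the exact names and options are listed in the collectors README:

{
  "memstat": {},
  "topprocs": {
    "num_procs": 5
  }
}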

Installation

$ git clone git@github.com:ClusterCockpit/cc-metric-collector.git
$ make (downloads LIKWID, builds it as static library with 'direct' accessmode and copies all required files for the collector)
$ go get (requires at least golang 1.16)
$ make

For more information, see here.

Running

$ ./cc-metric-collector --help
Usage of metric-collector:
  -config string
    	Path to configuration file (default "./config.json")
  -log string
    	Path for logfile (default "stderr")
  -once
    	Run all collectors only once
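
A typical invocation for a quick test points the collector at a configuration file and runs all collectors a single time (the path is only an example):

$ ./cc-metric-collector -once -config ./config.json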

Scenarios

The metric collector was designed with flexibility in mind, so it can be used in many scenarios. Here are a few:

flowchart TD
  subgraph a ["Cluster A"]
  nodeA[NodeA with CC collector]
  nodeB[NodeB with CC collector]
  nodeC[NodeC with CC collector]
  end
  a --> db[(Database)]
  db <--> ccweb("Webfrontend")

In a second scenario, several clusters forward their metrics through a cc-metric-collector instance acting as a receiver:

flowchart TD
  subgraph a [ClusterA]
  direction LR
  nodeA[NodeA with CC collector]
  nodeB[NodeB with CC collector]
  nodeC[NodeC with CC collector]
  end
  subgraph b [ClusterB]
  direction LR
  nodeD[NodeD with CC collector]
  nodeE[NodeE with CC collector]
  nodeF[NodeF with CC collector]
  end
  a --> ccrecv{"CC collector as receiver"}
  b --> ccrecv
  ccrecv --> db[("Database1")]
  ccrecv -.-> db2[("Database2")]
  db <-.-> ccweb("Webfrontend")
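
For the receiver scenario, the receiving instance enables one or more receivers in its receivers.json. The sketch below is hypothetical; the receiver name, type and field names may differ, see the receivers README for the actual schema:

{
  "mynats": {
    "type": "nats",
    "address": "nats://nats.example.com",
    "port": "4222",
    "subject": "ccmetrics"
  }
}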

Contributing

The ClusterCockpit ecosystem is designed to be used by different HPC centers. Since configurations and setups differ between centers, each center will likely have to put some work into cc-metric-collector to gather all the metrics it needs.

You are welcome to open an issue to request a collector, but we are also happy to receive pull requests.

Contact