
# cc-metric-collector

A node agent for measuring, processing and forwarding node level metrics. It is part of the ClusterCockpit ecosystem.

The metric collector sends (and receives) metrics in the InfluxDB line protocol, as it provides flexibility while maintaining a separation between tags (like index columns in relational databases) and fields (like data columns).
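For illustration, a node-level metric in InfluxDB line protocol could look like the following; the measurement name, tags and field shown here are made-up examples and not prescribed by cc-metric-collector:

```
proc_total,hostname=node001,type=node value=42 1669192651000000000
```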

There is a single timer loop that triggers all collectors serially, collects their data and sends the metrics to the sink. This is done so that all data is submitted with a single time stamp. The sinks currently use mostly blocking APIs.

The receiver runs as a goroutine side-by-side with the timer loop and asynchronously forwards received metrics to the sink.
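The following Go sketch illustrates this structure under stated assumptions: the `Metric`, `Collector` and `Sink` types and all names are invented for illustration and do not correspond to the actual interfaces in this repository.

```go
// Illustrative sketch of the described architecture, not the real
// cc-metric-collector internals.
package main

import (
	"fmt"
	"time"
)

type Metric struct {
	Name  string
	Value float64
	Time  time.Time
}

type Collector interface {
	Read(duration time.Duration) []Metric
}

type Sink interface {
	Write(m Metric)
}

type printSink struct{}

func (printSink) Write(m Metric) { fmt.Println(m.Name, m.Value, m.Time.Unix()) }

type dummyCollector struct{}

func (dummyCollector) Read(d time.Duration) []Metric {
	// A real collector would measure for up to 'd' here.
	return []Metric{{Name: "load_one", Value: 0.42, Time: time.Now()}}
}

func main() {
	interval := 10 * time.Second
	duration := 1 * time.Second
	collectors := []Collector{dummyCollector{}}
	var sink Sink = printSink{}

	received := make(chan Metric, 128)

	// Receiver goroutine: forwards externally received metrics
	// asynchronously to the sink, independent of the timer loop.
	go func() {
		for m := range received {
			sink.Write(m)
		}
	}()

	// Single timer loop: all collectors are triggered serially so that
	// every metric of one interval carries the same time stamp.
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for t := range ticker.C {
		for _, c := range collectors {
			for _, m := range c.Read(duration) {
				m.Time = t // one common time stamp per interval
				sink.Write(m)
			}
		}
	}
}
```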

## Configuration

Configuration is implemented using a single JSON document that is distributed over the network and may be persisted as a file. Supported metrics are documented here.

There is a main configuration file with basic settings that point to the other configuration files for the different components.

```json
{
  "sinks": "sinks.json",
  "collectors" : "collectors.json",
  "receivers" : "receivers.json",
  "router" : "router.json",
  "interval": "10s",
  "duration": "1s"
}
```

The interval defines how often the metrics should be read and sent to the sink. The duration tells collectors how long one measurement has to take. This is important for some collectors, like the likwid collector. For more information, see here.
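Both settings are parsed with Go's duration parser, so values like `"10s"` or `"1s"` are valid. A minimal sketch of how these two settings can be read and sanity-checked follows; the validation rule (a measurement must not take longer than one interval) is an illustrative assumption, not taken from the collector's sources.

```go
// Minimal sketch: parse "interval" and "duration" with Go's duration parser.
// The check below (duration must not exceed interval) is an assumption.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval, err := time.ParseDuration("10s") // how often metrics are read and sent
	if err != nil {
		panic(err)
	}
	duration, err := time.ParseDuration("1s") // how long a single measurement may take
	if err != nil {
		panic(err)
	}
	if duration > interval {
		panic("'duration' must not exceed 'interval'")
	}
	fmt.Printf("reading metrics every %v, each measurement takes up to %v\n", interval, duration)
}
```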

See the component READMEs for their configuration:

* `collectors`
* `sinks`
* `receivers`
* `router`

## Installation

```bash
$ git clone git@github.com:ClusterCockpit/cc-metric-collector.git
$ make (downloads LIKWID, builds it as static library with 'direct' accessmode and copies all required files for the collector)
$ go get (requires at least golang 1.16)
$ make
```

For more information, see here.

## Running

```bash
$ ./cc-metric-collector --help
Usage of metric-collector:
  -config string
    	Path to configuration file (default "./config.json")
  -log string
    	Path for logfile (default "stderr")
  -once
    	Run all collectors only once
```
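Combining these flags, a one-shot test run against a specific configuration file could look like the following; the paths are placeholders:

```bash
$ ./cc-metric-collector --once --config ./config.json --log ./collector.log
```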

## Scenarios

The metric collector was designed with flexibility in mind, so it can be used in many scenarios. Here are a few:

```mermaid
flowchart TD
  subgraph a ["Cluster A"]
  nodeA[NodeA with CC collector]
  nodeB[NodeB with CC collector]
  nodeC[NodeC with CC collector]
  end
  a --> db[(Database)]
  db <--> ccweb("Webfrontend")
```

```mermaid
flowchart TD
  subgraph a [ClusterA]
  direction LR
  nodeA[NodeA with CC collector]
  nodeB[NodeB with CC collector]
  nodeC[NodeC with CC collector]
  end
  subgraph b [ClusterB]
  direction LR
  nodeD[NodeD with CC collector]
  nodeE[NodeE with CC collector]
  nodeF[NodeF with CC collector]
  end
  a --> ccrecv{"CC collector as receiver"}
  b --> ccrecv
  ccrecv --> db[("Database1")]
  ccrecv -.-> db2[("Database2")]
  db <-.-> ccweb("Webfrontend")
```

## Contributing

The ClusterCockpit ecosystem is designed to be used by different HPC computing centers. Since configurations and setups differ between centers, each center will likely have to put some work into cc-metric-collector to gather all desired metrics.

You are free to open an issue to request a collector, but we would also be happy about PRs.

## Contact