ClusterCockpit Metric Store

The cc-metric-store provides a simple in-memory time-series database for storing metrics of cluster nodes at preconfigured intervals. It is meant to be used as part of the ClusterCockpit suite. As all data is kept in memory (but written to disk as compressed JSON for long-term storage), accessing it is very fast. It also provides aggregations over time and across nodes/sockets/cpus.

There is one major limitation: data only gets written to disk at periodic checkpoints, not as soon as it is received.

Go look at the TODO.md file and the GitHub Issues for a progress overview. Things work, but are not properly tested. The NATS.io-based writing endpoint consumes messages in the InfluxDB line protocol format, as in the example below.
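
For illustration, a single measurement in that format might look like the following. The metric name is the line-protocol measurement; the cluster, hostname and type tags shown here follow the cc-metric-collector conventions, but treat the exact tag set as an assumption and check lineprotocol.go for the authoritative list:

# measurement,tag=...,tag=... field=value timestamp
load_one,cluster=testcluster,hostname=host1,type=node value=1.23 1660000000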

REST API Endpoints

The REST API is documented in openapi.yaml in the OpenAPI 3.0 format.
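
Besides the query endpoint used at the end of this README, the code also exposes a /api/debug endpoint that returns the store's current state as JSON. A quick sanity check could look like this ($JWT is the test token introduced further below; verify the exact path against openapi.yaml):

# Dump the current in-memory state as JSON (path assumed, see openapi.yaml):
curl -H "Authorization: Bearer $JWT" "http://localhost:8080/api/debug"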

Run tests

Some benchmarks concurrently access the MemoryStore, so enabling the race detector might be useful. The benchmarks also work as tests, as they check whether the returned values are as expected.

# Tests only
go test -v ./...

# Benchmarks as well
go test -bench=. -race -v ./...

What are these selectors mentioned in the code?

Tags in InfluxDB are used to build indexes over the stored data. InfluxDB tags have no relation to each other; they do not depend on each other and have no hierarchy. Different tags build up different indexes (I am no expert at all, but this is how I think they work).

This project also works as a time-series database and uses the InfluxDB line protocol. Unlike InfluxDB, the data is indexed by a single, strictly hierarchical tree structure. A selector is built from the tags in the InfluxDB line protocol and can be used to select a node (not necessarily a compute node; it can also be a socket, cpu, ...) in that tree. The implementation calls those nodes levels to avoid confusion. It is impossible to access data by knowing only the socket or cpu tag; all levels above it have to be specified as well.

This is what the hierarchy currently looks like:

  • cluster1
    • host1
      • socket0
      • socket1
      • ...
      • cpu1
      • cpu2
      • cpu3
      • cpu4
      • ...
    • host2
    • ...
  • cluster2
  • ...

Example selectors:

  1. ["cluster1", "host1", "cpu0"]: Select only the cpu0 of host1 in cluster1
  2. ["cluster1", "host1", ["cpu4", "cpu5", "cpu6", "cpu7"]]: Select only CPUs 4-7 of host1 in cluster1
  3. ["cluster1", "host1"]: Select the complete node. If querying for a CPU-specific metric such as floats, all CPUs are implied

Config file

All durations are specified as strings that are parsed by Go's time.ParseDuration (allowed suffixes: s, m, h, ...). A minimal example config is sketched after this list.

  • metrics: Map of metric-name to objects with the following properties
    • frequency: Timestep/Interval/Resolution of this metric
    • aggregation: Can be "sum", "avg" or null
      • null means aggregation across nodes is forbidden for this metric
      • "sum" means that values from the child levels are summed up for the parent level
      • "avg" means that values from the child levels are averaged for the parent level
    • scope: Unused at the moment, should be something like "node", "socket" or "hwthread"
  • nats:
    • address: URL of the NATS.io server, example: "nats://localhost:4222"
    • username and password: Optional; if provided, they are used for the connection
    • subscriptions:
      • subscribe-to: Where to expect the measurements to be published
      • cluster-tag: Default value for the cluster tag
  • http-api:
    • address: Address to bind to, for example 0.0.0.0:8080
    • https-cert-file and https-key-file: Optional; if provided, HTTPS is enabled using these files as certificate and key
  • jwt-public-key: Base64-encoded string used to verify requests to the HTTP API
  • retention-in-memory: Keep all values in memory for at least this amount of time
  • checkpoints:
    • interval: Write a checkpoint every X seconds/minutes/hours
    • directory: Path to a directory
    • restore: After a restart, load the last X seconds/minutes/hours of data back into memory
  • archive:
    • interval: Every X seconds/minutes/hours, move and compress all checkpoints that are no longer needed
    • directory: Path to a directory
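
Putting the keys above together, a minimal config.json could be sketched like this. All values are placeholders and the nesting is derived from the list above, so the config.json shipped in this repository remains the authoritative reference:

{
  "metrics": {
    "load_one": { "frequency": "60s", "aggregation": null, "scope": "node" },
    "flops_any": { "frequency": "60s", "aggregation": "sum", "scope": "hwthread" }
  },
  "nats": {
    "address": "nats://localhost:4222",
    "subscriptions": [
      { "subscribe-to": "updates", "cluster-tag": "testcluster" }
    ]
  },
  "http-api": {
    "address": "0.0.0.0:8080"
  },
  "retention-in-memory": "48h",
  "checkpoints": {
    "interval": "1h",
    "directory": "./var/checkpoints",
    "restore": "48h"
  },
  "archive": {
    "interval": "24h",
    "directory": "./var/archive"
  }
}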

Test the complete setup (excluding ClusterCockpit itself)

There are two ways of sending data to the cc-metric-store, both of which are supported by the cc-metric-collector. This example uses NATS; the alternative is to use HTTP. First, get a NATS server running:

# Only needed once, downloads the docker image
docker pull nats:latest

# Start the NATS server
docker run -p 4222:4222 -ti nats:latest

Second, build and start the cc-metric-collector using the following as sink configuration:

{
  "type": "nats",
  "host": "localhost",
  "port": "4222",
  "database": "updates"
}
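
If you want to smoke-test the store without building the collector, you can also publish a single hand-written sample. This assumes the natscli tool is installed, the store subscribes to the updates subject, and line-protocol timestamps are epoch seconds (matching the query example at the end of this README; verify against lineprotocol.go):

# Publish one sample to the "updates" subject (tag names and timestamp unit are assumptions):
nats pub -s nats://localhost:4222 updates \
  "load_one,cluster=testcluster,hostname=$(hostname),type=node value=1.23 $(date +%s)"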

Third, build and start the metric store. For this example, the config.json file already in the repository should work just fine.

# Assuming you have a clone of this repo in ./cc-metric-store:
cd cc-metric-store
go get
go build
./cc-metric-store

And finally, use the API to fetch some data. The API is protected by JWT-based authentication if jwt-public-key is set in config.json. You can use this JWT for testing: eyJ0eXAiOiJKV1QiLCJhbGciOiJFZERTQSJ9.eyJ1c2VyIjoiYWRtaW4iLCJyb2xlcyI6WyJST0xFX0FETUlOIiwiUk9MRV9BTkFMWVNUIiwiUk9MRV9VU0VSIl19.d-3_3FZTsadPjDEdsWrrQ7nS0edMAR4zjl-eK7rJU3HziNBfI9PDHDIpJVHTNN5E5SlLGLFXctWyKAkwhXL-Dw

JWT="eyJ0eXAiOiJKV1QiLCJhbGciOiJFZERTQSJ9.eyJ1c2VyIjoiYWRtaW4iLCJyb2xlcyI6WyJST0xFX0FETUlOIiwiUk9MRV9BTkFMWVNUIiwiUk9MRV9VU0VSIl19.d-3_3FZTsadPjDEdsWrrQ7nS0edMAR4zjl-eK7rJU3HziNBfI9PDHDIpJVHTNN5E5SlLGLFXctWyKAkwhXL-Dw"

# If the collector, the store, and the NATS server have been running for at least 60 seconds on the same host, you may run:
curl -H "Authorization: Bearer $JWT" -D - "http://localhost:8080/api/query" -d "{ \"cluster\": \"testcluster\", \"from\": $(expr $(date +%s) - 60), \"to\": $(date +%s), \"queries\": [{
  \"metric\": \"load_one\",
  \"host\": \"$(hostname)\"
}] }"

# ...