Compare commits


100 Commits

Author SHA1 Message Date
Holger Obermaier
ec8177e73b Fixed ineffectual assignment to err 2026-02-09 13:55:45 +01:00
Holger Obermaier
8d75aba882 Fixed Error return value of ... is not checked (errcheck) 2026-02-09 13:53:14 +01:00
Holger Obermaier
094fa52b20 Add schedstat configuration 2026-02-09 13:38:21 +01:00
Holger Obermaier
80d5adfb97 Fixed Error return value of ... is not checked (errcheck) 2026-02-09 13:26:52 +01:00
Holger Obermaier
4b831f09a5 Fixed Error return value of is not checked (errcheck) 2026-02-09 13:04:19 +01:00
Holger Obermaier
f6cd593862 Fix: There is no need to wait for command completion 2026-02-09 13:03:23 +01:00
Holger Obermaier
6000a1a45b Use SplitSeq and max to modernize code 2026-02-09 12:01:27 +01:00
Holger Obermaier
3633db0d46 Use module slices from the standard library. Remove use of golang.org/x/exp/slices 2026-02-09 11:30:32 +01:00
Holger Obermaier
77a9b5a977 Replace m[k]=v loop with maps.Copy 2026-02-06 14:50:00 +01:00
Holger Obermaier
baf7a4f2c5 Fixed Error return value of is not checked (errcheck) 2026-02-06 14:13:05 +01:00
Holger Obermaier
2b5bf4d6a5 Replaced stringArrayContains by slices.Contains 2026-02-06 13:49:39 +01:00
Holger Obermaier
1f7b13349c Fixed Error return value of ... is not checked (errcheck) 2026-02-06 13:19:50 +01:00
Holger Obermaier
f98800e039 Add diskstat config 2026-02-06 13:15:18 +01:00
Holger Obermaier
396a9f8ce5 Fixed Error return value of ... is not checked (errcheck) 2026-02-06 13:09:19 +01:00
Holger Obermaier
de5429201b Use slices to exclude metrics 2026-02-06 13:02:54 +01:00
Holger Obermaier
c8f4769a82 Add cpustat config 2026-02-06 13:00:10 +01:00
Holger Obermaier
090f6c69a9 Fix: There is no need to wait for command completion 2026-02-06 11:58:47 +01:00
Holger Obermaier
b2af81a038 Fixed Error return value of ... is not checked (errcheck) 2026-02-06 11:02:02 +01:00
Holger Obermaier
8f49c7aa67 Fixed Error return value of is not checked (errcheck) 2026-02-05 14:48:49 +01:00
Holger Obermaier
a77cc19ddb Fixed Error return value of ... is not checked (errcheck) 2026-02-05 10:06:34 +01:00
Holger Obermaier
ff0cd5803d Fixed Error return value of ... is not checked (errcheck) 2026-02-04 14:49:25 +01:00
Holger Obermaier
5df7b0eb48 Fix Error return value of is not checked (errcheck) 2026-02-04 14:39:44 +01:00
Holger Obermaier
0f7792b4cb Extended error handling 2026-02-04 14:27:07 +01:00
Holger Obermaier
e6dc9eba27 Fix ineffectual assignment to err (ineffassign) 2026-02-04 13:52:55 +01:00
Holger Obermaier
c9ebef3bad Fixed Error return value of is not checked (errcheck) 2026-02-04 12:31:40 +01:00
Holger Obermaier
faf5088385 Fix QF1003: could use tagged switch on ... (staticcheck) 2026-02-04 10:56:17 +01:00
Holger Obermaier
12ab80ccad Fix QF1005: could expand call to math.Pow (staticcheck) 2026-02-04 10:46:42 +01:00
Holger Obermaier
c24b3a7e4b Fix QF1004: could use strings.ReplaceAll instead (staticcheck) 2026-02-04 10:37:22 +01:00
Holger Obermaier
3c03f4ac96 Fix QF1004: could use strings.ReplaceAll instead (staticcheck) 2026-02-04 10:32:53 +01:00
Holger Obermaier
dddae13c7a Fix func intArrayContains is unused (unused) 2026-02-04 10:28:37 +01:00
Holger Obermaier
159bee1e9f Fix QF1011: could omit type ... from declaration; it will be inferred from the right-hand side (staticcheck) 2026-02-04 10:23:23 +01:00
Holger Obermaier
86710d9b4b Add golangci-lint as make target 2026-02-04 10:04:44 +01:00
Holger Obermaier
7cff283001 Update ci (#192)
Add static analysis with GolangCI-Lint, govet and staticcheck
2026-01-23 14:39:39 +01:00
Holger Obermaier
fa45d0d973 Update ci (#191)
* Add UBI 10 build
* Add Almalinux 10 build
* Use Appstream Repository from Red Hat Universal Base Image
* Use Appstream Repository from Almalinux
2026-01-21 15:20:12 +01:00
Holger Obermaier
e70fd658f0 Update CI pipeline (#189)
* Updated Action "checkout" and "Setup golang"
* Update go-toolset to latest version
* Add golang-race dependency
* Update download-artifact and upload-artifact
2026-01-15 14:35:43 +01:00
Holger Obermaier
c58790cd54 Switch to cc-lib v2 2026-01-15 11:30:50 +01:00
dependabot[bot]
67ee09ffef Bump golang.org/x/sys from 0.38.0 to 0.39.0
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.38.0 to 0.39.0.
- [Commits](https://github.com/golang/sys/compare/v0.38.0...v0.39.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-version: 0.39.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-22 11:59:49 +01:00
dependabot[bot]
7f575269eb Bump github.com/ClusterCockpit/cc-lib from 0.11.0 to 1.0.2
Bumps [github.com/ClusterCockpit/cc-lib](https://github.com/ClusterCockpit/cc-lib) from 0.11.0 to 1.0.2.
- [Release notes](https://github.com/ClusterCockpit/cc-lib/releases)
- [Commits](https://github.com/ClusterCockpit/cc-lib/compare/v0.11.0...v1.0.2)

---
updated-dependencies:
- dependency-name: github.com/ClusterCockpit/cc-lib
  dependency-version: 1.0.2
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-22 11:59:33 +01:00
dependabot[bot]
c8cd11796c Bump github.com/ClusterCockpit/cc-lib from 0.10.1 to 0.11.0
Bumps [github.com/ClusterCockpit/cc-lib](https://github.com/ClusterCockpit/cc-lib) from 0.10.1 to 0.11.0.
- [Release notes](https://github.com/ClusterCockpit/cc-lib/releases)
- [Commits](https://github.com/ClusterCockpit/cc-lib/compare/v0.10.1...v0.11.0)

---
updated-dependencies:
- dependency-name: github.com/ClusterCockpit/cc-lib
  dependency-version: 0.11.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-08 12:37:40 +01:00
Roland Pabel
92f6c75d23 Gpfs collector restructure (#182)
* restructure metric similar to lustre collector

* make sure total metric is not saved if none of the base metrics are present

* reuse variable

* corrections per ho-ob review
2025-12-01 14:52:18 +01:00
Thomas Roehl
62d40cfe00 Update workflow with latest golang RPM URLs 2025-11-26 08:36:12 +01:00
Thomas Roehl
ece1a52082 Move example configurations and update docs. Fixed #150 2025-11-26 08:24:23 +01:00
dependabot[bot]
398aa207a9 Bump github.com/tklauser/go-sysconf from 0.3.15 to 0.3.16
Bumps [github.com/tklauser/go-sysconf](https://github.com/tklauser/go-sysconf) from 0.3.15 to 0.3.16.
- [Release notes](https://github.com/tklauser/go-sysconf/releases)
- [Commits](https://github.com/tklauser/go-sysconf/compare/v0.3.15...v0.3.16)

---
updated-dependencies:
- dependency-name: github.com/tklauser/go-sysconf
  dependency-version: 0.3.16
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-26 08:22:54 +01:00
dependabot[bot]
b51bf592d0 Bump golang.org/x/sys from 0.37.0 to 0.38.0
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.37.0 to 0.38.0.
- [Commits](https://github.com/golang/sys/compare/v0.37.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-version: 0.38.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-10 12:19:48 +01:00
Thomas Roehl
6243203880 Fix startup error of iostat collector 2025-10-20 17:06:10 +02:00
Thomas Roehl
c7c9f8c273 Fix max clock metrics 2025-10-20 17:05:59 +02:00
Roland Pabel
6a4ad067ac return new error 2025-10-20 16:29:36 +02:00
Roland Pabel
ed2378f794 StartsWith -> HasPrefix 2025-10-20 16:29:36 +02:00
Roland Pabel
99e066ff5f docu update for sudo 2025-10-20 16:29:36 +02:00
Roland Pabel
67cdbefb02 getting filename from error doesn't work, mmpmon path must be provided when using sudo 2025-10-20 16:29:36 +02:00
Roland Pabel
b522aca693 fix config.Mmpmon is the empty string because of the error thrown 2025-10-20 16:29:36 +02:00
Roland Pabel
ea7c4f4ec7 correctly check for EACCESS when searching for mmpmon with exec.LookPath 2025-10-20 16:29:36 +02:00
Roland Pabel
09cf89a951 with sudo, ignore EPERM for exec.LookPath 2025-10-20 16:29:36 +02:00
Roland Pabel
d6499935a4 enable sudo support 2025-10-20 16:29:36 +02:00
dependabot[bot]
3e19c47ae4 Bump github.com/ClusterCockpit/cc-lib from 0.9.1 to 0.10.1
Bumps [github.com/ClusterCockpit/cc-lib](https://github.com/ClusterCockpit/cc-lib) from 0.9.1 to 0.10.1.
- [Release notes](https://github.com/ClusterCockpit/cc-lib/releases)
- [Commits](https://github.com/ClusterCockpit/cc-lib/compare/v0.9.1...v0.10.1)

---
updated-dependencies:
- dependency-name: github.com/ClusterCockpit/cc-lib
  dependency-version: 0.10.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-20 16:14:43 +02:00
brinkcoder
97e09f13f4 fix numastat collector sending node metrics instead of memoryDomain metrics 2025-10-20 16:12:58 +02:00
Roland Pabel
e08bd3d926 fix wrong variable in calculation of gpfs_reads_rate 2025-10-15 17:20:08 +02:00
dependabot[bot]
fc525b7430 Bump golang.org/x/sys from 0.36.0 to 0.37.0
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.36.0 to 0.37.0.
- [Commits](https://github.com/golang/sys/compare/v0.36.0...v0.37.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-version: 0.37.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-13 12:04:12 +02:00
brinkcoder
69d4567ecf add support for passwordless sudo 2025-10-07 13:10:17 +02:00
brinkcoder
c5183feafc add slurm_cgroup Collector 2025-10-07 13:10:17 +02:00
dependabot[bot]
a45366646e Bump github.com/ClusterCockpit/cc-lib from 0.8.0 to 0.9.1
Bumps [github.com/ClusterCockpit/cc-lib](https://github.com/ClusterCockpit/cc-lib) from 0.8.0 to 0.9.1.
- [Release notes](https://github.com/ClusterCockpit/cc-lib/releases)
- [Commits](https://github.com/ClusterCockpit/cc-lib/compare/v0.8.0...v0.9.1)

---
updated-dependencies:
- dependency-name: github.com/ClusterCockpit/cc-lib
  dependency-version: 0.9.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-06 12:07:06 +02:00
dependabot[bot]
a551616566 Bump github.com/ClusterCockpit/cc-lib from 0.7.0 to 0.8.0
Bumps [github.com/ClusterCockpit/cc-lib](https://github.com/ClusterCockpit/cc-lib) from 0.7.0 to 0.8.0.
- [Release notes](https://github.com/ClusterCockpit/cc-lib/releases)
- [Commits](https://github.com/ClusterCockpit/cc-lib/compare/v0.7.0...v0.8.0)

---
updated-dependencies:
- dependency-name: github.com/ClusterCockpit/cc-lib
  dependency-version: 0.8.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-15 13:42:07 +02:00
dependabot[bot]
a9fa168117 Bump golang.org/x/sys from 0.35.0 to 0.36.0
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.35.0 to 0.36.0.
- [Commits](https://github.com/golang/sys/compare/v0.35.0...v0.36.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-version: 0.36.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-15 13:10:18 +02:00
Thomas Gruber
39d37597ab Update README.md 2025-09-09 14:43:07 +02:00
Thomas Gruber
aeaba0021b Update likwid_perfgroup_to_cc_config.py
Add "UMC" to socket-counters
2025-08-28 15:59:48 +02:00
dependabot[bot]
5ceffb44b4 Bump github.com/NVIDIA/go-nvml from 0.12.9-0 to 0.13.0-1
Bumps [github.com/NVIDIA/go-nvml](https://github.com/NVIDIA/go-nvml) from 0.12.9-0 to 0.13.0-1.
- [Release notes](https://github.com/NVIDIA/go-nvml/releases)
- [Commits](https://github.com/NVIDIA/go-nvml/compare/v0.12.9-0...v0.13.0-1)

---
updated-dependencies:
- dependency-name: github.com/NVIDIA/go-nvml
  dependency-version: 0.13.0-1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-25 14:14:21 +02:00
dependabot[bot]
e29942a4be Bump github.com/ClusterCockpit/cc-lib from 0.6.0 to 0.7.0
Bumps [github.com/ClusterCockpit/cc-lib](https://github.com/ClusterCockpit/cc-lib) from 0.6.0 to 0.7.0.
- [Release notes](https://github.com/ClusterCockpit/cc-lib/releases)
- [Commits](https://github.com/ClusterCockpit/cc-lib/compare/v0.6.0...v0.7.0)

---
updated-dependencies:
- dependency-name: github.com/ClusterCockpit/cc-lib
  dependency-version: 0.7.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-11 12:24:44 +02:00
dependabot[bot]
0b9b9a6e68 Bump golang.org/x/sys from 0.34.0 to 0.35.0
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.34.0 to 0.35.0.
- [Commits](https://github.com/golang/sys/compare/v0.34.0...v0.35.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-version: 0.35.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-11 12:24:27 +02:00
dependabot[bot]
b47cb3a0c4 Merge pull request #163 from ClusterCockpit/dependabot/go_modules/github.com/ClusterCockpit/cc-lib-0.6.0 2025-07-28 05:18:37 +00:00
dependabot[bot]
b49ae7b612 Bump github.com/ClusterCockpit/cc-lib from 0.5.0 to 0.6.0
Bumps [github.com/ClusterCockpit/cc-lib](https://github.com/ClusterCockpit/cc-lib) from 0.5.0 to 0.6.0.
- [Release notes](https://github.com/ClusterCockpit/cc-lib/releases)
- [Commits](https://github.com/ClusterCockpit/cc-lib/compare/v0.5.0...v0.6.0)

---
updated-dependencies:
- dependency-name: github.com/ClusterCockpit/cc-lib
  dependency-version: 0.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-28 04:49:00 +00:00
dependabot[bot]
1fc5cc8483 Bump golang.org/x/sys from 0.33.0 to 0.34.0 (#162)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.33.0 to 0.34.0.
- [Commits](https://github.com/golang/sys/compare/v0.33.0...v0.34.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-version: 0.34.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-13 21:49:13 -07:00
dependabot[bot]
e81099af8d Bump github.com/NVIDIA/go-nvml from 0.12.4-1 to 0.12.9-0 (#159)
Bumps [github.com/NVIDIA/go-nvml](https://github.com/NVIDIA/go-nvml) from 0.12.4-1 to 0.12.9-0.
- [Release notes](https://github.com/NVIDIA/go-nvml/releases)
- [Commits](https://github.com/NVIDIA/go-nvml/compare/v0.12.4-1...v0.12.9-0)

---
updated-dependencies:
- dependency-name: github.com/NVIDIA/go-nvml
  dependency-version: 0.12.9-0
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-08 18:36:41 +02:00
dependabot[bot]
eaca327d73 Bump github.com/ClusterCockpit/cc-lib from 0.2.0 to 0.5.0 (#160)
Bumps [github.com/ClusterCockpit/cc-lib](https://github.com/ClusterCockpit/cc-lib) from 0.2.0 to 0.5.0.
- [Release notes](https://github.com/ClusterCockpit/cc-lib/releases)
- [Commits](https://github.com/ClusterCockpit/cc-lib/compare/v0.2.0...v0.5.0)

---
updated-dependencies:
- dependency-name: github.com/ClusterCockpit/cc-lib
  dependency-version: 0.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-08 18:36:31 +02:00
dependabot[bot]
2e48996d87 Bump github.com/fsnotify/fsnotify from 1.7.0 to 1.9.0 (#161)
Bumps [github.com/fsnotify/fsnotify](https://github.com/fsnotify/fsnotify) from 1.7.0 to 1.9.0.
- [Release notes](https://github.com/fsnotify/fsnotify/releases)
- [Changelog](https://github.com/fsnotify/fsnotify/blob/main/CHANGELOG.md)
- [Commits](https://github.com/fsnotify/fsnotify/compare/v1.7.0...v1.9.0)

---
updated-dependencies:
- dependency-name: github.com/fsnotify/fsnotify
  dependency-version: 1.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-08 18:36:21 +02:00
dependabot[bot]
7cdbada522 Bump github.com/tklauser/go-sysconf from 0.3.13 to 0.3.15 (#158)
Bumps [github.com/tklauser/go-sysconf](https://github.com/tklauser/go-sysconf) from 0.3.13 to 0.3.15.
- [Release notes](https://github.com/tklauser/go-sysconf/releases)
- [Commits](https://github.com/tklauser/go-sysconf/compare/v0.3.13...v0.3.15)

---
updated-dependencies:
- dependency-name: github.com/tklauser/go-sysconf
  dependency-version: 0.3.15
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-08 18:34:46 +02:00
dependabot[bot]
babe1e020d Bump github.com/PaesslerAG/gval from 1.2.2 to 1.2.4 (#157)
Bumps [github.com/PaesslerAG/gval](https://github.com/PaesslerAG/gval) from 1.2.2 to 1.2.4.
- [Release notes](https://github.com/PaesslerAG/gval/releases)
- [Commits](https://github.com/PaesslerAG/gval/compare/v1.2.2...v1.2.4)

---
updated-dependencies:
- dependency-name: github.com/PaesslerAG/gval
  dependency-version: 1.2.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-08 18:34:34 +02:00
oscarminus
776af72231 Add meta operations and total values as value per second (#151)
Co-authored-by: Michael Schwarz <schwarz@uni-paderborn.de>
2025-07-03 14:57:59 +02:00
Thomas Gruber
2d4894b8f7 Update dependabot.yml 2025-07-03 14:39:46 +02:00
Thomas Roehl
35295b0b3a Add dependabot config 2025-07-03 14:38:46 +02:00
Thomas Roehl
1e734baa35 Merge branch 'main' of github.com:ClusterCockpit/cc-metric-collector 2025-07-03 14:37:33 +02:00
Michael Schwarz
aa6181a018 Read written bytes instead of read bytes 2025-07-02 13:43:57 +02:00
Michael Panzlaff
0a2a85f2ce Add missing 'Section' and 'Priority' to .deb.control 2025-06-23 14:01:57 +02:00
Thomas Roehl
48f5afe2be Update cc-lib to 0.2.0 2025-06-18 12:22:07 +02:00
Thomas Gruber
979192af4e Fix Golang RPM URLs in Release Action 2025-06-17 11:59:53 +02:00
Thomas Gruber
c1032ff329 Fix '+' in Golang RPM URLs 2025-06-17 11:51:34 +02:00
Thomas Gruber
6b03d3aee8 Update golang RPM links in CI 2025-06-17 11:51:34 +02:00
Thomas Gruber
b9665d0d68 Numastats: Read in config and send abs values by default. Fixes #146 (#147) 2025-06-17 11:51:34 +02:00
Thomas Röhl
4c7a0e064f Add copyright header to Golang files 2025-06-17 11:51:34 +02:00
Thomas Röhl
d8f10384a1 Remove hostlist, now in cc-lib 2025-06-17 11:51:34 +02:00
Thomas Gruber
f74d856e69 Nvidia energy metrics to Nvidia collector (#144)
* Add energy metrics from NVML to Nvidia NVML collector

* Add energy metrics to Nvidia collector README
2025-06-17 11:51:34 +02:00
Thomas Roehl
fabb37ea70 Fix URL to new location of cc-units 2025-06-17 11:51:34 +02:00
Thomas Gruber
3a0f148728 Fix Typo in gpfsMetric.md 2025-06-17 10:35:51 +02:00
Thomas Roehl
ec34b40295 Merge branch 'main' of github.com:ClusterCockpit/cc-metric-collector 2025-04-17 11:38:03 +02:00
Thomas Gruber
03cd965099 Merge develop into main for documentation (#143)
* Fix Release part

* Fix Release part

* Update Hugo integration (#142)
2025-04-17 11:37:47 +02:00
Thomas Roehl
8ccbb4f69c Merge branch 'main' of github.com:ClusterCockpit/cc-metric-collector 2025-04-16 13:38:31 +02:00
Thomas Gruber
bd04e19c96 Merge development branch to main (#141)
* Remove go-toolkit as build requirement for RPM builds if run in CI

* Remove condition around BuildRequires and use go-toolkit for RPM builds

* use go-toolkit for RPM builds

* Install go-toolkit to fulfill build requirements for RPM

* Add golang-race for UBI9 and Alma9

* Fix wrongly named packages

* Fix wrongly named packages

* Fix Release part

* Fix Release part

* Fix documentation of RAPL collector

* Mark all JSON config fields of message processor as omitempty

* Generate HUGO inputs out of Markdown files

* Check creation of CCMessage in NATS receiver

* Use CCMessage FromBytes instead of Influx's decoder

* Rename 'process_message' to 'process_messages' in metricRouter config

This makes the behavior more consistent with the other modules, which
have their MessageProcessor named 'process_messages'. This most likely
was just a typo.

* Add optional interface alias in netstat (#130)

* Check creation of CCMessage in NATS receiver

* add optional interface aliases for netstatMetric

* small fix

---------

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
Co-authored-by: Thomas Gruber <Thomas.Roehl@googlemail.com>

* Fix excluded metrics for diskstat and add exclude_mounts (#131)

* Check creation of CCMessage in NATS receiver

* fix excluded metrics and add optional mountpoint exclude

---------

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
Co-authored-by: Thomas Gruber <Thomas.Roehl@googlemail.com>

* Add derived values for nfsiostat (#132)

* Check creation of CCMessage in NATS receiver

* add derived_values for nfsiostatMetric

---------

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
Co-authored-by: Thomas Gruber <Thomas.Roehl@googlemail.com>

* Add exclude_devices to iostat (#133)

* Check creation of CCMessage in NATS receiver

* add exclude_device for iostatMetric

* add md file

---------

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
Co-authored-by: Thomas Gruber <Thomas.Roehl@googlemail.com>

* Add derived_values for numastats (#134)

* Check creation of CCMessage in NATS receiver

* add derived_values for numastats

* change to ccMessage

* remove vim command artefact

---------

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
Co-authored-by: Thomas Gruber <Thomas.Roehl@googlemail.com>

* Fix artifacts of not done cc-lib switch

* Fix artifacts in netstat collector of not done cc-lib switch

* Change to cc-lib (#135)

* Change to ccMessage from cc-lib

* Remove local development path

* Use receiver, sinks, ccLogger and ccConfig from cc-lib

* Fix ccLogger import path

* Update CI

* Delete mountpoint when it vanishes, not just its data (#137)

---------

Co-authored-by: Michael Panzlaff <michael.panzlaff@fau.de>
Co-authored-by: brinkcoder <Robert.Externbrink@ruhr-uni-bochum.de>
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
2025-04-16 13:38:11 +02:00
Thomas Gruber
c9f2378813 Use timer_getCycleClock in likwidMetric (#139) 2025-04-16 13:06:27 +02:00
Thomas Roehl
f90c2698e3 Merge branch 'main' of github.com:ClusterCockpit/cc-metric-collector 2025-03-15 00:48:51 +01:00
brinkcoder
c1395ec2ed add links nfsiostat and schedstat (#129)
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
2025-02-19 11:31:49 +01:00
Thomas Roehl
16faa70867 Check creation of CCMessage in NATS receiver 2024-12-27 15:00:14 +00:00
87 changed files with 2511 additions and 1129 deletions

.github/dependabot.yml (vendored, new file, +11 lines)
View File

@@ -0,0 +1,11 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "weekly"

View File

@@ -5,10 +5,10 @@ name: Release
# Run on tag push
on:
push:
tags:
- '**'
workflow_dispatch:
push:
tags:
- '**'
workflow_dispatch:
jobs:
@@ -36,22 +36,14 @@ jobs:
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/go-toolset-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-bin-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-src-1.22.9-1.module_el8.10.0+3938+8c723e16.noarch.rpm
dnf --assumeyes --disableplugin=subscription-manager --enablerepo appstream install go-toolset
- name: RPM build MetricCollector
id: rpmbuild
@@ -78,13 +70,13 @@ jobs:
# See: https://github.com/actions/upload-artifact
- name: Save RPM as artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: cc-metric-collector RPM for AlmaLinux 8
path: ${{ steps.rpmrename.outputs.RPM }}
overwrite: true
- name: Save SRPM as artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: cc-metric-collector SRPM for AlmaLinux 8
path: ${{ steps.rpmrename.outputs.SRPM }}
@@ -114,23 +106,14 @@ jobs:
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/go-toolset-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-bin-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-src-1.22.7-2.el9_5.noarch.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-race-1.22.7-2.el9_5.x86_64.rpm
dnf --assumeyes --disableplugin=subscription-manager --enablerepo appstream install go-toolset
- name: RPM build MetricCollector
id: rpmbuild
@@ -157,25 +140,26 @@ jobs:
# See: https://github.com/actions/upload-artifact
- name: Save RPM as artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: cc-metric-collector RPM for AlmaLinux 9
path: ${{ steps.rpmrename.outputs.RPM }}
overwrite: true
- name: Save SRPM as artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: cc-metric-collector SRPM for AlmaLinux 9
path: ${{ steps.rpmrename.outputs.SRPM }}
overwrite: true
#
# Build on UBI 8 using go-toolset
# Build on Red Hat Universal Base Image (UBI 8) using go-toolset
#
UBI-8-RPM-build:
runs-on: ubuntu-latest
# See: https://catalog.redhat.com/software/containers/ubi8/ubi/5c35984d70cc534b3a3784e?container-tabs=gti
container: registry.access.redhat.com/ubi8/ubi:8.8-1032.1692772289
# See: https://catalog.redhat.com/en/search?searchType=Containers&q=Red+Hat+Universal+Base+Image+8
# https://hub.docker.com/r/redhat/ubi8
container: redhat/ubi8
# The job outputs link to the outputs of the 'rpmbuild' step
outputs:
rpm : ${{steps.rpmbuild.outputs.RPM}}
@@ -190,22 +174,14 @@ jobs:
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/go-toolset-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-bin-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-src-1.22.9-1.module_el8.10.0+3938+8c723e16.noarch.rpm
dnf --assumeyes --disableplugin=subscription-manager --enablerepo ubi-8-appstream-rpms install go-toolset
- name: RPM build MetricCollector
id: rpmbuild
@@ -215,24 +191,25 @@ jobs:
# See: https://github.com/actions/upload-artifact
- name: Save RPM as artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: cc-metric-collector RPM for UBI 8
path: ${{ steps.rpmbuild.outputs.RPM }}
overwrite: true
- name: Save SRPM as artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: cc-metric-collector SRPM for UBI 8
path: ${{ steps.rpmbuild.outputs.SRPM }}
overwrite: true
#
# Build on UBI 9 using go-toolset
# Build on Red Hat Universal Base Image (UBI 9) using go-toolset
#
UBI-9-RPM-build:
runs-on: ubuntu-latest
# See: https://catalog.redhat.com/software/containers/ubi8/ubi/5c359854d70cc534b3a3784e?container-tabs=gti
# See: https://catalog.redhat.com/en/search?searchType=Containers&q=Red+Hat+Universal+Base+Image+9
# https://hub.docker.com/r/redhat/ubi9
container: redhat/ubi9
# The job outputs link to the outputs of the 'rpmbuild' step
# The job outputs link to the outputs of the 'rpmbuild' step
@@ -243,30 +220,20 @@ jobs:
# Use dnf to install development packages
- name: Install development packages
run: dnf --assumeyes --disableplugin=subscription-manager install rpm-build go-srpm-macros gcc make python39 git wget openssl-devel diffutils delve
run: dnf --assumeyes --disableplugin=subscription-manager install rpm-build go-srpm-macros gcc make python39 git wget openssl-devel diffutils delve
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
# See: https://github.com/marketplace/actions/setup-go-environment
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/go-toolset-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-bin-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-src-1.22.7-2.el9_5.noarch.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-race-1.22.7-2.el9_5.x86_64.rpm
dnf --assumeyes --disableplugin=subscription-manager --enablerepo ubi-9-appstream-rpms install go-toolset
- name: RPM build MetricCollector
id: rpmbuild
@@ -276,13 +243,13 @@ jobs:
# See: https://github.com/actions/upload-artifact
- name: Save RPM as artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: cc-metric-collector RPM for UBI 9
path: ${{ steps.rpmbuild.outputs.RPM }}
overwrite: true
- name: Save SRPM as artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: cc-metric-collector SRPM for UBI 9
path: ${{ steps.rpmbuild.outputs.SRPM }}
@@ -308,13 +275,14 @@ jobs:
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
# Use official golang package
# See: https://github.com/marketplace/actions/setup-go-environment
- name: Setup Golang
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: 'stable'
@@ -332,13 +300,13 @@ jobs:
echo "DEB=${NEW_DEB_FILE}" >> $GITHUB_OUTPUT
# See: https://github.com/actions/upload-artifact
- name: Save DEB as artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: cc-metric-collector DEB for Ubuntu 22.04
path: ${{ steps.debrename.outputs.DEB }}
overwrite: true
#
#
# Build on Ubuntu 24.04 using official go package
#
Ubuntu-noblenumbat-build:
@@ -358,13 +326,14 @@ jobs:
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
# Use official golang package
# See: https://github.com/marketplace/actions/setup-go-environment
- name: Setup Golang
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: 'stable'
@@ -382,7 +351,7 @@ jobs:
echo "DEB=${NEW_DEB_FILE}" >> $GITHUB_OUTPUT
# See: https://github.com/actions/upload-artifact
- name: Save DEB as artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: cc-metric-collector DEB for Ubuntu 24.04
path: ${{ steps.debrename.outputs.DEB }}
@@ -400,48 +369,48 @@ jobs:
steps:
# See: https://github.com/actions/download-artifact
- name: Download AlmaLinux 8 RPM
uses: actions/download-artifact@v4
uses: actions/download-artifact@v7
with:
name: cc-metric-collector RPM for AlmaLinux 8
- name: Download AlmaLinux 8 SRPM
uses: actions/download-artifact@v4
uses: actions/download-artifact@v7
with:
name: cc-metric-collector SRPM for AlmaLinux 8
- name: Download AlmaLinux 9 RPM
uses: actions/download-artifact@v4
uses: actions/download-artifact@v7
with:
name: cc-metric-collector RPM for AlmaLinux 9
- name: Download AlmaLinux 9 SRPM
uses: actions/download-artifact@v4
uses: actions/download-artifact@v7
with:
name: cc-metric-collector SRPM for AlmaLinux 9
- name: Download UBI 8 RPM
uses: actions/download-artifact@v4
uses: actions/download-artifact@v7
with:
name: cc-metric-collector RPM for UBI 8
- name: Download UBI 8 SRPM
uses: actions/download-artifact@v4
uses: actions/download-artifact@v7
with:
name: cc-metric-collector SRPM for UBI 8
- name: Download UBI 9 RPM
uses: actions/download-artifact@v4
uses: actions/download-artifact@v7
with:
name: cc-metric-collector RPM for UBI 9
- name: Download UBI 9 SRPM
uses: actions/download-artifact@v4
uses: actions/download-artifact@v7
with:
name: cc-metric-collector SRPM for UBI 9
- name: Download Ubuntu 22.04 DEB
uses: actions/download-artifact@v4
uses: actions/download-artifact@v7
with:
name: cc-metric-collector DEB for Ubuntu 22.04
- name: Download Ubuntu 24.04 DEB
uses: actions/download-artifact@v4
uses: actions/download-artifact@v7
with:
name: cc-metric-collector DEB for Ubuntu 24.04

View File

@@ -20,25 +20,60 @@ jobs:
# See: https://github.com/marketplace/actions/checkout
# Checkout git repository and submodules
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
# See: https://github.com/marketplace/actions/setup-go-environment
- name: Setup Golang
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: '1.21'
go-version: 'stable'
check-latest: true
- name: Install reviewdog
run: |
go install github.com/reviewdog/reviewdog/cmd/reviewdog@latest
# See: https://staticcheck.io
- name: Install staticcheck
run: |
go install honnef.co/go/tools/cmd/staticcheck@latest
# See: https://golangci-lint.run
- name: Install GolangCI-Lint
run: |
go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@latest
- name: Build MetricCollector
run: make
- name: Run MetricCollector once
run: ./cc-metric-collector --once --config .github/ci-config.json
# Running the linter requires likwid.h, which gets downloaded in the build step
- name: Static Analysis with GolangCI-Lint and Upload Report with reviewdog
run: |
golangci-lint run | reviewdog -f=golangci-lint -name "Check golangci-lint on build-latest" -reporter=github-check -filter-mode=nofilter -fail-level none
env:
REVIEWDOG_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Running the linter requires likwid.h, which gets downloaded in the build step
- name: Run Static Analysis with go vet and Upload Report with reviewdog
run: |
go vet ./... | reviewdog -f=govet -name "Check govet on build-latest" -reporter=github-check -filter-mode=nofilter -fail-level none
env:
REVIEWDOG_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Running the linter requires likwid.h, which gets downloaded in the build step
- name: Run Static Analysis with staticcheck and Upload Report with reviewdog
run: |
staticcheck ./... | reviewdog -f=staticcheck -name "Check staticcheck on build-latest" -reporter=github-check -filter-mode=nofilter -fail-level none
env:
REVIEWDOG_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}
#
# Build on AlmaLinux 8
# Build on AlmaLinux 8 using go-toolset
#
AlmaLinux8-RPM-build:
runs-on: ubuntu-latest
@@ -58,23 +93,14 @@ jobs:
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
# See: https://github.com/marketplace/actions/setup-go-environment
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/go-toolset-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-bin-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-src-1.22.9-1.module_el8.10.0+3938+8c723e16.noarch.rpm
dnf --assumeyes --disableplugin=subscription-manager --enablerepo appstream install go-toolset
- name: RPM build MetricCollector
id: rpmbuild
@@ -83,7 +109,7 @@ jobs:
make RPM
#
# Build on AlmaLinux 9
# Build on AlmaLinux 9 using go-toolset
#
AlmaLinux9-RPM-build:
runs-on: ubuntu-latest
@@ -103,24 +129,14 @@ jobs:
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
# See: https://github.com/marketplace/actions/setup-go-environment
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/go-toolset-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-bin-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-src-1.22.7-2.el9_5.noarch.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-race-1.22.7-2.el9_5.x86_64.rpm
dnf --assumeyes --disableplugin=subscription-manager --enablerepo appstream install go-toolset
- name: RPM build MetricCollector
id: rpmbuild
@@ -128,13 +144,49 @@ jobs:
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make RPM
#
# Build on AlmaLinux 10 using go-toolset
#
AlmaLinux10-RPM-build:
runs-on: ubuntu-latest
# See: https://hub.docker.com/_/almalinux
container: almalinux:10
# The job outputs link to the outputs of the 'rpmrename' step
# Only job outputs can be used in child jobs
steps:
# Use dnf to install development packages
- name: Install development packages
run: |
dnf --assumeyes group install "Development Tools" "RPM Development Tools"
dnf --assumeyes install wget openssl-devel diffutils delve which
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager --enablerepo appstream install go-toolset
- name: RPM build MetricCollector
id: rpmbuild
run: |
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make RPM
#
# Build on UBI 8 using go-toolset
# Build on Red Hat Universal Base Image (UBI 8) using go-toolset
#
UBI-8-RPM-build:
runs-on: ubuntu-latest
# See: https://catalog.redhat.com/software/containers/ubi8/ubi/5c359854d70cc534b3a3784e?container-tabs=gti
# See: https://catalog.redhat.com/en/search?searchType=Containers&q=Red+Hat+Universal+Base+Image+8
# https://hub.docker.com/r/redhat/ubi8
container: redhat/ubi8
# The job outputs link to the outputs of the 'rpmbuild' step
steps:
@@ -147,23 +199,14 @@ jobs:
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
# See: https://github.com/marketplace/actions/setup-go-environment
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/go-toolset-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-bin-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-src-1.22.9-1.module_el8.10.0+3938+8c723e16.noarch.rpm
dnf --assumeyes --disableplugin=subscription-manager --enablerepo ubi-8-appstream-rpms install go-toolset
- name: RPM build MetricCollector
id: rpmbuild
@@ -172,11 +215,12 @@ jobs:
make RPM
#
# Build on UBI 9 using go-toolset
# Build on Red Hat Universal Base Image (UBI 9) using go-toolset
#
UBI-9-RPM-build:
runs-on: ubuntu-latest
# See: https://catalog.redhat.com/software/containers/ubi8/ubi/5c359854d70cc534b3a3784e?container-tabs=gti
# See: https://catalog.redhat.com/en/search?searchType=Containers&q=Red+Hat+Universal+Base+Image+9
# https://hub.docker.com/r/redhat/ubi9
container: redhat/ubi9
# The job outputs link to the outputs of the 'rpmbuild' step
steps:
@@ -189,24 +233,48 @@ jobs:
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
# See: https://github.com/marketplace/actions/setup-go-environment
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/go-toolset-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-bin-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-src-1.22.7-2.el9_5.noarch.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-race-1.22.7-2.el9_5.x86_64.rpm
dnf --assumeyes --disableplugin=subscription-manager --enablerepo ubi-9-appstream-rpms install go-toolset
- name: RPM build MetricCollector
id: rpmbuild
run: |
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make RPM
#
# Build on Red Hat Universal Base Image (UBI 10) using go-toolset
#
UBI-10-RPM-build:
runs-on: ubuntu-latest
# See: https://catalog.redhat.com/en/search?searchType=Containers&q=Red+Hat+Universal+Base+Image+10
# https://hub.docker.com/r/redhat/ubi10
container: redhat/ubi10
# The job outputs link to the outputs of the 'rpmbuild' step
steps:
# Use dnf to install development packages
- name: Install development packages
run: dnf --assumeyes --disableplugin=subscription-manager install rpm-build go-srpm-macros gcc make python3 git wget openssl-devel diffutils delve
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager --enablerepo ubi-10-for-x86_64-appstream-rpms install go-toolset
- name: RPM build MetricCollector
id: rpmbuild
@@ -231,14 +299,14 @@ jobs:
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
# Use official golang package
# See: https://github.com/marketplace/actions/setup-go-environment
- name: Setup Golang
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: 'stable'
@@ -265,14 +333,14 @@ jobs:
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
fetch-depth: 0
# Use official golang package
# See: https://github.com/marketplace/actions/setup-go-environment
- name: Setup Golang
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: 'stable'

View File

@@ -72,6 +72,11 @@ staticcheck:
$(GOBIN) install honnef.co/go/tools/cmd/staticcheck@latest
$$($(GOBIN) env GOPATH)/bin/staticcheck ./...
.PHONY: golangci-lint
golangci-lint:
$(GOBIN) install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@latest
$$($(GOBIN) env GOPATH)/bin/golangci-lint run
.ONESHELL:
.PHONY: RPM
RPM: scripts/cc-metric-collector.spec
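Like the existing `staticcheck` target above it, the new `golangci-lint` target first installs the linter via `$(GOBIN) install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@latest` and then runs `golangci-lint run` from `$(go env GOPATH)/bin`, so locally the same lint pass should be reachable as `make golangci-lint`.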

View File

@@ -1,6 +1,17 @@
<!--
---
title: cc-metric-collector
description: Metric collecting node agent
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/_index.md
---
-->
# cc-metric-collector
A node agent for measuring, processing and forwarding node level metrics. It is part of the [ClusterCockpit ecosystem](./docs/introduction.md).
A node agent for measuring, processing and forwarding node level metrics. It is part of the [ClusterCockpit ecosystem](https://clustercockpit.org/docs/overview/).
The metric collector sends (and receives) metric in the [InfluxDB line protocol](https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/) as it provides flexibility while providing a separation between tags (like index columns in relational databases) and fields (like data columns).
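As a purely illustrative example (the metric name, tag values and timestamp here are hypothetical, not taken from this repository), a single line-protocol record keeps the tags before the space and the measured fields after it:

```
cpu_load,hostname=node01,type=node value=1.5 1700000000000000000
```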
@@ -21,12 +32,14 @@ There is a main configuration file with basic settings that point to the other c
``` json
{
"sinks": "sinks.json",
"collectors" : "collectors.json",
"receivers" : "receivers.json",
"router" : "router.json",
"interval": "10s",
"duration": "1s"
"sinks-file": "sinks.json",
"collectors-file" : "collectors.json",
"receivers-file" : "receivers.json",
"router-file" : "router.json",
"main": {
"interval": "10s",
"duration": "1s"
}
}
```
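Because this compare view drops the leading `+`/`-` diff markers, the old and new settings appear interleaved in the block above. Read from the added lines alone, the new-style main configuration would look roughly like this (a sketch, not a verbatim copy of the repository's example):

```json
{
  "sinks-file": "sinks.json",
  "collectors-file": "collectors.json",
  "receivers-file": "receivers.json",
  "router-file": "router.json",
  "main": {
    "interval": "10s",
    "duration": "1s"
  }
}
```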
@@ -35,8 +48,8 @@ The `interval` defines how often the metrics should be read and send to the sink
See the component READMEs for their configuration:
* [`collectors`](./collectors/README.md)
* [`sinks`](./sinks/README.md)
* [`receivers`](./receivers/README.md)
* [`sinks`](https://github.com/ClusterCockpit/cc-lib/blob/main/sinks/README.md)
* [`receivers`](https://github.com/ClusterCockpit/cc-lib/blob/main/receivers/README.md)
* [`router`](./internal/metricRouter/README.md)
# Installation

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package main
import (
@@ -7,17 +14,17 @@ import (
"os/signal"
"syscall"
"github.com/ClusterCockpit/cc-lib/receivers"
"github.com/ClusterCockpit/cc-lib/sinks"
"github.com/ClusterCockpit/cc-lib/v2/receivers"
"github.com/ClusterCockpit/cc-lib/v2/sinks"
"github.com/ClusterCockpit/cc-metric-collector/collectors"
// "strings"
"sync"
"time"
ccconf "github.com/ClusterCockpit/cc-lib/ccConfig"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
ccconf "github.com/ClusterCockpit/cc-lib/v2/ccConfig"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
mr "github.com/ClusterCockpit/cc-metric-collector/internal/metricRouter"
mct "github.com/ClusterCockpit/cc-metric-collector/pkg/multiChanTicker"
)

View File

@@ -1,3 +1,14 @@
<!--
---
title: Metric Collectors
description: Metric collectors for cc-metric-collector
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/_index.md
---
-->
# CCMetric collectors
This folder contains the collectors for the cc-metric-collector.
@@ -23,7 +34,6 @@ In contrast to the configuration files for sinks and receivers, the collectors c
* [`loadavg`](./loadavgMetric.md)
* [`netstat`](./netstatMetric.md)
* [`ibstat`](./infinibandMetric.md)
* [`ibstat_perfquery`](./infinibandPerfQueryMetric.md)
* [`tempstat`](./tempMetric.md)
* [`lustrestat`](./lustreMetric.md)
* [`likwid`](./likwidMetric.md)
@@ -33,13 +43,16 @@ In contrast to the configuration files for sinks and receivers, the collectors c
* [`topprocs`](./topprocsMetric.md)
* [`nfs3stat`](./nfs3Metric.md)
* [`nfs4stat`](./nfs4Metric.md)
* [`nfsiostat`](./nfsiostatMetric.md)
* [`cpufreq`](./cpufreqMetric.md)
* [`cpufreq_cpuinfo`](./cpufreqCpuinfoMetric.md)
* [`schedstat`](./schedstatMetric.md)
* [`numastats`](./numastatsMetric.md)
* [`gpfs`](./gpfsMetric.md)
* [`beegfs_meta`](./beegfsmetaMetric.md)
* [`beegfs_storage`](./beegfsstorageMetric.md)
* [`rocm_smi`](./rocmsmiMetric.md)
* [`slurm_cgroup`](./slurmCgroupMetric.md)
## Todos
@@ -51,10 +64,10 @@ A collector reads data from any source, parses it to metrics and submits these m
* `Name() string`: Return the name of the collector
* `Init(config json.RawMessage) error`: Initializes the collector using the given collector-specific config in JSON. Check if needed files/commands exists, ...
* `Initialized() bool`: Check if a collector is successfully initialized
* `Read(duration time.Duration, output chan ccMetric.CCMetric)`: Read, parse and submit data to the `output` channel as [`CCMetric`](../internal/ccMetric/README.md). If the collector has to measure anything for some duration, use the provided function argument `duration`.
* `Read(duration time.Duration, output chan ccMessage.CCMessage)`: Read, parse and submit data to the `output` channel as [`CCMessage`](https://github.com/ClusterCockpit/cc-lib/blob/main/ccMessage/README.md). If the collector has to measure anything for some duration, use the provided function argument `duration`.
* `Close()`: Closes down the collector.
It is recommanded to call `setup()` in the `Init()` function.
It is recommended to call `setup()` in the `Init()` function.
Finally, the collector needs to be registered in the `collectorManager.go`. There is a list of collectors called `AvailableCollectors` which is a map (`collector_type_string` -> `pointer to MetricCollector interface`). Add a new entry with a descriptive name and the new collector.
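Taken together, the functions listed above amount to the `MetricCollector` interface that a new collector has to satisfy before it is registered in `AvailableCollectors`. A minimal sketch, reconstructed from the README text in this diff (the import alias follows the cc-lib v2 paths used elsewhere in these changes and is illustrative, not copied from the source):

```go
package collectors

import (
	"encoding/json"
	"time"

	lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)

// MetricCollector is the contract described in the collectors README.
type MetricCollector interface {
	Name() string                      // return the collector's name
	Init(config json.RawMessage) error // parse the collector-specific JSON config
	Initialized() bool                 // true once Init() completed successfully
	Read(duration time.Duration, output chan lp.CCMessage) // read, parse and submit metrics
	Close()                            // shut the collector down
}
```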
@@ -87,11 +100,12 @@ func (m *SampleCollector) Init(config json.RawMessage) error {
}
m.name = "SampleCollector"
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
if err != nil {
return err
if err := json.Unmarshal(config, &m.config); err != nil {
return fmt.Errorf("%s Init(): json.Unmarshal() call failed: %w", m.name, err)
}
}
m.meta = map[string]string{"source": m.name, "group": "Sample"}

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -10,12 +17,13 @@ import (
"os/exec"
"os/user"
"regexp"
"slices"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
const DEFAULT_BEEGFS_CMD = "beegfs-ctl"
@@ -54,7 +62,9 @@ func (m *BeegfsMetaCollector) Init(config json.RawMessage) error {
"rmXA", "setXA", "mirror"}
m.name = "BeegfsMetaCollector"
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.parallel = true
// Set default beegfs-ctl binary
@@ -71,8 +81,7 @@ func (m *BeegfsMetaCollector) Init(config json.RawMessage) error {
//create map with possible variables
m.matches = make(map[string]string)
for _, value := range nodeMdstat_array {
_, skip := stringArrayContains(m.config.ExcludeMetrics, value)
if skip {
if slices.Contains(m.config.ExcludeMetrics, value) {
m.matches["other"] = "0"
} else {
m.matches["beegfs_cmeta_"+value] = "0"

View File

@@ -1,5 +1,17 @@
<!--
---
title: BeeGFS metadata metric collector
description: Collect metadata clientstats for `BeeGFS on Demand`
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/beegfsmeta.md
---
-->
## `BeeGFS on Demand` collector
This Collector is to collect BeeGFS on Demand (BeeOND) metadata clientstats.
This Collector is to collect `BeeGFS on Demand` (BeeOND) metadata clientstats.
```json
"beegfs_meta": {
@@ -72,4 +84,4 @@ Available Metrics:
* setXA
* mirror
The collector adds a `filesystem` tag to all metrics
The collector adds a `filesystem` tag to all metrics

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -10,12 +17,13 @@ import (
"os/exec"
"os/user"
"regexp"
"slices"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
// Struct for the collector-specific JSON config
@@ -47,7 +55,9 @@ func (m *BeegfsStorageCollector) Init(config json.RawMessage) error {
"storInf", "unlnk"}
m.name = "BeegfsStorageCollector"
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.parallel = true
// Set default beegfs-ctl binary
@@ -64,8 +74,7 @@ func (m *BeegfsStorageCollector) Init(config json.RawMessage) error {
//create map with possible variables
m.matches = make(map[string]string)
for _, value := range storageStat_array {
_, skip := stringArrayContains(m.config.ExcludeMetrics, value)
if skip {
if slices.Contains(m.config.ExcludeMetrics, value) {
m.matches["other"] = "0"
} else {
m.matches["beegfs_cstorage_"+value] = "0"

View File

@@ -1,3 +1,14 @@
<!--
---
title: "BeeGFS on Demand metric collector"
description: Collect performance metrics for BeeGFS filesystems
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/beegfsstorage.md
---
-->
## `BeeGFS on Demand` collector
This Collector is to collect BeeGFS on Demand (BeeOND) storage stats.
@@ -52,4 +63,4 @@ Available Metrics:
* "unlnk"
The collector adds a `filesystem` tag to all metrics
The collector adds a `filesystem` tag to all metrics

View File

@@ -1,12 +1,20 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"encoding/json"
"fmt"
"sync"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
mct "github.com/ClusterCockpit/cc-metric-collector/pkg/multiChanTicker"
)
@@ -40,6 +48,7 @@ var AvailableCollectors = map[string]MetricCollector{
"self": new(SelfCollector),
"schedstat": new(SchedstatCollector),
"nfsiostat": new(NfsIOStatCollector),
"slurm_cgroup": new(SlurmCgroupCollector),
}
// Metric collector manager data structure
@@ -96,7 +105,7 @@ func (cm *collectorManager) Init(ticker mct.MultiChanTicker, duration time.Durat
err = collector.Init(collectorCfg)
if err != nil {
cclog.ComponentError("CollectorManager", "Collector", collectorName, "initialization failed:", err.Error())
cclog.ComponentError("CollectorManager", fmt.Sprintf("Collector %s initialization failed: %v", collectorName, err))
continue
}
cclog.ComponentDebug("CollectorManager", "ADD COLLECTOR", collector.Name())

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -10,8 +17,8 @@ import (
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
// CPUFreqCollector
@@ -34,9 +41,10 @@ func (m *CPUFreqCpuInfoCollector) Init(config json.RawMessage) error {
return nil
}
m.setup()
m.name = "CPUFreqCpuInfoCollector"
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.parallel = true
m.meta = map[string]string{
"source": m.name,
@@ -49,7 +57,6 @@ func (m *CPUFreqCpuInfoCollector) Init(config json.RawMessage) error {
if err != nil {
return fmt.Errorf("failed to open file '%s': %v", cpuInfoFile, err)
}
defer file.Close()
// Collect topology information from file cpuinfo
foundFreq := false
@@ -79,6 +86,10 @@ func (m *CPUFreqCpuInfoCollector) Init(config json.RawMessage) error {
}
}
if err := file.Close(); err != nil {
return fmt.Errorf("%s Init(): Call to file.Close() failed: %w", m.name, err)
}
// were all topology information collected?
if foundFreq &&
len(processor) > 0 &&
@@ -133,7 +144,13 @@ func (m *CPUFreqCpuInfoCollector) Read(interval time.Duration, output chan lp.CC
fmt.Sprintf("Read(): Failed to open file '%s': %v", cpuInfoFile, err))
return
}
defer file.Close()
defer func() {
if err := file.Close(); err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to close file '%s': %v", cpuInfoFile, err))
}
}()
processorCounter := 0
now := time.Now()

View File

@@ -1,3 +1,14 @@
<!--
---
title: CPU frequency metric collector through cpuinfo
description: Collect the CPU frequency from `/proc/cpuinfo`
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/cpufreq_cpuinfo.md
---
-->
## `cpufreq_cpuinfo` collector
```json

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -9,8 +16,8 @@ import (
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
"github.com/ClusterCockpit/cc-metric-collector/pkg/ccTopology"
"golang.org/x/sys/unix"
)
@@ -41,7 +48,9 @@ func (m *CPUFreqCollector) Init(config json.RawMessage) error {
}
m.name = "CPUFreqCollector"
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.parallel = true
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)

View File

@@ -1,3 +1,14 @@
<!--
---
title: CPU frequency metric collector through sysfs
description: Collect the CPU frequency metrics from `/sys/.../cpu/.../cpufreq`
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/cpufreq.md
---
-->
## `cpufreq` collector
```json

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -5,12 +12,13 @@ import (
"encoding/json"
"fmt"
"os"
"slices"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
sysconf "github.com/tklauser/go-sysconf"
)
@@ -32,10 +40,17 @@ type CpustatCollector struct {
func (m *CpustatCollector) Init(config json.RawMessage) error {
m.name = "CpustatCollector"
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.parallel = true
m.meta = map[string]string{"source": m.name, "group": "CPU"}
m.nodetags = map[string]string{"type": "node"}
m.meta = map[string]string{
"source": m.name,
"group": "CPU",
}
m.nodetags = map[string]string{
"type": "node",
}
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
if err != nil {
@@ -57,14 +72,7 @@ func (m *CpustatCollector) Init(config json.RawMessage) error {
m.matches = make(map[string]int)
for match, index := range matches {
doExclude := false
for _, exclude := range m.config.ExcludeMetrics {
if match == exclude {
doExclude = true
break
}
}
if !doExclude {
if !slices.Contains(m.config.ExcludeMetrics, match) {
m.matches[match] = index
}
}
@@ -72,9 +80,17 @@ func (m *CpustatCollector) Init(config json.RawMessage) error {
// Check input file
file, err := os.Open(string(CPUSTATFILE))
if err != nil {
cclog.ComponentError(m.name, err.Error())
cclog.ComponentError(
m.name,
fmt.Sprintf("Init(): Failed to open file '%s': %v", string(CPUSTATFILE), err))
}
defer file.Close()
defer func() {
if err := file.Close(); err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Init(): Failed to close file '%s': %v", string(CPUSTATFILE), err))
}
}()
// Pre-generate tags for all CPUs
num_cpus := 0
@@ -148,9 +164,17 @@ func (m *CpustatCollector) Read(interval time.Duration, output chan lp.CCMessage
file, err := os.Open(string(CPUSTATFILE))
if err != nil {
cclog.ComponentError(m.name, err.Error())
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to open file '%s': %v", string(CPUSTATFILE), err))
}
defer file.Close()
defer func() {
if err := file.Close(); err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to close file '%s': %v", string(CPUSTATFILE), err))
}
}()
scanner := bufio.NewScanner(file)
for scanner.Scan() {

View File

@@ -1,3 +1,14 @@
<!--
---
title: CPU usage metric collector
description: Collect CPU metrics from `/proc/stat`
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/cpustat.md
---
-->
## `cpustat` collector
@@ -24,4 +35,4 @@ Metrics:
* `cpu_guest` with `unit=Percent`
* `cpu_guest_nice` with `unit=Percent`
* `cpu_used` = `cpu_* - cpu_idle` with `unit=Percent`
* `num_cpus`
* `num_cpus`
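
The `cpu_used` formula above can be illustrated with a small standalone sketch (not the collector's actual code): it samples the aggregate `cpu` line of `/proc/stat` twice and turns the counter deltas into a percentage. The one-second sampling window and the field positions are assumptions for illustration only.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// readCPUCounters returns the counters of the first ("cpu ") line of /proc/stat.
func readCPUCounters() ([]int64, error) {
	data, err := os.ReadFile("/proc/stat")
	if err != nil {
		return nil, err
	}
	fields := strings.Fields(strings.SplitN(string(data), "\n", 2)[0])
	counters := make([]int64, 0, len(fields)-1)
	for _, f := range fields[1:] { // skip the "cpu" label
		v, err := strconv.ParseInt(f, 10, 64)
		if err != nil {
			return nil, err
		}
		counters = append(counters, v)
	}
	return counters, nil
}

func main() {
	before, err := readCPUCounters()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	time.Sleep(time.Second)
	after, err := readCPUCounters()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	var total, idle int64
	for i := range before {
		delta := after[i] - before[i]
		total += delta
		if i == 3 { // the fourth counter is "idle"
			idle = delta
		}
	}
	if total > 0 {
		// cpu_used = all jiffies spent outside idle, expressed in percent
		fmt.Printf("cpu_used: %.1f %%\n", 100*float64(total-idle)/float64(total))
	}
}
```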

View File

@@ -1,15 +1,24 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"encoding/json"
"errors"
"fmt"
"log"
"os"
"os/exec"
"slices"
"strings"
"time"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
influx "github.com/influxdata/line-protocol"
)
@@ -42,11 +51,16 @@ func (m *CustomCmdCollector) Init(config json.RawMessage) error {
return err
}
}
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
for _, c := range m.config.Commands {
cmdfields := strings.Fields(c)
command := exec.Command(cmdfields[0], strings.Join(cmdfields[1:], " "))
command.Wait()
command := exec.Command(cmdfields[0], cmdfields[1:]...)
if err := command.Wait(); err != nil {
log.Print(err)
continue
}
_, err = command.Output()
if err == nil {
m.commands = append(m.commands, c)
@@ -81,8 +95,11 @@ func (m *CustomCmdCollector) Read(interval time.Duration, output chan lp.CCMessa
}
for _, cmd := range m.commands {
cmdfields := strings.Fields(cmd)
command := exec.Command(cmdfields[0], strings.Join(cmdfields[1:], " "))
command.Wait()
command := exec.Command(cmdfields[0], cmdfields[1:]...)
if err := command.Wait(); err != nil {
log.Print(err)
continue
}
stdout, err := command.Output()
if err != nil {
log.Print(err)
@@ -94,8 +111,7 @@ func (m *CustomCmdCollector) Read(interval time.Duration, output chan lp.CCMessa
continue
}
for _, c := range cmdmetrics {
_, skip := stringArrayContains(m.config.ExcludeMetrics, c.Name())
if skip {
if slices.Contains(m.config.ExcludeMetrics, c.Name()) {
continue
}
@@ -114,8 +130,7 @@ func (m *CustomCmdCollector) Read(interval time.Duration, output chan lp.CCMessa
continue
}
for _, f := range fmetrics {
_, skip := stringArrayContains(m.config.ExcludeMetrics, f.Name())
if skip {
if slices.Contains(m.config.ExcludeMetrics, f.Name()) {
continue
}
output <- lp.FromInfluxMetric(f)

View File

@@ -1,3 +1,13 @@
<!--
---
title: CustomCommand metric collector
description: Collect messages from custom command or files
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/customcmd.md
---
-->
## `customcmd` collector

View File

@@ -1,15 +1,23 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"bufio"
"encoding/json"
"fmt"
"os"
"strings"
"syscall"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
const MOUNTFILE = `/proc/self/mounts`
@@ -29,7 +37,9 @@ func (m *DiskstatCollector) Init(config json.RawMessage) error {
m.name = "DiskstatCollector"
m.parallel = true
m.meta = map[string]string{"source": m.name, "group": "Disk"}
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
if len(config) > 0 {
if err := json.Unmarshal(config, &m.config); err != nil {
return err
@@ -47,10 +57,11 @@ func (m *DiskstatCollector) Init(config json.RawMessage) error {
}
file, err := os.Open(MOUNTFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
return err
return fmt.Errorf("%s Init(): file open for file \"%s\" failed: %w", m.name, MOUNTFILE, err)
}
if err := file.Close(); err != nil {
return fmt.Errorf("%s Init(): file close for file \"%s\" failed: %w", m.name, MOUNTFILE, err)
}
defer file.Close()
m.init = true
return nil
}
@@ -62,10 +73,18 @@ func (m *DiskstatCollector) Read(interval time.Duration, output chan lp.CCMessag
file, err := os.Open(MOUNTFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to open file '%s': %v", MOUNTFILE, err))
return
}
defer file.Close()
defer func() {
if err := file.Close(); err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to close file '%s': %v", MOUNTFILE, err))
}
}()
part_max_used := uint64(0)
scanner := bufio.NewScanner(file)
@@ -86,7 +105,7 @@ mountLoop:
continue
}
mountPath := strings.Replace(linefields[1], `\040`, " ", -1)
mountPath := strings.ReplaceAll(linefields[1], `\040`, " ")
for _, excl := range m.config.ExcludeMounts {
if strings.Contains(mountPath, excl) {

View File

@@ -1,3 +1,13 @@
<!--
---
title: Disk usage statistics metric collector
description: Collect metrics for various filesystems from `/proc/self/mounts`
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/diskstat.md
---
-->
## `diskstat` collector

View File

@@ -1,41 +1,308 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"bufio"
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"log"
"os/exec"
"os/user"
"slices"
"strconv"
"strings"
"syscall"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
const DEFAULT_GPFS_CMD = "mmpmon"
type GpfsCollectorLastState struct {
bytesRead int64
bytesWritten int64
type GpfsCollectorState map[string]int64
type GpfsCollectorConfig struct {
Mmpmon string `json:"mmpmon_path,omitempty"`
ExcludeFilesystem []string `json:"exclude_filesystem,omitempty"`
ExcludeMetrics []string `json:"exclude_metrics,omitempty"`
Sudo bool `json:"use_sudo,omitempty"`
SendAbsoluteValues bool `json:"send_abs_values,omitempty"`
SendDiffValues bool `json:"send_diff_values,omitempty"`
SendDerivedValues bool `json:"send_derived_values,omitempty"`
SendTotalValues bool `json:"send_total_values,omitempty"`
SendBandwidths bool `json:"send_bandwidths,omitempty"`
}
type GpfsMetricDefinition struct {
name string
desc string
prefix string
unit string
calc string
}
type GpfsCollector struct {
metricCollector
tags map[string]string
config struct {
Mmpmon string `json:"mmpmon_path,omitempty"`
ExcludeFilesystem []string `json:"exclude_filesystem,omitempty"`
SendBandwidths bool `json:"send_bandwidths"`
SendTotalValues bool `json:"send_total_values"`
}
tags map[string]string
config GpfsCollectorConfig
sudoCmd string
skipFS map[string]struct{}
lastTimestamp time.Time // Store time stamp of last tick to derive bandwidths
lastState map[string]GpfsCollectorLastState
lastTimestamp map[string]time.Time // Store timestamp of lastState per filesystem to derive bandwidths
definitions []GpfsMetricDefinition // all metrics to report
lastState map[string]GpfsCollectorState // one GpfsCollectorState per filesystem
}
var GpfsAbsMetrics = []GpfsMetricDefinition{
{
name: "gpfs_num_opens",
desc: "number of opens",
prefix: "_oc_",
unit: "requests",
calc: "none",
},
{
name: "gpfs_num_closes",
desc: "number of closes",
prefix: "_cc_",
unit: "requests",
calc: "none",
},
{
name: "gpfs_num_reads",
desc: "number of reads",
prefix: "_rdc_",
unit: "requests",
calc: "none",
},
{
name: "gpfs_num_writes",
desc: "number of writes",
prefix: "_wc_",
unit: "requests",
calc: "none",
},
{
name: "gpfs_num_readdirs",
desc: "number of readdirs",
prefix: "_dir_",
unit: "requests",
calc: "none",
},
{
name: "gpfs_num_inode_updates",
desc: "number of Inode Updates",
prefix: "_iu_",
unit: "requests",
calc: "none",
},
{
name: "gpfs_bytes_read",
desc: "bytes read",
prefix: "_br_",
unit: "bytes",
calc: "none",
},
{
name: "gpfs_bytes_written",
desc: "bytes written",
prefix: "_bw_",
unit: "bytes",
calc: "none",
},
}
var GpfsDiffMetrics = []GpfsMetricDefinition{
{
name: "gpfs_num_opens_diff",
desc: "number of opens (diff)",
prefix: "_oc_",
unit: "requests",
calc: "difference",
},
{
name: "gpfs_num_closes_diff",
desc: "number of closes (diff)",
prefix: "_cc_",
unit: "requests",
calc: "difference",
},
{
name: "gpfs_num_reads_diff",
desc: "number of reads (diff)",
prefix: "_rdc_",
unit: "requests",
calc: "difference",
},
{
name: "gpfs_num_writes_diff",
desc: "number of writes (diff)",
prefix: "_wc_",
unit: "requests",
calc: "difference",
},
{
name: "gpfs_num_readdirs_diff",
desc: "number of readdirs (diff)",
prefix: "_dir_",
unit: "requests",
calc: "difference",
},
{
name: "gpfs_num_inode_updates_diff",
desc: "number of Inode Updates (diff)",
prefix: "_iu_",
unit: "requests",
calc: "difference",
},
{
name: "gpfs_bytes_read_diff",
desc: "bytes read (diff)",
prefix: "_br_",
unit: "bytes",
calc: "difference",
},
{
name: "gpfs_bytes_written_diff",
desc: "bytes written (diff)",
prefix: "_bw_",
unit: "bytes",
calc: "difference",
},
}
var GpfsDeriveMetrics = []GpfsMetricDefinition{
{
name: "gpfs_opens_rate",
desc: "number of opens (rate)",
prefix: "_oc_",
unit: "requests/sec",
calc: "derivative",
},
{
name: "gpfs_closes_rate",
desc: "number of closes (rate)",
prefix: "_oc_",
unit: "requests/sec",
calc: "derivative",
},
{
name: "gpfs_reads_rate",
desc: "number of reads (rate)",
prefix: "_rdc_",
unit: "requests/sec",
calc: "derivative",
},
{
name: "gpfs_writes_rate",
desc: "number of writes (rate)",
prefix: "_wc_",
unit: "requests/sec",
calc: "derivative",
},
{
name: "gpfs_readdirs_rate",
desc: "number of readdirs (rate)",
prefix: "_dir_",
unit: "requests/sec",
calc: "derivative",
},
{
name: "gpfs_inode_updates_rate",
desc: "number of Inode Updates (rate)",
prefix: "_iu_",
unit: "requests/sec",
calc: "derivative",
},
{
name: "gpfs_bw_read",
desc: "bytes read (rate)",
prefix: "_br_",
unit: "bytes/sec",
calc: "derivative",
},
{
name: "gpfs_bw_write",
desc: "bytes written (rate)",
prefix: "_bw_",
unit: "bytes/sec",
calc: "derivative",
},
}
var GpfsTotalMetrics = []GpfsMetricDefinition{
{
name: "gpfs_bytes_total",
desc: "bytes total",
prefix: "bytesTotal",
unit: "bytes",
calc: "none",
},
{
name: "gpfs_bytes_total_diff",
desc: "bytes total (diff)",
prefix: "bytesTotal",
unit: "bytes",
calc: "difference",
},
{
name: "gpfs_bw_total",
desc: "bytes total (rate)",
prefix: "bytesTotal",
unit: "bytes/sec",
calc: "derivative",
},
{
name: "gpfs_iops",
desc: "iops",
prefix: "iops",
unit: "requests",
calc: "none",
},
{
name: "gpfs_iops_diff",
desc: "iops (diff)",
prefix: "iops",
unit: "requests",
calc: "difference",
},
{
name: "gpfs_iops_rate",
desc: "iops (rate)",
prefix: "iops",
unit: "requests/sec",
calc: "derivative",
},
{
name: "gpfs_metaops",
desc: "metaops",
prefix: "metaops",
unit: "requests",
calc: "none",
},
{
name: "gpfs_metaops_diff",
desc: "metaops (diff)",
prefix: "metaops",
unit: "requests",
calc: "difference",
},
{
name: "gpfs_metaops_rate",
desc: "metaops (rate)",
prefix: "metaops",
unit: "requests/sec",
calc: "derivative",
},
}
func (m *GpfsCollector) Init(config json.RawMessage) error {
@@ -44,9 +311,10 @@ func (m *GpfsCollector) Init(config json.RawMessage) error {
return nil
}
var err error
m.name = "GpfsCollector"
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.parallel = true
// Set default mmpmon binary
@@ -54,7 +322,7 @@ func (m *GpfsCollector) Init(config json.RawMessage) error {
// Read JSON configuration
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
err := json.Unmarshal(config, &m.config)
if err != nil {
log.Print(err.Error())
return err
@@ -72,24 +340,104 @@ func (m *GpfsCollector) Init(config json.RawMessage) error {
for _, fs := range m.config.ExcludeFilesystem {
m.skipFS[fs] = struct{}{}
}
m.lastState = make(map[string]GpfsCollectorLastState)
m.lastState = make(map[string]GpfsCollectorState)
m.lastTimestamp = make(map[string]time.Time)
// GPFS / IBM Spectrum Scale file system statistics can only be queried by user root
user, err := user.Current()
if err != nil {
return fmt.Errorf("failed to get current user: %v", err)
if !m.config.Sudo {
user, err := user.Current()
if err != nil {
cclog.ComponentError(m.name, "Failed to get current user:", err.Error())
return err
}
if user.Uid != "0" {
cclog.ComponentError(m.name, "GPFS file system statistics can only be queried by user root")
return err
}
} else {
p, err := exec.LookPath("sudo")
if err != nil {
cclog.ComponentError(m.name, "Cannot find 'sudo'")
return err
}
m.sudoCmd = p
}
if user.Uid != "0" {
return fmt.Errorf("GPFS file system statistics can only be queried by user root")
// when using sudo, the full path of mmpmon must be specified because
// exec.LookPath will not work as mmpmon is not executable as user
if m.config.Sudo && !strings.HasPrefix(m.config.Mmpmon, "/") {
return fmt.Errorf("when using sudo, mmpmon_path must be provided and an absolute path: %s", m.config.Mmpmon)
}
// Check if mmpmon is in executable search path
p, err := exec.LookPath(m.config.Mmpmon)
if err != nil {
return fmt.Errorf("failed to find mmpmon binary '%s': %v", m.config.Mmpmon, err)
// if using sudo, exec.lookPath will return EACCES (file mode r-x------), this can be ignored
if m.config.Sudo && errors.Is(err, syscall.EACCES) {
cclog.ComponentWarn(m.name, fmt.Sprintf("got error looking for mmpmon binary '%s': %v . This is expected when using sudo, continuing.", m.config.Mmpmon, err))
// the file was given in the config, use it
p = m.config.Mmpmon
} else {
cclog.ComponentError(m.name, fmt.Sprintf("failed to find mmpmon binary '%s': %v", m.config.Mmpmon, err))
return fmt.Errorf("failed to find mmpmon binary '%s': %v", m.config.Mmpmon, err)
}
}
m.config.Mmpmon = p
m.definitions = []GpfsMetricDefinition{}
if m.config.SendAbsoluteValues {
for _, def := range GpfsAbsMetrics {
if !slices.Contains(m.config.ExcludeMetrics, def.name) {
m.definitions = append(m.definitions, def)
}
}
}
if m.config.SendDiffValues {
for _, def := range GpfsDiffMetrics {
if !slices.Contains(m.config.ExcludeMetrics, def.name) {
m.definitions = append(m.definitions, def)
}
}
}
if m.config.SendDerivedValues {
for _, def := range GpfsDeriveMetrics {
if !slices.Contains(m.config.ExcludeMetrics, def.name) {
m.definitions = append(m.definitions, def)
}
}
} else if m.config.SendBandwidths {
for _, def := range GpfsDeriveMetrics {
if def.unit == "bytes/sec" {
if !slices.Contains(m.config.ExcludeMetrics, def.name) {
m.definitions = append(m.definitions, def)
}
}
}
}
if m.config.SendTotalValues {
for _, def := range GpfsTotalMetrics {
if !slices.Contains(m.config.ExcludeMetrics, def.name) {
// only send total metrics of the types requested
if (def.calc == "none" && m.config.SendAbsoluteValues) ||
(def.calc == "difference" && m.config.SendDiffValues) ||
(def.calc == "derivative" && m.config.SendDerivedValues) {
m.definitions = append(m.definitions, def)
}
}
}
} else if m.config.SendBandwidths {
for _, def := range GpfsTotalMetrics {
if def.unit == "bytes/sec" {
if !slices.Contains(m.config.ExcludeMetrics, def.name) {
m.definitions = append(m.definitions, def)
}
}
}
}
if len(m.definitions) == 0 {
return errors.New("no metrics to collect")
}
m.init = true
return nil
}
@@ -100,18 +448,17 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMessage) {
return
}
// Current time stamp
now := time.Now()
// time difference to last time stamp
timeDiff := now.Sub(m.lastTimestamp).Seconds()
// Save current timestamp
m.lastTimestamp = now
// mmpmon:
// -p: generate output that can be parsed
// -s: suppress the prompt on input
// fs_io_s: Displays I/O statistics per mounted file system
cmd := exec.Command(m.config.Mmpmon, "-p", "-s")
var cmd *exec.Cmd
if m.config.Sudo {
cmd = exec.Command(m.sudoCmd, m.config.Mmpmon, "-p", "-s")
} else {
cmd = exec.Command(m.config.Mmpmon, "-p", "-s")
}
cmd.Stdin = strings.NewReader("once fs_io_s\n")
cmdStdout := new(bytes.Buffer)
cmdStderr := new(bytes.Buffer)
@@ -154,9 +501,7 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMessage) {
filesystem, ok := key_value["_fs_"]
if !ok {
cclog.ComponentError(
m.name,
"Read(): Failed to get filesystem name.")
cclog.ComponentError(m.name, "Read(): Failed to get filesystem name.")
continue
}
@@ -168,245 +513,141 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMessage) {
// Add filesystem tag
m.tags["filesystem"] = filesystem
// Create initial last state
if m.config.SendBandwidths {
if _, ok := m.lastState[filesystem]; !ok {
m.lastState[filesystem] = GpfsCollectorLastState{
bytesRead: -1,
bytesWritten: -1,
}
}
if _, ok := m.lastState[filesystem]; !ok {
m.lastState[filesystem] = make(GpfsCollectorState)
}
// read the new values from mmpmon
// return code
rc, err := strconv.Atoi(key_value["_rc_"])
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to convert return code '%s' to int: %v", key_value["_rc_"], err))
cclog.ComponentError(m.name, fmt.Sprintf("Read(): Failed to convert return code '%s' to int: %v", key_value["_rc_"], err))
continue
}
if rc != 0 {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Filesystem '%s' is not ok.", filesystem))
cclog.ComponentError(m.name, fmt.Sprintf("Read(): Filesystem '%s' is not ok.", filesystem))
continue
}
// timestamp
sec, err := strconv.ParseInt(key_value["_t_"], 10, 64)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to convert seconds '%s' to int64: %v", key_value["_t_"], err))
cclog.ComponentError(m.name, fmt.Sprintf("Read(): Failed to convert seconds '%s' to int64: %v", key_value["_t_"], err))
continue
}
msec, err := strconv.ParseInt(key_value["_tu_"], 10, 64)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to convert micro seconds '%s' to int64: %v", key_value["_tu_"], err))
cclog.ComponentError(m.name, fmt.Sprintf("Read(): Failed to convert micro seconds '%s' to int64: %v", key_value["_tu_"], err))
continue
}
timestamp := time.Unix(sec, msec*1000)
// bytes read
bytesRead, err := strconv.ParseInt(key_value["_br_"], 10, 64)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to convert bytes read '%s' to int64: %v", key_value["_br_"], err))
continue
// time difference to last time stamp
var timeDiff float64 = 0
if lastTime, ok := m.lastTimestamp[filesystem]; !ok {
m.lastTimestamp[filesystem] = time.Time{}
} else {
timeDiff = timestamp.Sub(lastTime).Seconds()
}
if y, err :=
lp.NewMessage(
"gpfs_bytes_read",
m.tags,
m.meta,
map[string]interface{}{
"value": bytesRead,
},
timestamp,
); err == nil {
y.AddMeta("unit", "bytes")
output <- y
// get values of all abs metrics
newstate := make(GpfsCollectorState)
for _, metric := range GpfsAbsMetrics {
value, err := strconv.ParseInt(key_value[metric.prefix], 10, 64)
if err != nil {
cclog.ComponentError(m.name, fmt.Sprintf("Read(): Failed to convert %s '%s' to int64: %v", metric.desc, key_value[metric.prefix], err))
continue
}
newstate[metric.prefix] = value
}
if m.config.SendBandwidths {
if lastBytesRead := m.lastState[filesystem].bytesRead; lastBytesRead >= 0 {
bwRead := float64(bytesRead-lastBytesRead) / timeDiff
if y, err :=
lp.NewMessage(
"gpfs_bw_read",
m.tags,
m.meta,
map[string]interface{}{
"value": bwRead,
},
timestamp,
); err == nil {
y.AddMeta("unit", "bytes/sec")
output <- y
// compute total metrics (map[...] will return 0 if key not found)
// bytes read and written
if br, br_ok := newstate["_br_"]; br_ok {
newstate["bytesTotal"] = newstate["bytesTotal"] + br
}
if bw, bw_ok := newstate["_bw_"]; bw_ok {
newstate["bytesTotal"] = newstate["bytesTotal"] + bw
}
// read and write count
if rdc, rdc_ok := newstate["_rdc_"]; rdc_ok {
newstate["iops"] = newstate["iops"] + rdc
}
if wc, wc_ok := newstate["_wc_"]; wc_ok {
newstate["iops"] = newstate["iops"] + wc
}
// meta operations
if oc, oc_ok := newstate["_oc_"]; oc_ok {
newstate["metaops"] = newstate["metaops"] + oc
}
if cc, cc_ok := newstate["_cc_"]; cc_ok {
newstate["metaops"] = newstate["metaops"] + cc
}
if dir, dir_ok := newstate["_dir_"]; dir_ok {
newstate["metaops"] = newstate["metaops"] + dir
}
if iu, iu_ok := newstate["_iu_"]; iu_ok {
newstate["metaops"] = newstate["metaops"] + iu
}
// send desired metrics for this filesystem
for _, metric := range m.definitions {
vold, vold_ok := m.lastState[filesystem][metric.prefix]
vnew, vnew_ok := newstate[metric.prefix]
var value interface{}
value_ok := false
switch metric.calc {
case "none":
if vnew_ok {
value = vnew
value_ok = true
} else if vold_ok {
// for absolute values, if the new value is not available, report no change
value = vold
value_ok = true
}
case "difference":
if vnew_ok && vold_ok {
value = vnew - vold
if value.(int64) < 0 {
value = 0
}
value_ok = true
} else if vold_ok {
// if the difference is not computable, return 0
value = 0
value_ok = true
}
case "derivative":
if vnew_ok && vold_ok && timeDiff > 0 {
value = float64(vnew-vold) / timeDiff
if value.(float64) < 0 {
value = 0
}
value_ok = true
} else if vold_ok {
// if the difference is not computable, return 0
value = 0
value_ok = true
}
}
}
// bytes written
bytesWritten, err := strconv.ParseInt(key_value["_bw_"], 10, 64)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to convert bytes written '%s' to int64: %v", key_value["_bw_"], err))
continue
}
if y, err :=
lp.NewMessage(
"gpfs_bytes_written",
m.tags,
m.meta,
map[string]interface{}{
"value": bytesWritten,
},
timestamp,
); err == nil {
y.AddMeta("unit", "bytes")
output <- y
}
if m.config.SendBandwidths {
if lastBytesWritten := m.lastState[filesystem].bytesRead; lastBytesWritten >= 0 {
bwWrite := float64(bytesWritten-lastBytesWritten) / timeDiff
if y, err :=
lp.NewMessage(
"gpfs_bw_write",
m.tags,
m.meta,
map[string]interface{}{
"value": bwWrite,
},
timestamp,
); err == nil {
y.AddMeta("unit", "bytes/sec")
if value_ok {
y, err := lp.NewMetric(metric.name, m.tags, m.meta, value, timestamp)
if err == nil {
if len(metric.unit) > 0 {
y.AddMeta("unit", metric.unit)
}
output <- y
}
} else {
// the value could not be computed correctly
cclog.ComponentWarn(m.name, fmt.Sprintf("Read(): Could not compute value for filesystem %s of metric %s: vold_ok = %t, vnew_ok = %t", filesystem, metric.name, vold_ok, vnew_ok))
}
}
if m.config.SendBandwidths {
m.lastState[filesystem] = GpfsCollectorLastState{
bytesRead: bytesRead,
bytesWritten: bytesWritten,
}
}
// number of opens
numOpens, err := strconv.ParseInt(key_value["_oc_"], 10, 64)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to convert number of opens '%s' to int64: %v", key_value["_oc_"], err))
continue
}
if y, err := lp.NewMessage("gpfs_num_opens", m.tags, m.meta, map[string]interface{}{"value": numOpens}, timestamp); err == nil {
output <- y
}
// number of closes
numCloses, err := strconv.ParseInt(key_value["_cc_"], 10, 64)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to convert number of closes: '%s' to int64: %v", key_value["_cc_"], err))
continue
}
if y, err := lp.NewMessage("gpfs_num_closes", m.tags, m.meta, map[string]interface{}{"value": numCloses}, timestamp); err == nil {
output <- y
}
// number of reads
numReads, err := strconv.ParseInt(key_value["_rdc_"], 10, 64)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to convert number of reads: '%s' to int64: %v", key_value["_rdc_"], err))
continue
}
if y, err := lp.NewMessage("gpfs_num_reads", m.tags, m.meta, map[string]interface{}{"value": numReads}, timestamp); err == nil {
output <- y
}
// number of writes
numWrites, err := strconv.ParseInt(key_value["_wc_"], 10, 64)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to convert number of writes: '%s' to int64: %v", key_value["_wc_"], err))
continue
}
if y, err := lp.NewMessage("gpfs_num_writes", m.tags, m.meta, map[string]interface{}{"value": numWrites}, timestamp); err == nil {
output <- y
}
// number of read directories
numReaddirs, err := strconv.ParseInt(key_value["_dir_"], 10, 64)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to convert number of read directories: '%s' to int64: %v", key_value["_dir_"], err))
continue
}
if y, err := lp.NewMessage("gpfs_num_readdirs", m.tags, m.meta, map[string]interface{}{"value": numReaddirs}, timestamp); err == nil {
output <- y
}
// Number of inode updates
numInodeUpdates, err := strconv.ParseInt(key_value["_iu_"], 10, 64)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to convert number of inode updates: '%s' to int: %v", key_value["_iu_"], err))
continue
}
if y, err := lp.NewMessage("gpfs_num_inode_updates", m.tags, m.meta, map[string]interface{}{"value": numInodeUpdates}, timestamp); err == nil {
output <- y
}
// Total values
if m.config.SendTotalValues {
bytesTotal := bytesRead + bytesWritten
if y, err :=
lp.NewMessage("gpfs_bytes_total",
m.tags,
m.meta,
map[string]interface{}{
"value": bytesTotal,
},
timestamp,
); err == nil {
y.AddMeta("unit", "bytes")
output <- y
}
iops := numReads + numWrites
if y, err :=
lp.NewMessage("gpfs_iops",
m.tags,
m.meta,
map[string]interface{}{
"value": iops,
},
timestamp,
); err == nil {
output <- y
}
metaops := numInodeUpdates + numCloses + numOpens + numReaddirs
if y, err :=
lp.NewMessage("gpfs_metaops",
m.tags,
m.meta,
map[string]interface{}{
"value": metaops,
},
timestamp,
); err == nil {
output <- y
}
// Save new state, if it contains proper values
if len(newstate) > 0 {
m.lastState[filesystem] = newstate
m.lastTimestamp[filesystem] = timestamp
}
}
}

View File

@@ -1,13 +1,31 @@
<!--
---
title: GPFS collector
description: Collect infos about GPFS filesystems
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/gpfs.md
---
-->
## `gpfs` collector
```json
"ibstat": {
"gpfs": {
"mmpmon_path": "/path/to/mmpmon",
"use_sudo": "true",
"exclude_filesystem": [
"fs1"
],
"send_bandwidths": true,
"send_total_values": true
"exclude_metrics": [
"gpfs_bytes_written"
],
"send_abs_values": true,
"send_diff_values": true,
"send_derived_values": true,
"send_total_values": true,
"send_bandwidths": true
}
```
@@ -16,24 +34,50 @@ GPFS / IBM Spectrum Scale filesystems.
The reported filesystems can be filtered with the `exclude_filesystem` option
in the configuration.
Individual metrics can be disabled for reporting using option `exclude_metrics`.
The path to the `mmpmon` command can be configured with the `mmpmon_path` option
in the configuration. If nothing is set, the collector searches in `$PATH` for `mmpmon`.
If cc-metric-collector is run as non-root, password-less `sudo` can be enabled with `use_sudo`.
Because `mmpmon` is, by default, only executable by root, the collector's lookup of the binary in `$PATH` will fail for a non-root user. If you use `sudo`, you must therefore provide the complete path to `mmpmon` via the `mmpmon_path` parameter.
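
To make the description above concrete, here is a minimal sketch of how such an invocation looks, either directly or wrapped in `sudo`; the `mmpmon` path and the `useSudo` flag are placeholders, and the real collector additionally parses the key/value output into metrics.

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	mmpmon := "/usr/lpp/mmfs/bin/mmpmon" // placeholder; must be absolute when sudo is used
	useSudo := true

	var cmd *exec.Cmd
	if useSudo {
		cmd = exec.Command("sudo", mmpmon, "-p", "-s") // -p: parseable output, -s: suppress prompt
	} else {
		cmd = exec.Command(mmpmon, "-p", "-s")
	}
	// request one-shot per-filesystem I/O statistics
	cmd.Stdin = strings.NewReader("once fs_io_s\n")

	stdout := new(bytes.Buffer)
	cmd.Stdout = stdout
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "mmpmon failed:", err)
		return
	}
	fmt.Print(stdout.String())
}
```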
Metrics:
* `gpfs_bytes_read`
* `gpfs_bytes_written`
* `gpfs_num_opens`
* `gpfs_num_closes`
* `gpfs_num_reads`
* `gpfs_num_writes`
* `gpfs_num_readdirs`
* `gpfs_num_inode_updates`
* `gpfs_bytes_total = gpfs_bytes_read + gpfs_bytes_written` (if `send_total_values == true`)
* `gpfs_iops = gpfs_num_reads + gpfs_num_writes` (if `send_total_values == true`)
* `gpfs_metaops = gpfs_num_inode_updates + gpfs_num_closes + gpfs_num_opens + gpfs_num_readdirs` (if `send_total_values == true`)
* `gpfs_bw_read` (if `send_bandwidths == true`)
* `gpfs_bw_write` (if `send_bandwidths == true`)
* `gpfs_bytes_read` (if `send_abs_values == true`)
* `gpfs_bytes_written` (if `send_abs_values == true`)
* `gpfs_num_opens` (if `send_abs_values == true`)
* `gpfs_num_closes` (if `send_abs_values == true`)
* `gpfs_num_reads` (if `send_abs_values == true`)
* `gpfs_num_writes` (if `send_abs_values == true`)
* `gpfs_num_readdirs` (if `send_abs_values == true`)
* `gpfs_num_inode_updates` (if `send_abs_values == true`)
* `gpfs_bytes_read_diff` (if `send_diff_values == true`)
* `gpfs_bytes_written_diff` (if `send_diff_values == true`)
* `gpfs_num_opens_diff` (if `send_diff_values == true`)
* `gpfs_num_closes_diff` (if `send_diff_values == true`)
* `gpfs_num_reads_diff` (if `send_diff_values == true`)
* `gpfs_num_writes_diff` (if `send_diff_values == true`)
* `gpfs_num_readdirs_diff` (if `send_diff_values == true`)
* `gpfs_num_inode_updates_diff` (if `send_diff_values == true`)
* `gpfs_bw_read` (if `send_derived_values == true` or `send_bandwidths == true`)
* `gpfs_bw_write` (if `send_derived_values == true` or `send_bandwidths == true`)
* `gpfs_opens_rate` (if `send_derived_values == true`)
* `gpfs_closes_rate` (if `send_derived_values == true`)
* `gpfs_reads_rate` (if `send_derived_values == true`)
* `gpfs_writes_rate` (if `send_derived_values == true`)
* `gpfs_readdirs_rate` (if `send_derived_values == true`)
* `gpfs_inode_updates_rate` (if `send_derived_values == true`)
* `gpfs_bytes_total = gpfs_bytes_read + gpfs_bytes_written` (if `send_total_values == true` and `send_abs_values == true`)
* `gpfs_bytes_total_diff` (if `send_total_values == true` and `send_diff_values == true`)
* `gpfs_bw_total` (if (`send_total_values == true` and `send_derived_values == true`) or `send_bandwidths == true`)
* `gpfs_iops = gpfs_num_reads + gpfs_num_writes` (if `send_total_values == true` and `send_abs_values == true`)
* `gpfs_iops_diff` (if `send_total_values == true` and `send_diff_values == true`)
* `gpfs_iops_rate` (if `send_total_values == true` and `send_derived_values == true`)
* `gpfs_metaops = gpfs_num_inode_updates + gpfs_num_closes + gpfs_num_opens + gpfs_num_readdirs` (if `send_total_values == true` and `send_abs_values == true`)
* `gpfs_metaops_diff` (if `send_total_values == true` and `send_diff_values == true`)
* `gpfs_metaops_rate` (if `send_total_values == true` and `send_derived_values == true`)
The collector adds a `filesystem` tag to all metrics
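
The relation between an absolute metric and its `_diff`/`_rate` variants listed above can be summarized in a short sketch (the sample values are invented): the difference between two successive samples is clamped at zero, and the rate divides that difference by the elapsed seconds.

```go
package main

import "fmt"

// diffAndRate derives the *_diff and *_rate values from two successive
// absolute samples taken `seconds` apart.
func diffAndRate(prev, cur int64, seconds float64) (int64, float64) {
	d := cur - prev
	if d < 0 { // counter reset: report no progress instead of a negative value
		d = 0
	}
	rate := 0.0
	if seconds > 0 {
		rate = float64(d) / seconds
	}
	return d, rate
}

func main() {
	// e.g. gpfs_bytes_read sampled 10 seconds apart
	d, bw := diffAndRate(1_000_000, 1_500_000, 10.0)
	fmt.Printf("gpfs_bytes_read_diff = %d bytes, gpfs_bw_read = %.0f bytes/sec\n", d, bw)
}
```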

View File

@@ -1,11 +1,18 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"fmt"
"os"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
"golang.org/x/sys/unix"
"encoding/json"
@@ -58,7 +65,9 @@ func (m *InfinibandCollector) Init(config json.RawMessage) error {
var err error
m.name = "InfinibandCollector"
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.parallel = true
m.meta = map[string]string{
"source": m.name,

View File

@@ -1,3 +1,13 @@
<!--
---
title: InfiniBand Metric collector
description: Collect metrics for InfiniBand devices
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/infiniband.md
---
-->
## `ibstat` collector

View File

@@ -1,30 +1,38 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"bufio"
"encoding/json"
"errors"
"fmt"
"os"
"slices"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
// Constant for the path to /proc/diskstats
const IOSTATFILE = `/proc/diskstats`
type IOstatCollectorConfig struct {
ExcludeMetrics []string `json:"exclude_metrics,omitempty"`
// New field to exclude devices via the JSON configuration
ExcludeDevices []string `json:"exclude_devices,omitempty"`
}
type IOstatCollectorEntry struct {
lastValues map[string]int64
tags map[string]string
currentValues map[string]int64
lastValues map[string]int64
tags map[string]string
}
type IOstatCollector struct {
@@ -39,7 +47,9 @@ func (m *IOstatCollector) Init(config json.RawMessage) error {
m.name = "IOstatCollector"
m.parallel = true
m.meta = map[string]string{"source": m.name, "group": "Disk"}
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
@@ -69,7 +79,7 @@ func (m *IOstatCollector) Init(config json.RawMessage) error {
m.devices = make(map[string]IOstatCollectorEntry)
m.matches = make(map[string]int)
for k, v := range matches {
if _, skip := stringArrayContains(m.config.ExcludeMetrics, k); !skip {
if !slices.Contains(m.config.ExcludeMetrics, k) {
m.matches[k] = v
}
}
@@ -78,10 +88,8 @@ func (m *IOstatCollector) Init(config json.RawMessage) error {
}
file, err := os.Open(IOSTATFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
return err
return fmt.Errorf("%s Init(): Failed to open file \"%s\": %w", m.name, IOSTATFILE, err)
}
defer file.Close()
scanner := bufio.NewScanner(file)
for scanner.Scan() {
@@ -95,21 +103,36 @@ func (m *IOstatCollector) Init(config json.RawMessage) error {
if strings.Contains(device, "loop") {
continue
}
if _, skip := stringArrayContains(m.config.ExcludeDevices, device); skip {
if slices.Contains(m.config.ExcludeDevices, device) {
continue
}
values := make(map[string]int64)
currentValues := make(map[string]int64)
lastValues := make(map[string]int64)
for m := range m.matches {
values[m] = 0
currentValues[m] = 0
lastValues[m] = 0
}
for name, idx := range m.matches {
if idx < len(linefields) {
if value, err := strconv.ParseInt(linefields[idx], 0, 64); err == nil {
currentValues[name] = value
lastValues[name] = value // Set last to current for first read
}
}
}
m.devices[device] = IOstatCollectorEntry{
tags: map[string]string{
"device": device,
"type": "node",
},
lastValues: values,
currentValues: currentValues,
lastValues: lastValues,
}
}
if err := file.Close(); err != nil {
return fmt.Errorf("%s Init(): Failed to close file \"%s\": %w", m.name, IOSTATFILE, err)
}
m.init = true
return err
}
@@ -121,10 +144,18 @@ func (m *IOstatCollector) Read(interval time.Duration, output chan lp.CCMessage)
file, err := os.Open(IOSTATFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to open file '%s': %v", IOSTATFILE, err))
return
}
defer file.Close()
defer func() {
if err := file.Close(); err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to close file '%s': %v", IOSTATFILE, err))
}
}()
scanner := bufio.NewScanner(file)
for scanner.Scan() {
@@ -140,24 +171,28 @@ func (m *IOstatCollector) Read(interval time.Duration, output chan lp.CCMessage)
if strings.Contains(device, "loop") {
continue
}
if _, skip := stringArrayContains(m.config.ExcludeDevices, device); skip {
if slices.Contains(m.config.ExcludeDevices, device) {
continue
}
if _, ok := m.devices[device]; !ok {
continue
}
// Update current and last values
entry := m.devices[device]
for name, idx := range m.matches {
if idx < len(linefields) {
x, err := strconv.ParseInt(linefields[idx], 0, 64)
if err == nil {
diff := x - entry.lastValues[name]
y, err := lp.NewMessage(name, entry.tags, m.meta, map[string]interface{}{"value": int(diff)}, time.Now())
// Calculate difference using previous current and new value
diff := x - entry.currentValues[name]
y, err := lp.NewMetric(name, entry.tags, m.meta, int(diff), time.Now())
if err == nil {
output <- y
}
// Update last to previous current, and current to new value
entry.lastValues[name] = entry.currentValues[name]
entry.currentValues[name] = x
}
entry.lastValues[name] = x
}
}
m.devices[device] = entry

View File

@@ -1,3 +1,13 @@
<!--
---
title: IOStat Metric collector
description: Collect metrics from `/proc/diskstats`
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/iostat.md
---
-->
## `iostat` collector

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -7,14 +14,13 @@ import (
"errors"
"fmt"
"io"
"log"
"os/exec"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
const IPMISENSORS_PATH = `ipmi-sensors`
@@ -37,7 +43,9 @@ func (m *IpmiCollector) Init(config json.RawMessage) error {
}
m.name = "IpmiCollector"
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.parallel = true
m.meta = map[string]string{
"source": m.name,
@@ -109,15 +117,16 @@ func (m *IpmiCollector) readIpmiTool(cmd string, output chan lp.CCMessage) {
}
v, err := strconv.ParseFloat(strings.TrimSpace(lv[1]), 64)
if err == nil {
name := strings.ToLower(strings.Replace(strings.TrimSpace(lv[0]), " ", "_", -1))
name := strings.ToLower(strings.ReplaceAll(strings.TrimSpace(lv[0]), " ", "_"))
unit := strings.TrimSpace(lv[2])
if unit == "Volts" {
switch unit {
case "Volts":
unit = "Volts"
} else if unit == "degrees C" {
case "degrees C":
unit = "degC"
} else if unit == "degrees F" {
case "degrees F":
unit = "degF"
} else if unit == "Watts" {
case "Watts":
unit = "Watts"
}
@@ -143,22 +152,29 @@ func (m *IpmiCollector) readIpmiTool(cmd string, output chan lp.CCMessage) {
func (m *IpmiCollector) readIpmiSensors(cmd string, output chan lp.CCMessage) {
// Setup ipmisensors command
command := exec.Command(cmd, "--comma-separated-output", "--sdr-cache-recreate")
command.Wait()
stdout, err := command.Output()
if err != nil {
log.Print(err)
stdout, _ := command.StdoutPipe()
errBuf := new(bytes.Buffer)
command.Stderr = errBuf
// start command
if err := command.Start(); err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("readIpmiSensors(): Failed to start command \"%s\": %v", command.String(), err),
)
return
}
ll := strings.Split(string(stdout), "\n")
for _, line := range ll {
lv := strings.Split(line, ",")
// Read command output
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
lv := strings.Split(scanner.Text(), ",")
if len(lv) > 3 {
v, err := strconv.ParseFloat(lv[3], 64)
if err == nil {
name := strings.ToLower(strings.Replace(lv[1], " ", "_", -1))
name := strings.ToLower(strings.ReplaceAll(lv[1], " ", "_"))
y, err := lp.NewMessage(name, map[string]string{"type": "node"}, m.meta, map[string]interface{}{"value": v}, time.Now())
if err == nil {
if len(lv) > 4 {
@@ -169,6 +185,18 @@ func (m *IpmiCollector) readIpmiSensors(cmd string, output chan lp.CCMessage) {
}
}
}
// Wait for command end
if err := command.Wait(); err != nil {
errMsg, _ := io.ReadAll(errBuf)
cclog.ComponentError(
m.name,
fmt.Sprintf("readIpmiSensors(): Failed to wait for the end of command \"%s\": %v\n", command.String(), err),
)
cclog.ComponentError(m.name, fmt.Sprintf("readIpmiSensors(): command stderr: \"%s\"\n", strings.TrimSpace(string(errMsg))))
return
}
}
func (m *IpmiCollector) Read(interval time.Duration, output chan lp.CCMessage) {

View File

@@ -1,3 +1,13 @@
<!--
---
title: IPMI Metric collector
description: Collect metrics using ipmitool or ipmi-sensors
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/ipmi.md
---
-->
## `ipmistat` collector

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
/*
@@ -12,6 +19,7 @@ import (
"encoding/json"
"errors"
"fmt"
"maps"
"math"
"os"
"os/signal"
@@ -24,8 +32,8 @@ import (
"time"
"unsafe"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
agg "github.com/ClusterCockpit/cc-metric-collector/internal/metricAggregator"
topo "github.com/ClusterCockpit/cc-metric-collector/pkg/ccTopology"
"github.com/NVIDIA/go-nvml/pkg/dl"
@@ -180,7 +188,7 @@ func getBaseFreq() float64 {
for _, f := range files {
buffer, err := os.ReadFile(f)
if err == nil {
data := strings.Replace(string(buffer), "\n", "", -1)
data := strings.ReplaceAll(string(buffer), "\n", "")
x, err := strconv.ParseInt(data, 0, 64)
if err == nil {
freq = float64(x)
@@ -190,12 +198,8 @@ func getBaseFreq() float64 {
}
if math.IsNaN(freq) {
C.power_init(0)
info := C.get_powerInfo()
if float64(info.baseFrequency) != 0 {
freq = float64(info.baseFrequency)
}
C.power_finalize()
C.timer_init()
freq = float64(C.timer_getCycleClock()) / 1e3
}
return freq * 1e3
}
@@ -227,9 +231,13 @@ func (m *LikwidCollector) Init(config json.RawMessage) error {
if m.config.ForceOverwrite {
cclog.ComponentDebug(m.name, "Set LIKWID_FORCE=1")
os.Setenv("LIKWID_FORCE", "1")
if err := os.Setenv("LIKWID_FORCE", "1"); err != nil {
return fmt.Errorf("error setting environment variable LIKWID_FORCE=1: %v", err)
}
}
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.setup()
m.meta = map[string]string{"group": "PerfCounter"}
cclog.ComponentDebug(m.name, "Get cpulist and init maps and lists")
@@ -313,7 +321,14 @@ func (m *LikwidCollector) Init(config json.RawMessage) error {
case "accessdaemon":
if len(m.config.DaemonPath) > 0 {
p := os.Getenv("PATH")
os.Setenv("PATH", m.config.DaemonPath+":"+p)
if len(p) > 0 {
p = m.config.DaemonPath + ":" + p
} else {
p = m.config.DaemonPath
}
if err := os.Setenv("PATH", p); err != nil {
return fmt.Errorf("error setting environment variable PATH=%s: %v", p, err)
}
}
C.HPMmode(1)
retCode := C.HPMinit()
@@ -372,10 +387,18 @@ func (m *LikwidCollector) takeMeasurement(evidx int, evset LikwidEventsetConfig,
// Watch changes for the lock file ()
watcher, err := fsnotify.NewWatcher()
if err != nil {
cclog.ComponentError(m.name, err.Error())
cclog.ComponentError(
m.name,
fmt.Sprintf("takeMeasurement(): Failed to create a new fsnotify.Watcher: %v", err))
return true, err
}
defer watcher.Close()
defer func() {
if err := watcher.Close(); err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("takeMeasurement(): Failed to close fsnotify.Watcher: %v", err))
}
}()
if len(m.config.LockfilePath) > 0 {
// Check if the lock file exists
info, err := os.Stat(m.config.LockfilePath)
@@ -385,7 +408,9 @@ func (m *LikwidCollector) takeMeasurement(evidx int, evset LikwidEventsetConfig,
if createErr != nil {
return true, fmt.Errorf("failed to create lock file: %v", createErr)
}
file.Close()
if err := file.Close(); err != nil {
return true, fmt.Errorf("failed to close lock file: %v", err)
}
info, err = os.Stat(m.config.LockfilePath) // Recheck the file after creation
}
if err != nil {
@@ -745,9 +770,7 @@ func (m *LikwidCollector) calcGlobalMetrics(groups []LikwidEventsetConfig, inter
// Here we generate parameter list
params := make(map[string]float64)
for _, evset := range groups {
for mname, mres := range evset.metrics[tid] {
params[mname] = mres
}
maps.Copy(params, evset.metrics[tid])
}
params["gotime"] = interval.Seconds()
// Evaluate the metric
@@ -810,13 +833,21 @@ func (m *LikwidCollector) ReadThread(interval time.Duration, output chan lp.CCMe
if !skip {
// read measurements and derive event set metrics
m.calcEventsetMetrics(e, interval, output)
err = m.calcEventsetMetrics(e, interval, output)
if err != nil {
cclog.ComponentError(m.name, err.Error())
return
}
groups = append(groups, e)
}
}
if len(groups) > 0 {
// calculate global metrics
m.calcGlobalMetrics(groups, interval, output)
err = m.calcGlobalMetrics(groups, interval, output)
if err != nil {
cclog.ComponentError(m.name, err.Error())
return
}
}
}

View File

@@ -1,3 +1,13 @@
<!--
---
title: LIKWID collector
description: Collect hardware performance events and metrics using LIKWID
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/likwid.md
---
-->
## `likwid` collector

View File

@@ -1,15 +1,23 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"encoding/json"
"fmt"
"os"
"slices"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
// LoadavgCollector collects:
@@ -35,7 +43,9 @@ type LoadavgCollector struct {
func (m *LoadavgCollector) Init(config json.RawMessage) error {
m.name = "LoadavgCollector"
m.parallel = true
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
if err != nil {
@@ -57,10 +67,10 @@ func (m *LoadavgCollector) Init(config json.RawMessage) error {
m.proc_skips = make([]bool, len(m.proc_matches))
for i, name := range m.load_matches {
_, m.load_skips[i] = stringArrayContains(m.config.ExcludeMetrics, name)
m.load_skips[i] = slices.Contains(m.config.ExcludeMetrics, name)
}
for i, name := range m.proc_matches {
_, m.proc_skips[i] = stringArrayContains(m.config.ExcludeMetrics, name)
m.proc_skips[i] = slices.Contains(m.config.ExcludeMetrics, name)
}
m.init = true
return nil

View File

@@ -1,3 +1,14 @@
<!--
---
title: Load average metric collector
description: Collect metrics from `/proc/loadavg`
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/loadavg.md
---
-->
## `loadavg` collector

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -6,12 +13,13 @@ import (
"fmt"
"os/exec"
"os/user"
"slices"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
const LUSTRE_SYSFS = `/sys/fs/lustre`
@@ -54,7 +62,6 @@ func (m *LustreCollector) getDeviceDataCommand(device string) []string {
} else {
command = exec.Command(m.lctl, LCTL_OPTION, statsfile)
}
command.Wait()
stdout, _ := command.Output()
return strings.Split(string(stdout), "\n")
}
@@ -295,7 +302,9 @@ func (m *LustreCollector) Init(config json.RawMessage) error {
return err
}
}
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.tags = map[string]string{"type": "node"}
m.meta = map[string]string{"source": m.name, "group": "Lustre"}
@@ -332,21 +341,21 @@ func (m *LustreCollector) Init(config json.RawMessage) error {
m.definitions = []LustreMetricDefinition{}
if m.config.SendAbsoluteValues {
for _, def := range LustreAbsMetrics {
if _, skip := stringArrayContains(m.config.ExcludeMetrics, def.name); !skip {
if !slices.Contains(m.config.ExcludeMetrics, def.name) {
m.definitions = append(m.definitions, def)
}
}
}
if m.config.SendDiffValues {
for _, def := range LustreDiffMetrics {
if _, skip := stringArrayContains(m.config.ExcludeMetrics, def.name); !skip {
if !slices.Contains(m.config.ExcludeMetrics, def.name) {
m.definitions = append(m.definitions, def)
}
}
}
if m.config.SendDerivedValues {
for _, def := range LustreDeriveMetrics {
if _, skip := stringArrayContains(m.config.ExcludeMetrics, def.name); !skip {
if !slices.Contains(m.config.ExcludeMetrics, def.name) {
m.definitions = append(m.definitions, def)
}
}

View File

@@ -1,3 +1,14 @@
<!--
---
title: Lustre filesystem metric collector
description: Collect metrics for Lustre filesystems
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/lustre.md
---
-->
## `lustrestat` collector
@@ -43,4 +54,4 @@ Metrics:
* `lustre_statfs_diff` (if `send_diff_values == true`)
* `lustre_inode_permission_diff` (if `send_diff_values == true`)
This collector adds a `device` tag.

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -8,12 +15,13 @@ import (
"os"
"path/filepath"
"regexp"
"slices"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
const MEMSTATFILE = "/proc/meminfo"
@@ -51,7 +59,11 @@ func getStats(filename string) map[string]MemstatStats {
if err != nil {
cclog.Error(err.Error())
}
defer file.Close()
defer func() {
if err := file.Close(); err != nil {
cclog.Error(err.Error())
}
}()
scanner := bufio.NewScanner(file)
for scanner.Scan() {
@@ -108,19 +120,20 @@ func (m *MemstatCollector) Init(config json.RawMessage) error {
"MemShared": "mem_shared",
}
for k, v := range matches {
_, skip := stringArrayContains(m.config.ExcludeMetrics, k)
if !skip {
if !slices.Contains(m.config.ExcludeMetrics, k) {
m.matches[k] = v
}
}
m.sendMemUsed = false
if _, skip := stringArrayContains(m.config.ExcludeMetrics, "mem_used"); !skip {
if !slices.Contains(m.config.ExcludeMetrics, "mem_used") {
m.sendMemUsed = true
}
if len(m.matches) == 0 {
return errors.New("no metrics to collect")
}
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
if m.config.NodeStats {
if stats := getStats(MEMSTATFILE); len(stats) == 0 {
@@ -167,7 +180,7 @@ func (m *MemstatCollector) Read(interval time.Duration, output chan lp.CCMessage
sendStats := func(stats map[string]MemstatStats, tags map[string]string) {
for match, name := range m.matches {
var value float64 = 0
var unit string = ""
unit := ""
if v, ok := stats[match]; ok {
value = v.value
if len(v.unit) > 0 {

View File

@@ -1,3 +1,14 @@
<!--
---
title: Memory statistics metric collector
description: Collect metrics from `/proc/meminfo`
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/memstat.md
---
-->
## `memstat` collector

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -5,7 +12,7 @@ import (
"fmt"
"time"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
type MetricCollector interface {
@@ -44,30 +51,6 @@ func (c *metricCollector) Initialized() bool {
return c.init
}
// intArrayContains scans an array of ints if the value str is present in the array
// If the specified value is found, the corresponding array index is returned.
// The bool value is used to signal success or failure
func intArrayContains(array []int, str int) (int, bool) {
for i, a := range array {
if a == str {
return i, true
}
}
return -1, false
}
// stringArrayContains scans an array of strings if the value str is present in the array
// If the specified value is found, the corresponding array index is returned.
// The bool value is used to signal success or failure
func stringArrayContains(array []string, str string) (int, bool) {
for i, a := range array {
if a == str {
return i, true
}
}
return -1, false
}
// RemoveFromStringList removes the string r from the array of strings s
// If r is not contained in the array an error is returned
func RemoveFromStringList(s []string, r string) ([]string, error) {

View File

@@ -1,16 +1,24 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"bufio"
"encoding/json"
"errors"
"fmt"
"os"
"slices"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
const NETSTATFILE = "/proc/net/dev"
@@ -58,7 +66,9 @@ func getCanonicalName(raw string, aliasToCanonical map[string]string) string {
func (m *NetstatCollector) Init(config json.RawMessage) error {
m.name = "NetstatCollector"
m.parallel = true
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.lastTimestamp = time.Now()
const (
@@ -100,10 +110,8 @@ func (m *NetstatCollector) Init(config json.RawMessage) error {
// Check access to net statistic file
file, err := os.Open(NETSTATFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
return err
return fmt.Errorf("%s Init(): failed to open netstat file \"%s\": %w", m.name, NETSTATFILE, err)
}
defer file.Close()
scanner := bufio.NewScanner(file)
for scanner.Scan() {
@@ -122,7 +130,7 @@ func (m *NetstatCollector) Init(config json.RawMessage) error {
canonical := getCanonicalName(raw, m.aliasToCanonical)
// Check if device is a included device
if _, ok := stringArrayContains(m.config.IncludeDevices, canonical); ok {
if slices.Contains(m.config.IncludeDevices, canonical) {
// Tag will contain original device name (raw).
tags := map[string]string{"stype": "network", "stype-id": raw, "type": "node"}
meta_unit_byte := map[string]string{"source": m.name, "group": "Network", "unit": "bytes"}
@@ -167,8 +175,13 @@ func (m *NetstatCollector) Init(config json.RawMessage) error {
}
}
// Close netstat file
if err := file.Close(); err != nil {
return fmt.Errorf("%s Init(): failed to close netstat file \"%s\": %w", m.name, NETSTATFILE, err)
}
if len(m.matches) == 0 {
return errors.New("no devices to collector metrics found")
return fmt.Errorf("%s Init(): no devices to collect metrics found", m.name)
}
m.init = true
return nil
@@ -187,10 +200,18 @@ func (m *NetstatCollector) Read(interval time.Duration, output chan lp.CCMessage
file, err := os.Open(NETSTATFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to open file '%s': %v", NETSTATFILE, err))
return
}
defer file.Close()
defer func() {
if err := file.Close(); err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to close file '%s': %v", NETSTATFILE, err))
}
}()
scanner := bufio.NewScanner(file)
for scanner.Scan() {

View File

@@ -1,3 +1,13 @@
<!--
---
title: Network device metric collector
description: Collect metrics for network devices through procfs
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/netstat.md
---
-->
## `netstat` collector
@@ -28,4 +38,4 @@ Metrics:
* `net_pkts_in_bw` (`unit=packets/sec` if `send_derived_values == true`)
* `net_pkts_out_bw` (`unit=packets/sec` if `send_derived_values == true`)
The device name is added as tag `stype=network,stype-id=<device>`.

View File

@@ -1,3 +1,14 @@
<!--
---
title: NFS network filesystem (v3) metric collector
description: Collect metrics for NFS network filesystems in version 3
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/nfs3.md
---
-->
## `nfs3stat` collector

View File

@@ -1,3 +1,14 @@
<!--
---
title: NFS network filesystem (v4) metric collector
description: Collect metrics for NFS network filesystems in version 4
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/nfs4.md
---
-->
## `nfs4stat` collector

View File

@@ -1,9 +1,17 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"encoding/json"
"fmt"
"log"
"slices"
// "os"
"os/exec"
@@ -11,7 +19,8 @@ import (
"strings"
"time"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
// First part contains the code for the general NfsCollector.
@@ -37,10 +46,15 @@ type nfsCollector struct {
func (m *nfsCollector) initStats() error {
cmd := exec.Command(m.config.Nfsstats, `-l`, `--all`)
cmd.Wait()
// Wait for cmd end
if err := cmd.Wait(); err != nil {
return fmt.Errorf("initStats(): %w", err)
}
buffer, err := cmd.Output()
if err == nil {
for _, line := range strings.Split(string(buffer), "\n") {
for line := range strings.Lines(string(buffer)) {
lf := strings.Fields(line)
if len(lf) != 5 {
continue
@@ -64,10 +78,15 @@ func (m *nfsCollector) initStats() error {
func (m *nfsCollector) updateStats() error {
cmd := exec.Command(m.config.Nfsstats, `-l`, `--all`)
cmd.Wait()
// Wait for cmd end
if err := cmd.Wait(); err != nil {
return fmt.Errorf("updateStats(): %w", err)
}
buffer, err := cmd.Output()
if err == nil {
for _, line := range strings.Split(string(buffer), "\n") {
for line := range strings.Lines(string(buffer)) {
lf := strings.Fields(line)
if len(lf) != 5 {
continue
@@ -112,7 +131,9 @@ func (m *nfsCollector) MainInit(config json.RawMessage) error {
return fmt.Errorf("NfsCollector.Init(): Failed to find nfsstat binary '%s': %v", m.config.Nfsstats, err)
}
m.data = make(map[string]NfsCollectorData)
m.initStats()
if err := m.initStats(); err != nil {
return fmt.Errorf("NfsCollector.Init(): %w", err)
}
m.init = true
m.parallel = true
return nil
@@ -124,7 +145,13 @@ func (m *nfsCollector) Read(interval time.Duration, output chan lp.CCMessage) {
}
timestamp := time.Now()
m.updateStats()
if err := m.updateStats(); err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): updateStats() failed: %v", err),
)
return
}
prefix := ""
switch m.version {
case "v3":
@@ -136,7 +163,7 @@ func (m *nfsCollector) Read(interval time.Duration, output chan lp.CCMessage) {
}
for name, data := range m.data {
if _, skip := stringArrayContains(m.config.ExcludeMetrics, name); skip {
if slices.Contains(m.config.ExcludeMetrics, name) {
continue
}
value := data.current - data.last
@@ -163,13 +190,17 @@ type Nfs4Collector struct {
func (m *Nfs3Collector) Init(config json.RawMessage) error {
m.name = "Nfs3Collector"
m.version = `v3`
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
return m.MainInit(config)
}
func (m *Nfs4Collector) Init(config json.RawMessage) error {
m.name = "Nfs4Collector"
m.version = `v4`
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
return m.MainInit(config)
}

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -5,12 +12,13 @@ import (
"fmt"
"os"
"regexp"
"slices"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
// These are the fields we read from the JSON configuration
@@ -64,7 +72,7 @@ func (m *NfsIOStatCollector) readNfsiostats() map[string]map[string]int64 {
// Is this a device line with mount point, remote target and NFS version?
dev := resolve_regex_fields(l, deviceRegex)
if len(dev) > 0 {
if _, ok := stringArrayContains(m.config.ExcludeFilesystem, dev[m.key]); !ok {
if !slices.Contains(m.config.ExcludeFilesystem, dev[m.key]) {
current = dev
if len(current["version"]) == 0 {
current["version"] = "3"
@@ -78,7 +86,7 @@ func (m *NfsIOStatCollector) readNfsiostats() map[string]map[string]int64 {
if len(bytes) > 0 {
data[current[m.key]] = make(map[string]int64)
for name, sval := range bytes {
if _, ok := stringArrayContains(m.config.ExcludeMetrics, name); !ok {
if !slices.Contains(m.config.ExcludeMetrics, name) {
val, err := strconv.ParseInt(sval, 10, 64)
if err == nil {
data[current[m.key]][name] = val
@@ -95,7 +103,9 @@ func (m *NfsIOStatCollector) readNfsiostats() map[string]map[string]int64 {
func (m *NfsIOStatCollector) Init(config json.RawMessage) error {
var err error = nil
m.name = "NfsIOStatCollector"
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.parallel = true
m.meta = map[string]string{"source": m.name, "group": "NFS", "unit": "bytes"}
m.tags = map[string]string{"type": "node"}
@@ -171,7 +181,7 @@ func (m *NfsIOStatCollector) Read(interval time.Duration, output chan lp.CCMessa
}
}
if !found {
m.data[mntpoint] = nil
delete(m.data, mntpoint)
}
}
}

View File

@@ -1,3 +1,14 @@
<!--
---
title: NFS network filesystem metrics from procfs
description: Collect NFS network filesystem metrics for mounts from `/proc/self/mountstats`
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/nfsio.md
---
-->
## `nfsiostat` collector
```json

View File

@@ -10,8 +10,8 @@ import (
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
type NUMAStatsCollectorConfig struct {
@@ -72,12 +72,22 @@ func (m *NUMAStatsCollector) Init(config json.RawMessage) error {
m.name = "NUMAStatsCollector"
m.parallel = true
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.meta = map[string]string{
"source": m.name,
"group": "NUMA",
}
m.config.SendAbsoluteValues = true
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
if err != nil {
return fmt.Errorf("unable to unmarshal numastat configuration: %s", err.Error())
}
}
// Loop for all NUMA node directories
base := "/sys/devices/system/node/node"
globPattern := base + "[0-9]*"
@@ -94,8 +104,11 @@ func (m *NUMAStatsCollector) Init(config json.RawMessage) error {
file := filepath.Join(dir, "numastat")
m.topology = append(m.topology,
NUMAStatsCollectorTopolgy{
file: file,
tagSet: map[string]string{"memoryDomain": node},
file: file,
tagSet: map[string]string{
"type": "memoryDomain",
"type-id": node,
},
previousValues: make(map[string]int64),
})
}
@@ -145,11 +158,11 @@ func (m *NUMAStatsCollector) Read(interval time.Duration, output chan lp.CCMessa
}
if m.config.SendAbsoluteValues {
msg, err := lp.NewMessage(
msg, err := lp.NewMetric(
"numastats_"+key,
t.tagSet,
m.meta,
map[string]interface{}{"value": value},
value,
now,
)
if err == nil {
@@ -161,11 +174,11 @@ func (m *NUMAStatsCollector) Read(interval time.Duration, output chan lp.CCMessa
prev, ok := t.previousValues[key]
if ok {
rate := float64(value-prev) / timeDiff
msg, err := lp.NewMessage(
msg, err := lp.NewMetric(
"numastats_"+key+"_rate",
t.tagSet,
m.meta,
map[string]interface{}{"value": rate},
rate,
now,
)
if err == nil {
@@ -175,7 +188,11 @@ func (m *NUMAStatsCollector) Read(interval time.Duration, output chan lp.CCMessa
t.previousValues[key] = value
}
}
file.Close()
if err := file.Close(); err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to close file '%s': %v", t.file, err))
}
}
}

View File

@@ -1,3 +1,13 @@
<!--
---
title: NUMAStat collector
description: Collect infos about NUMA domains
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/numastat.md
---
-->
## `numastat` collector

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -5,11 +12,13 @@ import (
"errors"
"fmt"
"log"
"maps"
"slices"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
"github.com/NVIDIA/go-nvml/pkg/nvml"
)
@@ -27,10 +36,12 @@ type NvidiaCollectorConfig struct {
}
type NvidiaCollectorDevice struct {
device nvml.Device
excludeMetrics map[string]bool
tags map[string]string
meta map[string]string
device nvml.Device
excludeMetrics map[string]bool
tags map[string]string
meta map[string]string
lastEnergyReading uint64
lastEnergyTimestamp time.Time
}
type NvidiaCollector struct {
@@ -55,7 +66,9 @@ func (m *NvidiaCollector) Init(config json.RawMessage) error {
m.config.ProcessMigDevices = false
m.config.UseUuidForMigDevices = false
m.config.UseSliceForMigDevices = false
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
@@ -100,7 +113,7 @@ func (m *NvidiaCollector) Init(config json.RawMessage) error {
// Skip excluded devices by ID
str_i := fmt.Sprintf("%d", i)
if _, skip := stringArrayContains(m.config.ExcludeDevices, str_i); skip {
if slices.Contains(m.config.ExcludeDevices, str_i) {
cclog.ComponentDebug(m.name, "Skipping excluded device", str_i)
continue
}
@@ -128,7 +141,7 @@ func (m *NvidiaCollector) Init(config json.RawMessage) error {
pciInfo.Device)
// Skip excluded devices specified by PCI ID
if _, skip := stringArrayContains(m.config.ExcludeDevices, pci_id); skip {
if slices.Contains(m.config.ExcludeDevices, pci_id) {
cclog.ComponentDebug(m.name, "Skipping excluded device", pci_id)
continue
}
@@ -149,6 +162,8 @@ func (m *NvidiaCollector) Init(config json.RawMessage) error {
// Add device handle
g.device = device
g.lastEnergyReading = 0
g.lastEnergyTimestamp = time.Now()
// Add tags
g.tags = map[string]string{
@@ -206,18 +221,20 @@ func (m *NvidiaCollector) Init(config json.RawMessage) error {
return nil
}
func readMemoryInfo(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readMemoryInfo(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
if !device.excludeMetrics["nv_fb_mem_total"] || !device.excludeMetrics["nv_fb_mem_used"] || !device.excludeMetrics["nv_fb_mem_reserved"] {
var total uint64
var used uint64
var reserved uint64 = 0
var v2 bool = false
v2 := false
meminfo, ret := nvml.DeviceGetMemoryInfo(device.device)
if ret != nvml.SUCCESS {
err := errors.New(nvml.ErrorString(ret))
return err
}
// Total physical device memory (in bytes)
total = meminfo.Total
// Sum of Reserved and Allocated device memory (in bytes)
used = meminfo.Used
if !device.excludeMetrics["nv_fb_mem_total"] {
@@ -250,7 +267,7 @@ func readMemoryInfo(device NvidiaCollectorDevice, output chan lp.CCMessage) erro
return nil
}
func readBarMemoryInfo(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readBarMemoryInfo(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
if !device.excludeMetrics["nv_bar1_mem_total"] || !device.excludeMetrics["nv_bar1_mem_used"] {
meminfo, ret := nvml.DeviceGetBAR1MemoryInfo(device.device)
if ret != nvml.SUCCESS {
@@ -277,7 +294,7 @@ func readBarMemoryInfo(device NvidiaCollectorDevice, output chan lp.CCMessage) e
return nil
}
func readUtilization(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readUtilization(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
isMig, ret := nvml.DeviceIsMigDeviceHandle(device.device)
if ret != nvml.SUCCESS {
err := errors.New(nvml.ErrorString(ret))
@@ -319,7 +336,7 @@ func readUtilization(device NvidiaCollectorDevice, output chan lp.CCMessage) err
return nil
}
func readTemp(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readTemp(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
if !device.excludeMetrics["nv_temp"] {
// Retrieves the current temperature readings for the device, in degrees C.
//
@@ -338,7 +355,7 @@ func readTemp(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
return nil
}
func readFan(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readFan(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
if !device.excludeMetrics["nv_fan"] {
// Retrieves the intended operating speed of the device's fan.
//
@@ -361,7 +378,7 @@ func readFan(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
return nil
}
// func readFans(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
// func readFans(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
// if !device.excludeMetrics["nv_fan"] {
// numFans, ret := nvml.DeviceGetNumFans(device.device)
// if ret == nvml.SUCCESS {
@@ -382,7 +399,7 @@ func readFan(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
// return nil
// }
func readEccMode(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readEccMode(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
if !device.excludeMetrics["nv_ecc_mode"] {
// Retrieves the current and pending ECC modes for the device.
//
@@ -392,7 +409,8 @@ func readEccMode(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
// Changing ECC modes requires a reboot.
// The "pending" ECC mode refers to the target mode following the next reboot.
_, ecc_pend, ret := nvml.DeviceGetEccMode(device.device)
if ret == nvml.SUCCESS {
switch ret {
case nvml.SUCCESS:
var y lp.CCMessage
var err error
switch ecc_pend {
@@ -406,7 +424,7 @@ func readEccMode(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
if err == nil {
output <- y
}
} else if ret == nvml.ERROR_NOT_SUPPORTED {
case nvml.ERROR_NOT_SUPPORTED:
y, err := lp.NewMessage("nv_ecc_mode", device.tags, device.meta, map[string]interface{}{"value": "N/A"}, time.Now())
if err == nil {
output <- y
@@ -416,7 +434,7 @@ func readEccMode(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
return nil
}
func readPerfState(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readPerfState(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
if !device.excludeMetrics["nv_perf_state"] {
// Retrieves the current performance state for the device.
//
@@ -436,13 +454,16 @@ func readPerfState(device NvidiaCollectorDevice, output chan lp.CCMessage) error
return nil
}
func readPowerUsage(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readPowerUsage(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
if !device.excludeMetrics["nv_power_usage"] {
// Retrieves power usage for this GPU in milliwatts and its associated circuitry (e.g. memory)
//
// On Fermi and Kepler GPUs the reading is accurate to within +/- 5% of current power draw.
// On Ampere (except GA100) or newer GPUs, the API returns power averaged over 1 sec interval.
// On GA100 and older architectures, instantaneous power is returned.
//
// It is only available if power management mode is supported
// It is only available if power management mode is supported.
mode, ret := nvml.DeviceGetPowerManagementMode(device.device)
if ret != nvml.SUCCESS {
return nil
@@ -461,7 +482,54 @@ func readPowerUsage(device NvidiaCollectorDevice, output chan lp.CCMessage) erro
return nil
}
func readClocks(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readEnergyConsumption(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
// Retrieves total energy consumption for this GPU in millijoules (mJ) since the driver was last reloaded
// For Volta or newer fully supported devices.
if (!device.excludeMetrics["nv_energy"]) && (!device.excludeMetrics["nv_energy_abs"]) && (!device.excludeMetrics["nv_average_power"]) {
now := time.Now()
mode, ret := nvml.DeviceGetPowerManagementMode(device.device)
if ret != nvml.SUCCESS {
return nil
}
if mode == nvml.FEATURE_ENABLED {
energy, ret := nvml.DeviceGetTotalEnergyConsumption(device.device)
if ret == nvml.SUCCESS {
if device.lastEnergyReading != 0 {
if !device.excludeMetrics["nv_energy"] {
y, err := lp.NewMetric("nv_energy", device.tags, device.meta, (energy-device.lastEnergyReading)/1000, now)
if err == nil {
y.AddMeta("unit", "Joules")
output <- y
}
}
if !device.excludeMetrics["nv_average_power"] {
energyDiff := (energy - device.lastEnergyReading) / 1000
timeDiff := now.Sub(device.lastEnergyTimestamp)
y, err := lp.NewMetric("nv_average_power", device.tags, device.meta, energyDiff/uint64(timeDiff.Seconds()), now)
if err == nil {
y.AddMeta("unit", "watts")
output <- y
}
}
}
if !device.excludeMetrics["nv_energy_abs"] {
y, err := lp.NewMetric("nv_energy_abs", device.tags, device.meta, energy/1000, now)
if err == nil {
y.AddMeta("unit", "Joules")
output <- y
}
}
device.lastEnergyReading = energy
device.lastEnergyTimestamp = time.Now()
}
}
}
return nil
}
func readClocks(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
// Retrieves the current clock speeds for the device.
//
// Available clock information:
@@ -513,7 +581,7 @@ func readClocks(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
return nil
}
func readMaxClocks(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readMaxClocks(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
// Retrieves the maximum clock speeds for the device.
//
// Available clock information:
@@ -528,7 +596,7 @@ func readMaxClocks(device NvidiaCollectorDevice, output chan lp.CCMessage) error
if !device.excludeMetrics["nv_max_graphics_clock"] {
max_gclk, ret := nvml.DeviceGetMaxClockInfo(device.device, nvml.CLOCK_GRAPHICS)
if ret == nvml.SUCCESS {
y, err := lp.NewMessage("nv_max_graphics_clock", device.tags, device.meta, map[string]interface{}{"value": float64(max_gclk)}, time.Now())
y, err := lp.NewMetric("nv_max_graphics_clock", device.tags, device.meta, float64(max_gclk), time.Now())
if err == nil {
y.AddMeta("unit", "MHz")
output <- y
@@ -537,9 +605,9 @@ func readMaxClocks(device NvidiaCollectorDevice, output chan lp.CCMessage) error
}
if !device.excludeMetrics["nv_max_sm_clock"] {
maxSmClock, ret := nvml.DeviceGetClockInfo(device.device, nvml.CLOCK_SM)
maxSmClock, ret := nvml.DeviceGetMaxClockInfo(device.device, nvml.CLOCK_SM)
if ret == nvml.SUCCESS {
y, err := lp.NewMessage("nv_max_sm_clock", device.tags, device.meta, map[string]interface{}{"value": float64(maxSmClock)}, time.Now())
y, err := lp.NewMetric("nv_max_sm_clock", device.tags, device.meta, float64(maxSmClock), time.Now())
if err == nil {
y.AddMeta("unit", "MHz")
output <- y
@@ -548,9 +616,9 @@ func readMaxClocks(device NvidiaCollectorDevice, output chan lp.CCMessage) error
}
if !device.excludeMetrics["nv_max_mem_clock"] {
maxMemClock, ret := nvml.DeviceGetClockInfo(device.device, nvml.CLOCK_MEM)
maxMemClock, ret := nvml.DeviceGetMaxClockInfo(device.device, nvml.CLOCK_MEM)
if ret == nvml.SUCCESS {
y, err := lp.NewMessage("nv_max_mem_clock", device.tags, device.meta, map[string]interface{}{"value": float64(maxMemClock)}, time.Now())
y, err := lp.NewMetric("nv_max_mem_clock", device.tags, device.meta, float64(maxMemClock), time.Now())
if err == nil {
y.AddMeta("unit", "MHz")
output <- y
@@ -559,9 +627,9 @@ func readMaxClocks(device NvidiaCollectorDevice, output chan lp.CCMessage) error
}
if !device.excludeMetrics["nv_max_video_clock"] {
maxMemClock, ret := nvml.DeviceGetClockInfo(device.device, nvml.CLOCK_VIDEO)
maxVideoClock, ret := nvml.DeviceGetMaxClockInfo(device.device, nvml.CLOCK_VIDEO)
if ret == nvml.SUCCESS {
y, err := lp.NewMessage("nv_max_video_clock", device.tags, device.meta, map[string]interface{}{"value": float64(maxMemClock)}, time.Now())
y, err := lp.NewMetric("nv_max_video_clock", device.tags, device.meta, float64(maxVideoClock), time.Now())
if err == nil {
y.AddMeta("unit", "MHz")
output <- y
@@ -571,7 +639,7 @@ func readMaxClocks(device NvidiaCollectorDevice, output chan lp.CCMessage) error
return nil
}
func readEccErrors(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readEccErrors(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
if !device.excludeMetrics["nv_ecc_uncorrected_error"] {
// Retrieves the total ECC error counts for the device.
//
@@ -602,7 +670,7 @@ func readEccErrors(device NvidiaCollectorDevice, output chan lp.CCMessage) error
return nil
}
func readPowerLimit(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readPowerLimit(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
if !device.excludeMetrics["nv_power_max_limit"] {
// Retrieves the power management limit associated with this device.
//
@@ -622,7 +690,7 @@ func readPowerLimit(device NvidiaCollectorDevice, output chan lp.CCMessage) erro
return nil
}
func readEncUtilization(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readEncUtilization(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
isMig, ret := nvml.DeviceIsMigDeviceHandle(device.device)
if ret != nvml.SUCCESS {
err := errors.New(nvml.ErrorString(ret))
@@ -649,7 +717,7 @@ func readEncUtilization(device NvidiaCollectorDevice, output chan lp.CCMessage)
return nil
}
func readDecUtilization(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readDecUtilization(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
isMig, ret := nvml.DeviceIsMigDeviceHandle(device.device)
if ret != nvml.SUCCESS {
err := errors.New(nvml.ErrorString(ret))
@@ -676,7 +744,7 @@ func readDecUtilization(device NvidiaCollectorDevice, output chan lp.CCMessage)
return nil
}
func readRemappedRows(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readRemappedRows(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
if !device.excludeMetrics["nv_remapped_rows_corrected"] ||
!device.excludeMetrics["nv_remapped_rows_uncorrected"] ||
!device.excludeMetrics["nv_remapped_rows_pending"] ||
@@ -705,7 +773,7 @@ func readRemappedRows(device NvidiaCollectorDevice, output chan lp.CCMessage) er
}
}
if !device.excludeMetrics["nv_remapped_rows_pending"] {
var p int = 0
p := 0
if pending {
p = 1
}
@@ -715,7 +783,7 @@ func readRemappedRows(device NvidiaCollectorDevice, output chan lp.CCMessage) er
}
}
if !device.excludeMetrics["nv_remapped_rows_failure"] {
var f int = 0
f := 0
if failure {
f = 1
}
@@ -729,7 +797,7 @@ func readRemappedRows(device NvidiaCollectorDevice, output chan lp.CCMessage) er
return nil
}
func readProcessCounts(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readProcessCounts(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
if !device.excludeMetrics["nv_compute_processes"] {
// Get information about processes with a compute context on a device
//
@@ -821,7 +889,7 @@ func readProcessCounts(device NvidiaCollectorDevice, output chan lp.CCMessage) e
return nil
}
func readViolationStats(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readViolationStats(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
var violTime nvml.ViolationTime
var ret nvml.Return
@@ -935,7 +1003,7 @@ func readViolationStats(device NvidiaCollectorDevice, output chan lp.CCMessage)
return nil
}
func readNVLinkStats(device NvidiaCollectorDevice, output chan lp.CCMessage) error {
func readNVLinkStats(device *NvidiaCollectorDevice, output chan lp.CCMessage) error {
// Retrieves the specified error counter value
// Please refer to \a nvmlNvLinkErrorCounter_t for error counters that are available
//
@@ -1070,7 +1138,7 @@ func (m *NvidiaCollector) Read(interval time.Duration, output chan lp.CCMessage)
return
}
readAll := func(device NvidiaCollectorDevice, output chan lp.CCMessage) {
readAll := func(device *NvidiaCollectorDevice, output chan lp.CCMessage) {
name, ret := nvml.DeviceGetName(device.device)
if ret != nvml.SUCCESS {
name = "NoName"
@@ -1110,6 +1178,11 @@ func (m *NvidiaCollector) Read(interval time.Duration, output chan lp.CCMessage)
cclog.ComponentDebug(m.name, "readPowerUsage for device", name, "failed")
}
err = readEnergyConsumption(device, output)
if err != nil {
cclog.ComponentDebug(m.name, "readEnergyConsumption for device", name, "failed")
}
err = readClocks(device, output)
if err != nil {
cclog.ComponentDebug(m.name, "readClocks for device", name, "failed")
@@ -1169,7 +1242,7 @@ func (m *NvidiaCollector) Read(interval time.Duration, output chan lp.CCMessage)
// Actual read loop over all attached Nvidia GPUs
for i := 0; i < m.num_gpus; i++ {
readAll(m.gpus[i], output)
readAll(&m.gpus[i], output)
// Iterate over all MIG devices if any
if m.config.ProcessMigDevices {
@@ -1207,9 +1280,7 @@ func (m *NvidiaCollector) Read(interval time.Duration, output chan lp.CCMessage)
meta: map[string]string{},
excludeMetrics: excludeMetrics,
}
for k, v := range m.gpus[i].tags {
migDevice.tags[k] = v
}
maps.Copy(migDevice.tags, m.gpus[i].tags)
migDevice.tags["stype"] = "mig"
if m.config.UseUuidForMigDevices {
uuid, ret := nvml.DeviceGetUUID(mdev)
@@ -1223,8 +1294,8 @@ func (m *NvidiaCollector) Read(interval time.Duration, output chan lp.CCMessage)
if ret == nvml.SUCCESS {
mname, ret := nvml.DeviceGetName(mdev)
if ret == nvml.SUCCESS {
x := strings.Replace(mname, name, "", -1)
x = strings.Replace(x, "MIG", "", -1)
x := strings.ReplaceAll(mname, name, "")
x = strings.ReplaceAll(x, "MIG", "")
x = strings.TrimSpace(x)
migDevice.tags["stype-id"] = x
}
@@ -1233,9 +1304,7 @@ func (m *NvidiaCollector) Read(interval time.Duration, output chan lp.CCMessage)
if _, ok := migDevice.tags["stype-id"]; !ok {
migDevice.tags["stype-id"] = fmt.Sprintf("%d", j)
}
for k, v := range m.gpus[i].meta {
migDevice.meta[k] = v
}
maps.Copy(migDevice.meta, m.gpus[i].meta)
if _, ok := migDevice.meta["uuid"]; ok && !m.config.UseUuidForMigDevices {
uuid, ret := nvml.DeviceGetUUID(mdev)
if ret == nvml.SUCCESS {
@@ -1243,7 +1312,7 @@ func (m *NvidiaCollector) Read(interval time.Duration, output chan lp.CCMessage)
}
}
readAll(migDevice, output)
readAll(&migDevice, output)
}
}
}
@@ -1251,7 +1320,9 @@ func (m *NvidiaCollector) Read(interval time.Duration, output chan lp.CCMessage)
func (m *NvidiaCollector) Close() {
if m.init {
nvml.Shutdown()
if ret := nvml.Shutdown(); ret != nvml.SUCCESS {
cclog.ComponentError(m.name, "nvml.Shutdown() not successful")
}
m.init = false
}
}

View File

@@ -1,3 +1,13 @@
<!--
---
title: "Nvidia NVML metric collector"
description: Collect metrics for Nvidia GPUs using the NVML
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/nvidia.md
---
-->
## `nvidia` collector
@@ -72,5 +82,8 @@ Metrics:
* `nv_nvlink_ecc_errors`
* `nv_nvlink_replay_errors`
* `nv_nvlink_recovery_errors`
* `nv_energy`
* `nv_energy_abs`
* `nv_average_power`
Some metrics add the additional sub type tag (`stype`) like the `nv_nvlink_*` metrics set `stype=nvlink,stype-id=<link_number>`.
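The three energy metrics are all derived from `nvml.DeviceGetTotalEnergyConsumption()` and are only emitted if power management mode is enabled on the device: `nv_energy_abs` is the total energy in Joules since the driver was last loaded, `nv_energy` is the difference to the previous read cycle, and `nv_average_power` is that difference divided by the elapsed time between reads.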

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -9,8 +16,8 @@ import (
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
// running average power limit (RAPL) monitoring attributes for a zone
@@ -47,9 +54,10 @@ func (m *RAPLCollector) Init(config json.RawMessage) error {
return nil
}
var err error = nil
m.name = "RAPLCollector"
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.parallel = true
m.meta = map[string]string{
"source": m.name,
@@ -59,7 +67,7 @@ func (m *RAPLCollector) Init(config json.RawMessage) error {
// Read in the JSON configuration
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
err := json.Unmarshal(config, &m.config)
if err != nil {
cclog.ComponentError(m.name, "Error reading config:", err.Error())
return err

View File

@@ -1,3 +1,14 @@
<!--
---
title: RAPL metric collector
description: Collect energy data through the RAPL sysfs interface
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/rapl.md
---
-->
## `rapl` collector
This collector reads running average power limit (RAPL) monitoring attributes to compute average power consumption metrics. See <https://www.kernel.org/doc/html/latest/power/powercap/powercap.html#monitoring-attributes>.
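Conceptually, an average power value follows from two samples of a zone's cumulative `energy_uj` counter divided by the elapsed time. The following sketch illustrates this on one powercap zone; the zone path `intel-rapl:0` is only an example and the code is not the collector's implementation.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// readEnergyUJ returns the cumulative energy counter of a powercap zone in microjoules.
func readEnergyUJ(zone string) (uint64, error) {
	data, err := os.ReadFile(zone + "/energy_uj")
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
}

func main() {
	zone := "/sys/class/powercap/intel-rapl:0" // example zone (package 0)

	e1, err := readEnergyUJ(zone)
	if err != nil {
		fmt.Println("reading energy_uj failed:", err)
		return
	}
	t1 := time.Now()

	time.Sleep(time.Second)

	e2, err := readEnergyUJ(zone)
	if err != nil {
		fmt.Println("reading energy_uj failed:", err)
		return
	}

	// The counter wraps at max_energy_range_uj; a real reader must handle e2 < e1.
	power := float64(e2-e1) / 1e6 / time.Since(t1).Seconds()
	fmt.Printf("average power: %.2f W\n", power)
}
```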

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -6,8 +13,8 @@ import (
"fmt"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
"github.com/ClusterCockpit/go-rocm-smi/pkg/rocm_smi"
)
@@ -45,7 +52,9 @@ func (m *RocmSmiCollector) Init(config json.RawMessage) error {
// Always set the name early in Init() to use it in cclog.Component* functions
m.name = "RocmSmiCollector"
// This is for later use, also call it early
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
// Define meta information sent with each metric
// (Can also be dynamic or this is the basic set with extension through AddMeta())
//m.meta = map[string]string{"source": m.name, "group": "AMD"}

View File

@@ -1,3 +1,14 @@
<!--
---
title: "ROCm SMI metric collector"
description: Collect metrics for AMD GPUs using the SMI library
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/rocmsmi.md
---
-->
## `rocm_smi` collector

View File

@@ -1,11 +1,19 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"encoding/json"
"fmt"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
// These are the fields we read from the JSON configuration
@@ -34,7 +42,9 @@ func (m *SampleCollector) Init(config json.RawMessage) error {
// Always set the name early in Init() to use it in cclog.Component* functions
m.name = "SampleCollector"
// This is for later use, also call it early
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
// Tell whether the collector should be run in parallel with others (reading files, ...)
// or it should be run serially, mostly for collectors actually doing measurements
// because they should not measure the execution of the other collectors

View File

@@ -1,12 +1,20 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"encoding/json"
"fmt"
"sync"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
// These are the fields we read from the JSON configuration
@@ -33,7 +41,9 @@ func (m *SampleTimerCollector) Init(name string, config json.RawMessage) error {
// Always set the name early in Init() to use it in cclog.Component* functions
m.name = "SampleTimerCollector"
// This is for later use, also call it early
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
// Define meta information sent with each metric
// (Can also be dynamic or this is the basic set with extension through AddMeta())
m.meta = map[string]string{"source": m.name, "group": "SAMPLE"}

View File

@@ -1,17 +1,23 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"bufio"
"encoding/json"
"fmt"
"math"
"os"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
const SCHEDSTATFILE = `/proc/schedstat`
@@ -40,37 +46,37 @@ type SchedstatCollector struct {
// Called once by the collector manager
// All tags, meta data tags and metrics that do not change over the runtime should be set here
func (m *SchedstatCollector) Init(config json.RawMessage) error {
var err error = nil
// Always set the name early in Init() to use it in cclog.Component* functions
m.name = "SchedstatCollector"
// This is for later use, also call it early
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
// Tell whether the collector should be run in parallel with others (reading files, ...)
// or it should be run serially, mostly for collectors actually doing measurements
// because they should not measure the execution of the other collectors
m.parallel = true
// Define meta information sent with each metric
// (Can also be dynamic or this is the basic set with extension through AddMeta())
m.meta = map[string]string{"source": m.name, "group": "SCHEDSTAT"}
m.meta = map[string]string{
"source": m.name,
"group": "SCHEDSTAT",
}
// Read in the JSON configuration
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
cclog.ComponentError(m.name, "Error reading config:", err.Error())
return err
if err := json.Unmarshal(config, &m.config); err != nil {
return fmt.Errorf("%s Init(): Error reading config: %w", m.name, err)
}
}
// Check input file
file, err := os.Open(string(SCHEDSTATFILE))
file, err := os.Open(SCHEDSTATFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
return fmt.Errorf("%s Init(): Failed opening scheduler statistics file \"%s\": %w", m.name, SCHEDSTATFILE, err)
}
defer file.Close()
// Pre-generate tags for all CPUs
num_cpus := 0
m.cputags = make(map[string]map[string]string)
m.olddata = make(map[string]map[string]int64)
scanner := bufio.NewScanner(file)
@@ -82,11 +88,19 @@ func (m *SchedstatCollector) Init(config json.RawMessage) error {
cpu, _ := strconv.Atoi(cpustr)
running, _ := strconv.ParseInt(linefields[7], 10, 64)
waiting, _ := strconv.ParseInt(linefields[8], 10, 64)
m.cputags[linefields[0]] = map[string]string{"type": "hwthread", "type-id": fmt.Sprintf("%d", cpu)}
m.olddata[linefields[0]] = map[string]int64{"running": running, "waiting": waiting}
num_cpus++
m.cputags[linefields[0]] = map[string]string{
"type": "hwthread",
"type-id": fmt.Sprintf("%d", cpu),
}
m.olddata[linefields[0]] = map[string]int64{
"running": running,
"waiting": waiting,
}
}
}
if err := file.Close(); err != nil {
return fmt.Errorf("%s Init(): Failed closing scheduler statistics file \"%s\": %w", m.name, SCHEDSTATFILE, err)
}
// Save current timestamp
m.lastTimestamp = time.Now()
@@ -102,8 +116,8 @@ func (m *SchedstatCollector) ParseProcLine(linefields []string, tags map[string]
diff_running := running - m.olddata[linefields[0]]["running"]
diff_waiting := waiting - m.olddata[linefields[0]]["waiting"]
var l_running float64 = float64(diff_running) / tsdelta.Seconds() / (math.Pow(1000, 3))
var l_waiting float64 = float64(diff_waiting) / tsdelta.Seconds() / (math.Pow(1000, 3))
l_running := float64(diff_running) / tsdelta.Seconds() / 1000_000_000
l_waiting := float64(diff_waiting) / tsdelta.Seconds() / 1000_000_000
m.olddata[linefields[0]]["running"] = running
m.olddata[linefields[0]]["waiting"] = waiting
@@ -127,11 +141,19 @@ func (m *SchedstatCollector) Read(interval time.Duration, output chan lp.CCMessa
now := time.Now()
tsdelta := now.Sub(m.lastTimestamp)
file, err := os.Open(string(SCHEDSTATFILE))
file, err := os.Open(SCHEDSTATFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to open file '%s': %v", SCHEDSTATFILE, err))
}
defer file.Close()
defer func() {
if err := file.Close(); err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to close file '%s': %v", SCHEDSTATFILE, err))
}
}()
scanner := bufio.NewScanner(file)
for scanner.Scan() {

View File

@@ -1,3 +1,13 @@
<!--
---
title: SchedStat Metric collector
description: Collect metrics from `/proc/schedstat`
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/schedstat.md
---
-->
## `schedstat` collector
```json
@@ -8,4 +18,4 @@
The `schedstat` collector reads data from `/proc/schedstat` and calculates a load value per hwthread. This can be useful to detect bad CPU pinning on shared nodes.
Metric:
* `cpu_load_core`
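The load is derived from the cumulative running and waiting times of each `cpuN` line in `/proc/schedstat` (fields 8 and 9, in nanoseconds): the collector takes the difference between two reads and divides it by the measurement interval and by 10^9, so a hwthread that was busy for the whole interval reports a value of roughly 1.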

View File

@@ -1,13 +1,21 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"encoding/json"
"fmt"
"runtime"
"syscall"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
type SelfCollectorConfig struct {
@@ -27,7 +35,9 @@ type SelfCollector struct {
func (m *SelfCollector) Init(config json.RawMessage) error {
var err error = nil
m.name = "SelfCollector"
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.parallel = true
m.meta = map[string]string{"source": m.name, "group": "Self"}
m.tags = map[string]string{"type": "node"}

View File

@@ -1,3 +1,14 @@
<!--
---
title: Self-monitoring metric collector
description: Collect metrics from the execution of cc-metric-collector itself
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/self.md
---
-->
## `self` collector
```json

View File

@@ -0,0 +1,350 @@
package collectors
import (
"encoding/json"
"fmt"
"os"
"os/exec"
"os/user"
"path/filepath"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
type SlurmJobData struct {
MemoryUsage float64
MaxMemoryUsage float64
LimitMemoryUsage float64
CpuUsageUser float64
CpuUsageSys float64
CpuSet []int
}
type SlurmCgroupsConfig struct {
CgroupBase string `json:"cgroup_base"`
ExcludeMetrics []string `json:"exclude_metrics,omitempty"`
UseSudo bool `json:"use_sudo,omitempty"`
}
type SlurmCgroupCollector struct {
metricCollector
config SlurmCgroupsConfig
meta map[string]string
tags map[string]string
allCPUs []int
cpuUsed map[int]bool
cgroupBase string
excludeMetrics map[string]struct{}
useSudo bool
}
const defaultCgroupBase = "/sys/fs/cgroup/system.slice/slurmstepd.scope"
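// ParseCPUs expands a cpuset list string as found in cpuset.cpus
// (e.g. "0-3,8,10-11") into the individual CPU IDs.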
func ParseCPUs(cpuset string) ([]int, error) {
var result []int
if cpuset == "" {
return result, nil
}
for r := range strings.SplitSeq(cpuset, ",") {
if strings.Contains(r, "-") {
parts := strings.Split(r, "-")
if len(parts) != 2 {
return nil, fmt.Errorf("invalid CPU range: %s", r)
}
start, err := strconv.Atoi(strings.TrimSpace(parts[0]))
if err != nil {
return nil, fmt.Errorf("invalid CPU range start: %s", parts[0])
}
end, err := strconv.Atoi(strings.TrimSpace(parts[1]))
if err != nil {
return nil, fmt.Errorf("invalid CPU range end: %s", parts[1])
}
for i := start; i <= end; i++ {
result = append(result, i)
}
} else {
cpu, err := strconv.Atoi(strings.TrimSpace(r))
if err != nil {
return nil, fmt.Errorf("invalid CPU ID: %s", r)
}
result = append(result, cpu)
}
}
return result, nil
}
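// GetAllCPUs returns the IDs of all online CPUs read from /sys/devices/system/cpu/online.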
func GetAllCPUs() ([]int, error) {
data, err := os.ReadFile("/sys/devices/system/cpu/online")
if err != nil {
return nil, fmt.Errorf("failed to read /sys/devices/system/cpu/online: %v", err)
}
return ParseCPUs(strings.TrimSpace(string(data)))
}
func (m *SlurmCgroupCollector) isExcluded(metric string) bool {
_, found := m.excludeMetrics[metric]
return found
}
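// readFile reads the file at path directly, or via "sudo cat" when use_sudo is enabled.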
func (m *SlurmCgroupCollector) readFile(path string) ([]byte, error) {
if m.useSudo {
cmd := exec.Command("sudo", "cat", path)
return cmd.Output()
}
return os.ReadFile(path)
}
func (m *SlurmCgroupCollector) Init(config json.RawMessage) error {
var err error
m.name = "SlurmCgroupCollector"
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
m.parallel = true
m.meta = map[string]string{"source": m.name, "group": "SLURM"}
m.tags = map[string]string{"type": "hwthread"}
m.cpuUsed = make(map[int]bool)
m.cgroupBase = defaultCgroupBase
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
cclog.ComponentError(m.name, "Error reading config:", err.Error())
return err
}
m.excludeMetrics = make(map[string]struct{})
for _, metric := range m.config.ExcludeMetrics {
m.excludeMetrics[metric] = struct{}{}
}
if m.config.CgroupBase != "" {
m.cgroupBase = m.config.CgroupBase
}
}
m.useSudo = m.config.UseSudo
if !m.useSudo {
user, err := user.Current()
if err != nil {
cclog.ComponentError(m.name, "Failed to get current user:", err.Error())
return err
}
if user.Uid != "0" {
cclog.ComponentError(m.name, "Reading cgroup files requires root privileges (or enable use_sudo in config)")
return fmt.Errorf("not root")
}
}
m.allCPUs, err = GetAllCPUs()
if err != nil {
cclog.ComponentError(m.name, "Error reading online CPUs:", err.Error())
return err
}
m.init = true
return nil
}
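// ReadJobData collects memory usage (memory.current, memory.peak, memory.max),
// CPU usage percentages (cpu.stat) and the assigned CPU set (cpuset.cpus)
// from the cgroup directory of a single job.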
func (m *SlurmCgroupCollector) ReadJobData(jobdir string) (SlurmJobData, error) {
jobdata := SlurmJobData{
MemoryUsage: 0,
MaxMemoryUsage: 0,
LimitMemoryUsage: 0,
CpuUsageUser: 0,
CpuUsageSys: 0,
CpuSet: []int{},
}
cg := func(f string) string { return filepath.Join(m.cgroupBase, jobdir, f) }
memUsage, err := m.readFile(cg("memory.current"))
if err == nil {
x, err := strconv.ParseFloat(strings.TrimSpace(string(memUsage)), 64)
if err == nil {
jobdata.MemoryUsage = x
}
}
maxMem, err := m.readFile(cg("memory.peak"))
if err == nil {
x, err := strconv.ParseFloat(strings.TrimSpace(string(maxMem)), 64)
if err == nil {
jobdata.MaxMemoryUsage = x
}
}
limitMem, err := m.readFile(cg("memory.max"))
if err == nil {
x, err := strconv.ParseFloat(strings.TrimSpace(string(limitMem)), 64)
if err == nil {
jobdata.LimitMemoryUsage = x
}
}
cpuStat, err := m.readFile(cg("cpu.stat"))
if err == nil {
lines := strings.Split(strings.TrimSpace(string(cpuStat)), "\n")
var usageUsec, userUsec, systemUsec float64
for _, line := range lines {
fields := strings.Fields(line)
if len(fields) < 2 {
continue
}
value, err := strconv.ParseFloat(fields[1], 64)
if err != nil {
continue
}
switch fields[0] {
case "usage_usec":
usageUsec = value
case "user_usec":
userUsec = value
case "system_usec":
systemUsec = value
}
}
if usageUsec > 0 {
jobdata.CpuUsageUser = (userUsec * 100 / usageUsec)
jobdata.CpuUsageSys = (systemUsec * 100 / usageUsec)
}
}
cpuSet, err := m.readFile(cg("cpuset.cpus"))
if err == nil {
cpus, err := ParseCPUs(strings.TrimSpace(string(cpuSet)))
if err == nil {
jobdata.CpuSet = cpus
}
}
return jobdata, nil
}
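// Read iterates over all job_* cgroup directories, distributes each job's totals
// evenly across the hwthreads in its cpuset and emits the metrics per hwthread.
// Hwthreads not assigned to any job report zero values.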
func (m *SlurmCgroupCollector) Read(interval time.Duration, output chan lp.CCMessage) {
timestamp := time.Now()
for k := range m.cpuUsed {
delete(m.cpuUsed, k)
}
globPattern := filepath.Join(m.cgroupBase, "job_*")
jobDirs, err := filepath.Glob(globPattern)
if err != nil {
cclog.ComponentError(m.name, "Error globbing job directories:", err.Error())
return
}
for _, jdir := range jobDirs {
jKey := filepath.Base(jdir)
jobdata, err := m.ReadJobData(jKey)
if err != nil {
cclog.ComponentError(m.name, "Error reading job data for", jKey, ":", err.Error())
continue
}
if len(jobdata.CpuSet) > 0 {
coreCount := float64(len(jobdata.CpuSet))
for _, cpu := range jobdata.CpuSet {
coreTags := map[string]string{
"type": "hwthread",
"type-id": fmt.Sprintf("%d", cpu),
}
if coreCount > 0 && !m.isExcluded("job_mem_used") {
memPerCore := jobdata.MemoryUsage / coreCount
if y, err := lp.NewMessage("job_mem_used", coreTags, m.meta, map[string]interface{}{"value": memPerCore}, timestamp); err == nil {
y.AddMeta("unit", "Bytes")
output <- y
}
}
if coreCount > 0 && !m.isExcluded("job_max_mem_used") {
maxMemPerCore := jobdata.MaxMemoryUsage / coreCount
if y, err := lp.NewMessage("job_max_mem_used", coreTags, m.meta, map[string]interface{}{"value": maxMemPerCore}, timestamp); err == nil {
y.AddMeta("unit", "Bytes")
output <- y
}
}
if coreCount > 0 && !m.isExcluded("job_mem_limit") {
limitPerCore := jobdata.LimitMemoryUsage / coreCount
if y, err := lp.NewMessage("job_mem_limit", coreTags, m.meta, map[string]interface{}{"value": limitPerCore}, timestamp); err == nil {
y.AddMeta("unit", "Bytes")
output <- y
}
}
if coreCount > 0 && !m.isExcluded("job_user_cpu") {
cpuUserPerCore := jobdata.CpuUsageUser / coreCount
if y, err := lp.NewMessage("job_user_cpu", coreTags, m.meta, map[string]interface{}{"value": cpuUserPerCore}, timestamp); err == nil {
y.AddMeta("unit", "%")
output <- y
}
}
if coreCount > 0 && !m.isExcluded("job_sys_cpu") {
cpuSysPerCore := jobdata.CpuUsageSys / coreCount
if y, err := lp.NewMessage("job_sys_cpu", coreTags, m.meta, map[string]interface{}{"value": cpuSysPerCore}, timestamp); err == nil {
y.AddMeta("unit", "%")
output <- y
}
}
m.cpuUsed[cpu] = true
}
}
}
for _, cpu := range m.allCPUs {
if !m.cpuUsed[cpu] {
coreTags := map[string]string{
"type": "hwthread",
"type-id": fmt.Sprintf("%d", cpu),
}
if !m.isExcluded("job_mem_used") {
if y, err := lp.NewMessage("job_mem_used", coreTags, m.meta, map[string]interface{}{"value": 0}, timestamp); err == nil {
y.AddMeta("unit", "Bytes")
output <- y
}
}
if !m.isExcluded("job_max_mem_used") {
if y, err := lp.NewMessage("job_max_mem_used", coreTags, m.meta, map[string]interface{}{"value": 0}, timestamp); err == nil {
y.AddMeta("unit", "Bytes")
output <- y
}
}
if !m.isExcluded("job_mem_limit") {
if y, err := lp.NewMessage("job_mem_limit", coreTags, m.meta, map[string]interface{}{"value": 0}, timestamp); err == nil {
y.AddMeta("unit", "Bytes")
output <- y
}
}
if !m.isExcluded("job_user_cpu") {
if y, err := lp.NewMessage("job_user_cpu", coreTags, m.meta, map[string]interface{}{"value": 0}, timestamp); err == nil {
y.AddMeta("unit", "%")
output <- y
}
}
if !m.isExcluded("job_sys_cpu") {
if y, err := lp.NewMessage("job_sys_cpu", coreTags, m.meta, map[string]interface{}{"value": 0}, timestamp); err == nil {
y.AddMeta("unit", "%")
output <- y
}
}
}
}
}
func (m *SlurmCgroupCollector) Close() {
m.init = false
}

View File

@@ -0,0 +1,50 @@
<!--
---
title: Slurm cgroup metric collector
description: Collect per-core memory and CPU usage for SLURM jobs from cgroup v2
categories: [cc-metric-collector]
tags: ['Admin']
weight: 3
hugo_path: docs/reference/cc-metric-collector/collectors/slurm_cgroup.md
---
-->
## `slurm_cgroup` collector
The `slurm_cgroup` collector reads job-specific resource metrics from the cgroup v2 filesystem and provides **hwthread** metrics for memory and CPU usage of running SLURM jobs.
### Example configuration
```json
"slurm_cgroup": {
"cgroup_base": "/sys/fs/cgroup/system.slice/slurmstepd.scope",
"exclude_metrics": [
"job_sys_cpu",
"job_mem_limit"
],
"use_sudo": false
}
```
* The `cgroup_base` parameter (optional) can be set to specify the root path to SLURM job cgroups. The default is `/sys/fs/cgroup/system.slice/slurmstepd.scope`.
* The `exclude_metrics` array can be used to suppress individual metrics from being sent to the sink.
* The cgroup metrics are only readable by the root user. If password-less sudo is configured, you can enable it with the `use_sudo` option.
### Reported metrics
All metrics are reported **per hardware thread**: the job-wide cgroup counters are divided evenly across the hardware threads assigned to the job (see the sketch after the tag list below).
* `job_mem_used` (`unit=Bytes`): Current memory usage of the job
* `job_max_mem_used` (`unit=Bytes`): Peak memory usage
* `job_mem_limit` (`unit=Bytes`): Cgroup memory limit
* `job_user_cpu` (`unit=%`): User CPU utilization percentage
* `job_sys_cpu` (`unit=%`): System CPU utilization percentage
Each metric has tags:
* `type=hwthread`
* `type-id=<core_id>`
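For illustration, a minimal sketch of the per-hwthread calculation, assuming a job's `memory.current` value and its cpuset have already been read and parsed from the cgroup filesystem (the hard-coded values below are placeholders, not output of the collector):
```go
package main

import "fmt"

func main() {
	// Placeholder values as they would be parsed from the job's cgroup,
	// e.g. .../job_<id>/memory.current and .../job_<id>/cpuset.cpus.effective.
	memoryCurrent := 8.0 * 1024 * 1024 * 1024 // bytes used by the whole job
	cpuSet := []int{0, 1, 64, 65}             // hardware threads assigned to the job

	// Each hwthread gets an equal share of the job-wide counter.
	perThread := memoryCurrent / float64(len(cpuSet))
	for _, hwthread := range cpuSet {
		// One job_mem_used message per hwthread, tagged type=hwthread, type-id=<core_id>.
		fmt.Printf("job_mem_used{type=hwthread,type-id=%d} = %.0f Bytes\n", hwthread, perThread)
	}
}
```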
### Limitations
* **cgroups v2 required:** This collector only supports systems running with cgroups v2 (unified hierarchy).
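A quick way to verify this on a node is to check for the `cgroup.controllers` file at the cgroup root, which only exists on the unified (v2) hierarchy; a minimal sketch:
```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// On cgroups v2 (unified hierarchy) the cgroup root exposes a
	// cgroup.controllers file; on a pure v1 setup it does not exist.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err != nil {
		fmt.Println("cgroups v2 not available:", err)
		return
	}
	fmt.Println("cgroups v2 detected")
}
```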

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
@@ -9,8 +16,8 @@ import (
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
// See: https://www.kernel.org/doc/html/latest/hwmon/sysfs-interface.html
@@ -51,7 +58,9 @@ func (m *TempCollector) Init(config json.RawMessage) error {
m.name = "TempCollector"
m.parallel = true
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
if err != nil {
@@ -110,7 +119,7 @@ func (m *TempCollector) Init(config json.RawMessage) error {
sensor.metricName = sensor.label
}
sensor.metricName = strings.ToLower(sensor.metricName)
sensor.metricName = strings.Replace(sensor.metricName, " ", "_", -1)
sensor.metricName = strings.ReplaceAll(sensor.metricName, " ", "_")
// Add temperature prefix, if required
if !strings.Contains(sensor.metricName, "temp") {
sensor.metricName = "temp_" + sensor.metricName

View File

@@ -1,3 +1,14 @@
<!--
---
title: Temperature metric collector
description: Collect thermal metrics from `/sys/class/hwmon/*`
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/temp.md
---
-->
## `tempstat` collector

View File

@@ -1,15 +1,21 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package collectors
import (
"encoding/json"
"errors"
"fmt"
"log"
"os/exec"
"strings"
"time"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
)
const MAX_NUM_PROCS = 10
@@ -29,12 +35,17 @@ func (m *TopProcsCollector) Init(config json.RawMessage) error {
var err error
m.name = "TopProcsCollector"
m.parallel = true
m.tags = map[string]string{"type": "node"}
m.meta = map[string]string{"source": m.name, "group": "TopProcs"}
m.tags = map[string]string{
"type": "node",
}
m.meta = map[string]string{
"source": m.name,
"group": "TopProcs",
}
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
return err
return fmt.Errorf("%s Init(): json.Unmarshal() failed: %w", m.name, err)
}
} else {
m.config.Num_procs = int(DEFAULT_NUM_PROCS)
@@ -42,12 +53,13 @@ func (m *TopProcsCollector) Init(config json.RawMessage) error {
if m.config.Num_procs <= 0 || m.config.Num_procs > MAX_NUM_PROCS {
return fmt.Errorf("num_procs option must be set in 'topprocs' config (range: 1-%d)", MAX_NUM_PROCS)
}
m.setup()
if err := m.setup(); err != nil {
return fmt.Errorf("%s Init(): setup() call failed: %w", m.name, err)
}
command := exec.Command("ps", "-Ao", "comm", "--sort=-pcpu")
command.Wait()
_, err = command.Output()
if err != nil {
return errors.New("failed to execute command")
return fmt.Errorf("%s Init(): failed to get output from command: %w", m.name, err)
}
m.init = true
return nil
@@ -58,10 +70,11 @@ func (m *TopProcsCollector) Read(interval time.Duration, output chan lp.CCMessag
return
}
command := exec.Command("ps", "-Ao", "comm", "--sort=-pcpu")
command.Wait()
stdout, err := command.Output()
if err != nil {
log.Print(m.name, err)
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to read output from command \"%s\": %v", command.String(), err))
return
}

View File

@@ -1,3 +1,15 @@
<!--
---
title: TopProcs collector
description: Collect infos about most CPU-consuming processes
categories: [cc-metric-collector]
tags: ['Admin']
weight: 2
hugo_path: docs/reference/cc-metric-collector/collectors/topprocs.md
---
-->
## `topprocs` collector

View File

@@ -4,7 +4,7 @@ The configuration of the CC metric collector consists of five configuration file
## Global configuration
The global file contains the paths to the other four files and some global options.
The global file contains the paths to the other four files and some global options. You can find examples in `example_configs`.
```json
{

View File

@@ -1,6 +1,19 @@
{
"cpufreq": {},
"cpufreq_cpuinfo": {},
"cpustat": {
"exclude_metrics": [
"cpu_idle"
]
},
"diskstat": {
"exclude_metrics": [
"disk_total"
],
"exclude_mounts": [
"slurm-tmpfs"
]
},
"gpfs": {
"exclude_filesystem": [
"test_fs"
@@ -21,6 +34,8 @@
},
"numastats": {},
"nvidia": {},
"schedstat": {
},
"tempstat": {
"report_max_temperature": true,
"report_critical_temperature": true,
@@ -38,4 +53,4 @@
"topprocs": {
"num_procs": 5
}
}
}

57
go.mod
View File

@@ -1,48 +1,45 @@
module github.com/ClusterCockpit/cc-metric-collector
go 1.23.4
toolchain go1.23.7
go 1.24.0
require (
github.com/ClusterCockpit/cc-lib v0.1.1
github.com/ClusterCockpit/cc-lib/v2 v2.1.0
github.com/ClusterCockpit/go-rocm-smi v0.3.0
github.com/NVIDIA/go-nvml v0.12.0-2
github.com/PaesslerAG/gval v1.2.2
github.com/fsnotify/fsnotify v1.7.0
github.com/gorilla/mux v1.8.1
github.com/influxdata/influxdb-client-go/v2 v2.14.0
github.com/NVIDIA/go-nvml v0.13.0-1
github.com/PaesslerAG/gval v1.2.4
github.com/fsnotify/fsnotify v1.9.0
github.com/influxdata/line-protocol v0.0.0-20210922203350-b1ad95c89adf
github.com/influxdata/line-protocol/v2 v2.2.1
github.com/nats-io/nats.go v1.39.0
github.com/prometheus/client_golang v1.20.5
github.com/stmcginnis/gofish v0.15.0
github.com/tklauser/go-sysconf v0.3.13
github.com/tklauser/go-sysconf v0.3.16
golang.design/x/thread v0.0.0-20210122121316-335e9adffdf1
golang.org/x/exp v0.0.0-20250215185904-eff6e970281f
golang.org/x/sys v0.30.0
golang.org/x/sys v0.40.0
)
require (
github.com/ClusterCockpit/cc-backend v1.4.2 // indirect
github.com/ClusterCockpit/cc-units v0.4.0 // indirect
github.com/apapsch/go-jsonmerge/v2 v2.0.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/expr-lang/expr v1.17.0 // indirect
github.com/expr-lang/expr v1.17.7 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/gorilla/mux v1.8.1 // indirect
github.com/influxdata/influxdb-client-go/v2 v2.14.0 // indirect
github.com/influxdata/line-protocol/v2 v2.2.1 // indirect
github.com/klauspost/compress v1.18.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nats-io/nkeys v0.4.9 // indirect
github.com/nats-io/nats.go v1.48.0 // indirect
github.com/nats-io/nkeys v0.4.12 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
github.com/oapi-codegen/runtime v1.1.1 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.55.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/oapi-codegen/runtime v1.1.2 // indirect
github.com/prometheus/client_golang v1.23.2 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.67.5 // indirect
github.com/prometheus/procfs v0.19.2 // indirect
github.com/santhosh-tekuri/jsonschema/v5 v5.3.1 // indirect
github.com/shopspring/decimal v1.3.1 // indirect
github.com/tklauser/numcpus v0.7.0 // indirect
golang.org/x/crypto v0.35.0 // indirect
golang.org/x/net v0.36.0 // indirect
google.golang.org/protobuf v1.35.2 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/stmcginnis/gofish v0.20.0 // indirect
github.com/tklauser/numcpus v0.11.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
golang.org/x/crypto v0.47.0 // indirect
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 // indirect
golang.org/x/net v0.49.0 // indirect
google.golang.org/protobuf v1.36.11 // indirect
)

133
go.sum
View File

@@ -1,21 +1,17 @@
github.com/ClusterCockpit/cc-backend v1.4.2 h1:kTOzqkh9N0564N9nqQThnSs7TAfg8RLgvSm00e5HtIc=
github.com/ClusterCockpit/cc-backend v1.4.2/go.mod h1:g8TNHXe4AXej26snu2//jO3mUF980elT93iV/k11O/c=
github.com/ClusterCockpit/cc-lib v0.1.0-beta.1 h1:dz9j0g2cod8+SMDjuoIY6ISpiHHeekhX6yQaeiwiwJw=
github.com/ClusterCockpit/cc-lib v0.1.0-beta.1/go.mod h1:kXMskla1i5ZSfXW0vVRIHgGeXMU5zu2PzYOYnUaOr80=
github.com/ClusterCockpit/cc-lib v0.1.1 h1:AXZWYUzgTaE/WdxLNSWPR7FJoA5WlzvYZxw4gIw3gNw=
github.com/ClusterCockpit/cc-lib v0.1.1/go.mod h1:SHKcWW/+kN+pcofAtHJFxvmx1FV0VIJuQv5PuT0HDcc=
github.com/ClusterCockpit/cc-units v0.4.0 h1:zP5DOu99GmErW0tCDf0gcLrlWt42RQ9dpoONEOh4cI0=
github.com/ClusterCockpit/cc-units v0.4.0/go.mod h1:3S3PAhAayS3pbgcT4q9Vn9VJw22Op51X0YimtG77zBw=
github.com/ClusterCockpit/cc-lib/v2 v2.1.0 h1:B6l6h0IjfEuY9DU6aVM3fSsj24lQ1eudXK9QTKmJjqg=
github.com/ClusterCockpit/cc-lib/v2 v2.1.0/go.mod h1:JuxMAuEOaLLNEnnL9U3ejha8kMvsSatLdKPZEgJw6iw=
github.com/ClusterCockpit/go-rocm-smi v0.3.0 h1:1qZnSpG7/NyLtc7AjqnUL9Jb8xtqG1nMVgp69rJfaR8=
github.com/ClusterCockpit/go-rocm-smi v0.3.0/go.mod h1:+I3UMeX3OlizXDf1WpGD43W4KGZZGVSGmny6rTeOnWA=
github.com/NVIDIA/go-nvml v0.11.6-0/go.mod h1:hy7HYeQy335x6nEss0Ne3PYqleRa6Ct+VKD9RQ4nyFs=
github.com/NVIDIA/go-nvml v0.12.0-2 h1:Sg239yy7jmopu/cuvYauoMj9fOpcGMngxVxxS1EBXeY=
github.com/NVIDIA/go-nvml v0.12.0-2/go.mod h1:7ruy85eOM73muOc/I37euONSwEyFqZsv5ED9AogD4G0=
github.com/PaesslerAG/gval v1.2.2 h1:Y7iBzhgE09IGTt5QgGQ2IdaYYYOU134YGHBThD+wm9E=
github.com/PaesslerAG/gval v1.2.2/go.mod h1:XRFLwvmkTEdYziLdaCeCa5ImcGVrfQbeNUbVR+C6xac=
github.com/NVIDIA/go-nvml v0.13.0-1 h1:OLX8Jq3dONuPOQPC7rndB6+iDmDakw0XTYgzMxObkEw=
github.com/NVIDIA/go-nvml v0.13.0-1/go.mod h1:+KNA7c7gIBH7SKSJ1ntlwkfN80zdx8ovl4hrK3LmPt4=
github.com/PaesslerAG/gval v1.2.4 h1:rhX7MpjJlcxYwL2eTTYIOBUyEKZ+A96T9vQySWkVUiU=
github.com/PaesslerAG/gval v1.2.4/go.mod h1:XRFLwvmkTEdYziLdaCeCa5ImcGVrfQbeNUbVR+C6xac=
github.com/PaesslerAG/jsonpath v0.1.0 h1:gADYeifvlqK3R3i2cR5B4DGgxLXIPb3TRTH1mGi0jPI=
github.com/PaesslerAG/jsonpath v0.1.0/go.mod h1:4BzmtoM/PI8fPO4aQGIusjGxGir2BzcV0grWtFzq1Y8=
github.com/RaveNoX/go-jsoncommentstrip v1.0.0/go.mod h1:78ihd09MekBnJnxpICcwzCMzGrKSKYe4AqU6PDYYpjk=
github.com/antithesishq/antithesis-sdk-go v0.5.0-default-no-op h1:Ucf+QxEKMbPogRO5guBNe5cgd9uZgfoJLOYs8WWhtjM=
github.com/antithesishq/antithesis-sdk-go v0.5.0-default-no-op/go.mod h1:IUpT2DPAKh6i/YhSbt6Gl3v2yvUZjmKncl7U91fup7E=
github.com/apapsch/go-jsonmerge/v2 v2.0.0 h1:axGnT1gRIfimI7gJifB699GoE/oq+F2MU7Dml6nw9rQ=
github.com/apapsch/go-jsonmerge/v2 v2.0.0/go.mod h1:lvDnEdqiQrp0O42VQGgmlKpxL1AP2+08jFMw88y4klk=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
@@ -27,20 +23,20 @@ github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ3
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/expr-lang/expr v1.16.9 h1:WUAzmR0JNI9JCiF0/ewwHB1gmcGw5wW7nWt8gc6PpCI=
github.com/expr-lang/expr v1.16.9/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
github.com/expr-lang/expr v1.17.0 h1:+vpszOyzKLQXC9VF+wA8cVA0tlA984/Wabc/1hF9Whg=
github.com/expr-lang/expr v1.17.0/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
github.com/expr-lang/expr v1.17.7 h1:Q0xY/e/2aCIp8g9s/LGvMDCC5PxYlvHgDZRQ4y16JX8=
github.com/expr-lang/expr v1.17.7/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
github.com/frankban/quicktest v1.11.0/go.mod h1:K+q6oSqb0W0Ininfk863uOk1lMy69l/P6txr3mVT54s=
github.com/frankban/quicktest v1.11.2/go.mod h1:K+q6oSqb0W0Ininfk863uOk1lMy69l/P6txr3mVT54s=
github.com/frankban/quicktest v1.13.0 h1:yNZif1OkDfNoDfb9zZa9aXIpejNR4F23Wely0c+Qdqk=
github.com/frankban/quicktest v1.13.0/go.mod h1:qLE0fzW0VuyUAJgPU19zByoIr0HtCHN/r/VLSOOIySU=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/go-tpm v0.9.7 h1:u89J4tUUeDTlH8xxC3CTW7OHZjbjKoHdQ9W7gCUhtxA=
github.com/google/go-tpm v0.9.7/go.mod h1:h9jEsEECg7gtLis0upRBQU+GhYVH6jMjrFxI8u6bVUY=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
@@ -57,8 +53,8 @@ github.com/influxdata/line-protocol/v2 v2.1.0/go.mod h1:QKw43hdUBg3GTk2iC3iyCxks
github.com/influxdata/line-protocol/v2 v2.2.1 h1:EAPkqJ9Km4uAxtMRgUubJyqAr6zgWM0dznKMLRauQRE=
github.com/influxdata/line-protocol/v2 v2.2.1/go.mod h1:DmB3Cnh+3oxmG6LOBIxce4oaL4CPj3OmMPgvauXh+tM=
github.com/juju/gnuflag v0.0.0-20171113085948-2ce1bb71843d/go.mod h1:2PavIy+JPciBPrBUjwbNvtwB6RQlve+hkpll6QSNmOE=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk=
github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
@@ -68,72 +64,75 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/minio/highwayhash v1.0.4-0.20251030100505-070ab1a87a76 h1:KGuD/pM2JpL9FAYvBrnBBeENKZNh6eNtjqytV6TYjnk=
github.com/minio/highwayhash v1.0.4-0.20251030100505-070ab1a87a76/go.mod h1:GGYsuwP/fPD6Y9hMiXuapVvlIUEhFhMTh0rxU3ik1LQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/nats-io/nats.go v1.39.0 h1:2/yg2JQjiYYKLwDuBzV0FbB2sIV+eFNkEevlRi4n9lI=
github.com/nats-io/nats.go v1.39.0/go.mod h1:MgRb8oOdigA6cYpEPhXJuRVH6UE/V4jblJ2jQ27IXYM=
github.com/nats-io/nkeys v0.4.9 h1:qe9Faq2Gxwi6RZnZMXfmGMZkg3afLLOtrU+gDZJ35b0=
github.com/nats-io/nkeys v0.4.9/go.mod h1:jcMqs+FLG+W5YO36OX6wFIFcmpdAns+w1Wm6D3I/evE=
github.com/nats-io/jwt/v2 v2.8.0 h1:K7uzyz50+yGZDO5o772eRE7atlcSEENpL7P+b74JV1g=
github.com/nats-io/jwt/v2 v2.8.0/go.mod h1:me11pOkwObtcBNR8AiMrUbtVOUGkqYjMQZ6jnSdVUIA=
github.com/nats-io/nats-server/v2 v2.12.3 h1:KRv+1n7lddMVgkJPQer+pt36TcO0ENxjilBmeWdjcHs=
github.com/nats-io/nats-server/v2 v2.12.3/go.mod h1:MQXjG9WjyXKz9koWzUc3jYUMKD8x3CLmTNy91IQQz3Y=
github.com/nats-io/nats.go v1.48.0 h1:pSFyXApG+yWU/TgbKCjmm5K4wrHu86231/w84qRVR+U=
github.com/nats-io/nats.go v1.48.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
github.com/nats-io/nkeys v0.4.12 h1:nssm7JKOG9/x4J8II47VWCL1Ds29avyiQDRn0ckMvDc=
github.com/nats-io/nkeys v0.4.12/go.mod h1:MT59A1HYcjIcyQDJStTfaOY6vhy9XTUjOFo+SVsvpBg=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/oapi-codegen/runtime v1.1.1 h1:EXLHh0DXIJnWhdRPN2w4MXAzFyE4CskzhNLUmtpMYro=
github.com/oapi-codegen/runtime v1.1.1/go.mod h1:SK9X900oXmPWilYR5/WKPzt3Kqxn/uS/+lbpREv+eCg=
github.com/oapi-codegen/runtime v1.1.2 h1:P2+CubHq8fO4Q6fV1tqDBZHCwpVpvPg7oKiYzQgXIyI=
github.com/oapi-codegen/runtime v1.1.2/go.mod h1:SK9X900oXmPWilYR5/WKPzt3Kqxn/uS/+lbpREv+eCg=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc=
github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.67.5 h1:pIgK94WWlQt1WLwAC5j2ynLaBRDiinoAb86HZHTUGI4=
github.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxzaT3A2+HqjUw=
github.com/prometheus/procfs v0.19.2 h1:zUMhqEW66Ex7OXIiDkll3tl9a1ZdilUOd/F6ZXw4Vws=
github.com/prometheus/procfs v0.19.2/go.mod h1:M0aotyiemPhBCM0z5w87kL22CxfcH05ZpYlu+b4J7mw=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/santhosh-tekuri/jsonschema/v5 v5.3.1 h1:lZUw3E0/J3roVtGQ+SCrUrg3ON6NgVqpn3+iol9aGu4=
github.com/santhosh-tekuri/jsonschema/v5 v5.3.1/go.mod h1:uToXkOrWAZ6/Oc07xWQrPOhJotwFIyu2bBVN41fcDUY=
github.com/shopspring/decimal v1.3.1 h1:2Usl1nmF/WZucqkFZhnfFYxxxu8LG21F6nPQBE5gKV8=
github.com/shopspring/decimal v1.3.1/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
github.com/spkg/bom v0.0.0-20160624110644-59b7046e48ad/go.mod h1:qLr4V1qq6nMqFKkMo8ZTx3f+BZEkzsRUY10Xsm2mwU0=
github.com/stmcginnis/gofish v0.15.0 h1:8TG41+lvJk/0Nf8CIIYErxbMlQUy80W0JFRZP3Ld82A=
github.com/stmcginnis/gofish v0.15.0/go.mod h1:BLDSFTp8pDlf/xDbLZa+F7f7eW0E/CHCboggsu8CznI=
github.com/stmcginnis/gofish v0.20.0 h1:hH2V2Qe898F2wWT1loApnkDUrXXiLKqbSlMaH3Y1n08=
github.com/stmcginnis/gofish v0.20.0/go.mod h1:PzF5i8ecRG9A2ol8XT64npKUunyraJ+7t0kYMpQAtqU=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tklauser/go-sysconf v0.3.13 h1:GBUpcahXSpR2xN01jhkNAbTLRk2Yzgggk8IM08lq3r4=
github.com/tklauser/go-sysconf v0.3.13/go.mod h1:zwleP4Q4OehZHGn4CYZDipCgg9usW5IJePewFCGVEa0=
github.com/tklauser/numcpus v0.7.0 h1:yjuerZP127QG9m5Zh/mSO4wqurYil27tHrqwRoRjpr4=
github.com/tklauser/numcpus v0.7.0/go.mod h1:bb6dMVcj8A42tSE7i32fsIUCbQNllK5iDguyOZRUzAY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tklauser/go-sysconf v0.3.16 h1:frioLaCQSsF5Cy1jgRBrzr6t502KIIwQ0MArYICU0nA=
github.com/tklauser/go-sysconf v0.3.16/go.mod h1:/qNL9xxDhc7tx3HSRsLWNnuzbVfh3e7gh/BmM179nYI=
github.com/tklauser/numcpus v0.11.0 h1:nSTwhKH5e1dMNsCdVBukSZrURJRoHbSEQjdEbY+9RXw=
github.com/tklauser/numcpus v0.11.0/go.mod h1:z+LwcLq54uWZTX0u/bGobaV34u6V7KNlTZejzM6/3MQ=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
golang.design/x/thread v0.0.0-20210122121316-335e9adffdf1 h1:P7S/GeHBAFEZIYp0ePPs2kHXoazz8q2KsyxHyQVGCJg=
golang.design/x/thread v0.0.0-20210122121316-335e9adffdf1/go.mod h1:9CWpnTUmlQkfdpdutA1nNf4iE5lAVt3QZOu0Z6hahBE=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.35.0 h1:b15kiHdrGCHrP6LvwaQ3c03kgNhhiMgvlhxHQhmg2Xs=
golang.org/x/crypto v0.35.0/go.mod h1:dy7dXNW32cAb/6/PRuTNsix8T+vJAqvuIy5Bli/x0YQ=
golang.org/x/exp v0.0.0-20250215185904-eff6e970281f h1:oFMYAjX0867ZD2jcNiLBrI9BdpmEkvPyi5YrBGXbamg=
golang.org/x/exp v0.0.0-20250215185904-eff6e970281f/go.mod h1:BHOTPb3L19zxehTsLoJXVaTktb06DFgmdW6Wb9s8jqk=
golang.org/x/net v0.31.0 h1:68CPQngjLL0r2AlUKiSxtQFKvzRVbnzLwMUn5SzcLHo=
golang.org/x/net v0.31.0/go.mod h1:P4fl1q7dY2hnZFxEk4pPSkDHF+QqjitcnDjUQyMM+pM=
golang.org/x/net v0.36.0 h1:vWF2fRbw4qslQsQzgFqZff+BItCvGFQqKzKIzx1rmoA=
golang.org/x/net v0.36.0/go.mod h1:bFmbeoIPfrw4sMHNhb4J9f6+tPziuGjq7Jk/38fxi1I=
golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 h1:Z/6YuSHTLOHfNFdb8zVZomZr7cqNgTJvA8+Qz75D8gU=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96/go.mod h1:nzimsREAkjBCIEFtHiYkrJyT+2uy9YZJB7H1k68CXZU=
golang.org/x/net v0.49.0 h1:eeHFmOGUTtaaPSGNmjBKpbng9MulQsJURQUAfUwY++o=
golang.org/x/net v0.49.0/go.mod h1:/ysNB2EvaqvesRkuLAyjI1ycPZlQHM3q01F02UY/MV8=
golang.org/x/sys v0.0.0-20210122093101-04d7465088b8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.35.2 h1:8Ar7bF+apOIoThw1EdZl0p1oWvMqTHmpA2fRTyZO8io=
google.golang.org/protobuf v1.35.2/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -1,3 +1,14 @@
<!--
---
title: Metric Aggregator
description: Subsystem for evaluating expressions on metrics (deprecated)
categories: [cc-metric-collector]
tags: ['Developer']
weight: 1
hugo_path: docs/reference/cc-metric-collector/internal/metricaggregator/_index.md
---
-->
# The MetricAggregator
In some cases, metrics or raw values need to be combined further. For that, strings like `foo + 1` with a runtime-dependent `foo` need to be evaluated. The MetricAggregator relies on the [`gval`](https://github.com/PaesslerAG/gval) Golang package to perform all expression evaluation. The `gval` package provides the basic arithmetic operations, but the MetricAggregator defines additional ones.
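As a minimal sketch of the underlying mechanism, here is plain `gval` evaluating such a string with a runtime-provided `foo`, without any of the MetricAggregator's additional functions:
```go
package main

import (
	"fmt"

	"github.com/PaesslerAG/gval"
)

func main() {
	// "foo" is only known at runtime and is handed to the evaluator as a variable.
	vars := map[string]interface{}{"foo": 41.0}

	value, err := gval.Evaluate("foo + 1", vars)
	if err != nil {
		fmt.Println("evaluation failed:", err)
		return
	}
	fmt.Println(value) // 42
}
```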
@@ -35,4 +46,4 @@ The MetricAggregator provides these functions additional to the `Full` language
## Limitations
- Since the metrics are written in JSON files which do not allow `""` without proper escaping inside of JSON strings, you have to use `''` for strings.
- Since `\` is interpreted by JSON as escape character, it cannot be used in metrics. But it is required to write regular expressions. So instead of `/`, use `%` and the MetricAggregator replaces them after reading the JSON file.
- Since `\` is interpreted by JSON as escape character, it cannot be used in metrics. But it is required to write regular expressions. So instead of `/`, use `%` and the MetricAggregator replaces them after reading the JSON file.

View File

@@ -1,17 +1,25 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package metricAggregator
import (
"context"
"fmt"
"maps"
"math"
"os"
"strings"
"sync"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
topo "github.com/ClusterCockpit/cc-metric-collector/pkg/ccTopology"
"github.com/PaesslerAG/gval"
@@ -114,9 +122,7 @@ func (c *metricAggregator) Init(output chan lp.CCMessage) error {
func (c *metricAggregator) Eval(starttime time.Time, endtime time.Time, metrics []lp.CCMessage) {
vars := make(map[string]interface{})
for k, v := range c.constants {
vars[k] = v
}
maps.Copy(vars, c.constants)
vars["starttime"] = starttime
vars["endtime"] = endtime
for _, f := range c.functions {

View File

@@ -1,13 +1,19 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package metricAggregator
import (
"errors"
"fmt"
"regexp"
"slices"
"strings"
"golang.org/x/exp/slices"
topo "github.com/ClusterCockpit/cc-metric-collector/pkg/ccTopology"
)
@@ -162,7 +168,7 @@ func medianfunc(args interface{}) (interface{}, error) {
func lenfunc(args interface{}) (interface{}, error) {
var err error = nil
var length int = 0
length := 0
switch values := args.(type) {
case []float64:
length = len(values)
@@ -231,7 +237,7 @@ func matchfunc(args ...interface{}) (interface{}, error) {
case string:
switch total := args[1].(type) {
case string:
smatch := strings.Replace(match, "%", "\\", -1)
smatch := strings.ReplaceAll(match, "%", "\\")
regex, err := regexp.Compile(smatch)
if err != nil {
return false, err

View File

@@ -1,11 +1,22 @@
<!--
---
title: Message Router
description: Routing component inside cc-metric-collector
categories: [cc-metric-collector]
tags: ['Developer']
weight: 1
hugo_path: docs/reference/cc-metric-collector/internal/metricrouter/_index.md
---
-->
# CC Metric Router
The CCMetric router sits in between the collectors and the sinks and can be used to add and remove tags to/from traversing [CCMessages](https://pkg.go.dev/github.com/ClusterCockpit/cc-energy-manager@v0.0.0-20240919152819-92a17f2da4f7/pkg/cc-message.
The CCMetric router sits between the collectors and the sinks and can be used to add tags to, and remove tags from, traversing [CCMessages](https://pkg.go.dev/github.com/ClusterCockpit/cc-lib/ccMessage).
# Configuration
**Note**: Use the [message processor configuration](../../pkg/messageProcessor/README.md) with option `process_messages`.
**Note**: Use the [message processor configuration](https://github.com/ClusterCockpit/cc-lib/blob/main/messageProcessor/README.md) with option `process_messages`.
```json
{
@@ -69,7 +80,7 @@ The CCMetric router sits in between the collectors and the sinks and can be used
There are three main options `add_tags`, `delete_tags` and `interval_timestamp`. `add_tags` and `delete_tags` are lists consisting of dicts with `key`, `value` and `if`. The `value` can be omitted in the `delete_tags` part as it only uses the `key` for removal. The `interval_timestamp` setting means that a unique timestamp is applied to all metrics traversing the router during an interval.
**Note**: Use the [message processor configuration](../../pkg/messageProcessor/README.md) (option `process_messages`) instead of `add_tags`, `delete_tags`, `drop_metrics`, `drop_metrics_if`, `rename_metrics`, `normalize_units` and `change_unit_prefix`. These options are deprecated and will be removed in future versions. Until then, they are added to the message processor.
**Note**: Use the [message processor configuration](https://github.com/ClusterCockpit/cc-lib/blob/main/messageProcessor/README.md) (option `process_messages`) instead of `add_tags`, `delete_tags`, `drop_metrics`, `drop_metrics_if`, `rename_metrics`, `normalize_units` and `change_unit_prefix`. These options are deprecated and will be removed in future versions. Until then, they are added to the message processor.
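For illustration, a minimal sketch of how a single `add_tags` entry ends up as a message-processor rule, mirroring the router code further down in this changeset; the entry values and the error handling are example choices, not the collector's actual configuration:
```go
package main

import (
	"fmt"

	mp "github.com/ClusterCockpit/cc-lib/v2/messageProcessor"
)

func main() {
	p, err := mp.NewMessageProcessor()
	if err != nil {
		fmt.Println("creating message processor failed:", err)
		return
	}

	// Example add_tags entry: {"key": "cluster", "value": "testcluster", "if": "*"}
	cond, key, value := "*", "cluster", "testcluster"

	// A condition of "*" means "always" and is translated to the
	// expression "true" before the rule is registered.
	if cond == "*" {
		cond = "true"
	}
	if err := p.AddAddTagsByCondition(cond, key, value); err != nil {
		fmt.Println("registering add_tags rule failed:", err)
	}
}
```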
# Processing order in the router
@@ -225,13 +236,13 @@ __deprecated__
The cc-metric-collector tries to read the data from the system as it is reported. If available, it tries to read the metric unit from the system as well (e.g. from `/proc/meminfo`). The problem is that, depending on the source, the metric units are named differently. Just think about `byte`, `Byte`, `B`, `bytes`, ...
The [cc-units](https://github.com/ClusterCockpit/cc-units) package provides us a normalization option to use the same metric unit name for all metrics. It this option is set to true, all `unit` meta tags are normalized.
The [cc-units](https://github.com/ClusterCockpit/cc-lib/ccUnits) package provides a normalization option to use the same metric unit name for all metrics. If this option is set to true, all `unit` meta tags are normalized.
## The `change_unit_prefix` section
__deprecated__
It is often the case that metrics are reported by the system using a rather outdated unit prefix (like `/proc/meminfo` still uses kByte despite current memory sizes are in the GByte range). If you want to change the prefix of a unit, you can do that with the help of [cc-units](https://github.com/ClusterCockpit/cc-units). The setting works on the metric name and requires the new prefix for the metric. The cc-units package determines the scaling factor.
It is often the case that metrics are reported by the system using a rather outdated unit prefix (for example, `/proc/meminfo` still uses kByte although current memory sizes are in the GByte range). If you want to change the prefix of a unit, you can do that with the help of [cc-units](https://github.com/ClusterCockpit/cc-lib/ccUnits). The setting works on the metric name and requires the new prefix for the metric. The cc-units package determines the scaling factor.
# Aggregate metric values of the current interval with the `interval_aggregates` option
@@ -263,7 +274,7 @@ The above configuration, collects all metric values for metrics evaluating `if`
If you are not interested in the input metrics `sub_metric_%d+` at all, you can add the same condition used here to the `drop_metrics_if` section to drop them.
Use cases for `interval_aggregates`:
- Combine multiple metrics of the a collector to a new one like the [MemstatCollector](../../collectors/memstatMetric.md) does it for `mem_used`)):
- Combine multiple metrics of a collector into a new one, like the [MemstatCollector](../../collectors/memstatMetric.md) does for `mem_used`:
```json
{
"name" : "mem_used",

View File

@@ -1,12 +1,19 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package metricRouter
import (
"sync"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
agg "github.com/ClusterCockpit/cc-metric-collector/internal/metricAggregator"
mct "github.com/ClusterCockpit/cc-metric-collector/pkg/multiChanTicker"
)
@@ -44,7 +51,7 @@ type MetricCache interface {
}
func (c *metricCache) Init(output chan lp.CCMessage, ticker mct.MultiChanTicker, wg *sync.WaitGroup, numPeriods int) error {
var err error = nil
var err error
c.done = make(chan bool)
c.wg = wg
c.ticker = ticker
@@ -154,8 +161,8 @@ func (c *metricCache) DeleteAggregation(name string) error {
// is the current one, index=1 the last interval and so on. Returns an empty array if a wrong index
// is given (negative index, index larger than configured number of total intervals, ...)
func (c *metricCache) GetPeriod(index int) (time.Time, time.Time, []lp.CCMessage) {
var start time.Time = time.Now()
var stop time.Time = time.Now()
start := time.Now()
stop := time.Now()
var metrics []lp.CCMessage
if index >= 0 && index < c.numPeriods {
pindex := c.curPeriod - index

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package metricRouter
import (
@@ -8,10 +15,10 @@ import (
"sync"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
mp "github.com/ClusterCockpit/cc-lib/messageProcessor"
lp "github.com/ClusterCockpit/cc-lib/v2/ccMessage"
mp "github.com/ClusterCockpit/cc-lib/v2/messageProcessor"
agg "github.com/ClusterCockpit/cc-metric-collector/internal/metricAggregator"
mct "github.com/ClusterCockpit/cc-metric-collector/pkg/multiChanTicker"
)
@@ -100,10 +107,8 @@ func (r *metricRouter) Init(ticker mct.MultiChanTicker, wg *sync.WaitGroup, rout
cclog.ComponentError("MetricRouter", err.Error())
return err
}
r.maxForward = 1
if r.config.MaxForward > r.maxForward {
r.maxForward = r.config.MaxForward
}
r.maxForward = max(1, r.config.MaxForward)
if r.config.NumCacheIntervals > 0 {
r.cache, err = NewCache(r.cache_input, r.ticker, &r.cachewg, r.config.NumCacheIntervals)
if err != nil {
@@ -111,50 +116,74 @@ func (r *metricRouter) Init(ticker mct.MultiChanTicker, wg *sync.WaitGroup, rout
return err
}
for _, agg := range r.config.IntervalAgg {
r.cache.AddAggregation(agg.Name, agg.Function, agg.Condition, agg.Tags, agg.Meta)
err = r.cache.AddAggregation(agg.Name, agg.Function, agg.Condition, agg.Tags, agg.Meta)
if err != nil {
return fmt.Errorf("MetricCache AddAggregation() failed: %w", err)
}
}
}
p, err := mp.NewMessageProcessor()
if err != nil {
return fmt.Errorf("initialization of message processor failed: %v", err.Error())
return fmt.Errorf("MessageProcessor NewMessageProcessor() failed: %w", err)
}
r.mp = p
if len(r.config.MessageProcessor) > 0 {
err = r.mp.FromConfigJSON(r.config.MessageProcessor)
if err != nil {
return fmt.Errorf("failed parsing JSON for message processor: %v", err.Error())
return fmt.Errorf("MessageProcessor FromConfigJSON() failed: %w", err)
}
}
for _, mname := range r.config.DropMetrics {
r.mp.AddDropMessagesByName(mname)
err = r.mp.AddDropMessagesByName(mname)
if err != nil {
return fmt.Errorf("MessageProcessor AddDropMessagesByName() failed: %w", err)
}
}
for _, cond := range r.config.DropMetricsIf {
r.mp.AddDropMessagesByCondition(cond)
err = r.mp.AddDropMessagesByCondition(cond)
if err != nil {
return fmt.Errorf("MessageProcessor AddDropMessagesByCondition() failed: %w", err)
}
}
for _, data := range r.config.AddTags {
cond := data.Condition
if cond == "*" {
cond = "true"
}
r.mp.AddAddTagsByCondition(cond, data.Key, data.Value)
err = r.mp.AddAddTagsByCondition(cond, data.Key, data.Value)
if err != nil {
return fmt.Errorf("MessageProcessor AddAddTagsByCondition() failed: %w", err)
}
}
for _, data := range r.config.DelTags {
cond := data.Condition
if cond == "*" {
cond = "true"
}
r.mp.AddDeleteTagsByCondition(cond, data.Key, data.Value)
err = r.mp.AddDeleteTagsByCondition(cond, data.Key, data.Value)
if err != nil {
return fmt.Errorf("MessageProcessor AddDeleteTagsByCondition() failed: %w", err)
}
}
for oldname, newname := range r.config.RenameMetrics {
r.mp.AddRenameMetricByName(oldname, newname)
err = r.mp.AddRenameMetricByName(oldname, newname)
if err != nil {
return fmt.Errorf("MessageProcessor AddRenameMetricByName() failed: %w", err)
}
}
for metricName, prefix := range r.config.ChangeUnitPrefix {
r.mp.AddChangeUnitPrefix(fmt.Sprintf("name == '%s'", metricName), prefix)
err = r.mp.AddChangeUnitPrefix(fmt.Sprintf("name == '%s'", metricName), prefix)
if err != nil {
return fmt.Errorf("MessageProcessor AddChangeUnitPrefix() failed: %w", err)
}
}
r.mp.SetNormalizeUnits(r.config.NormalizeUnits)
r.mp.AddAddTagsByCondition("true", r.config.HostnameTagName, r.hostname)
err = r.mp.AddAddTagsByCondition("true", r.config.HostnameTagName, r.hostname)
if err != nil {
return fmt.Errorf("MessageProcessor AddAddTagsByCondition() failed: %w", err)
}
// r.config.dropMetrics = make(map[string]bool)
// for _, mname := range r.config.DropMetrics {

View File

@@ -1,3 +1,10 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package ccTopology
import (
@@ -6,11 +13,11 @@ import (
"os"
"path/filepath"
"regexp"
"slices"
"strconv"
"strings"
cclogger "github.com/ClusterCockpit/cc-lib/ccLogger"
"golang.org/x/exp/slices"
cclogger "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
)
const SYSFS_CPUBASE = `/sys/devices/system/cpu`
@@ -73,7 +80,7 @@ func fileToList(path string) []int {
// Create list
list := make([]int, 0)
stringBuffer := strings.TrimSpace(string(buffer))
for _, valueRangeString := range strings.Split(stringBuffer, ",") {
for valueRangeString := range strings.SplitSeq(stringBuffer, ",") {
valueRange := strings.Split(valueRangeString, "-")
switch len(valueRange) {
case 1:

View File

@@ -1,125 +0,0 @@
package hostlist
import (
"fmt"
"regexp"
"sort"
"strconv"
"strings"
)
func Expand(in string) (result []string, err error) {
// Create ranges regular expression
reStNumber := "[[:digit:]]+"
reStRange := reStNumber + "-" + reStNumber
reStOptionalNumberOrRange := "(" + reStNumber + ",|" + reStRange + ",)*"
reStNumberOrRange := "(" + reStNumber + "|" + reStRange + ")"
reStBraceLeft := "[[]"
reStBraceRight := "[]]"
reStRanges := reStBraceLeft +
reStOptionalNumberOrRange +
reStNumberOrRange +
reStBraceRight
reRanges := regexp.MustCompile(reStRanges)
// Create host list regular expression
reStDNSChars := "[a-zA-Z0-9-]+"
reStPrefix := "^(" + reStDNSChars + ")"
reStOptionalSuffix := "(" + reStDNSChars + ")?"
re := regexp.MustCompile(reStPrefix + "([[][0-9,-]+[]])?" + reStOptionalSuffix)
// Remove all delimiters from the input
in = strings.TrimLeft(in, ", ")
for len(in) > 0 {
if v := re.FindStringSubmatch(in); v != nil {
// Remove matched part from the input
lenPrefix := len(v[0])
in = in[lenPrefix:]
// Remove all delimiters from the input
in = strings.TrimLeft(in, ", ")
// matched prefix, range and suffix
hlPrefix := v[1]
hlRanges := v[2]
hlSuffix := v[3]
// Single node without ranges
if hlRanges == "" {
result = append(result, hlPrefix)
continue
}
// Node with ranges
if v := reRanges.FindStringSubmatch(hlRanges); v != nil {
// Remove braces
hlRanges = hlRanges[1 : len(hlRanges)-1]
// Split host ranges at ,
for _, hlRange := range strings.Split(hlRanges, ",") {
// Split host range at -
RangeStartEnd := strings.Split(hlRange, "-")
// Range is only a single number
if len(RangeStartEnd) == 1 {
result = append(result, hlPrefix+RangeStartEnd[0]+hlSuffix)
continue
}
// Range has a start and an end
widthRangeStart := len(RangeStartEnd[0])
widthRangeEnd := len(RangeStartEnd[1])
iStart, _ := strconv.ParseUint(RangeStartEnd[0], 10, 64)
iEnd, _ := strconv.ParseUint(RangeStartEnd[1], 10, 64)
if iStart > iEnd {
return nil, fmt.Errorf("single range start is greater than end: %s", hlRange)
}
// Create print format string for range numbers
doPadding := widthRangeStart == widthRangeEnd
widthPadding := widthRangeStart
var formatString string
if doPadding {
formatString = "%0" + fmt.Sprint(widthPadding) + "d"
} else {
formatString = "%d"
}
formatString = hlPrefix + formatString + hlSuffix
// Add nodes from this range
for i := iStart; i <= iEnd; i++ {
result = append(result, fmt.Sprintf(formatString, i))
}
}
} else {
return nil, fmt.Errorf("not at hostlist range: %s", hlRanges)
}
} else {
return nil, fmt.Errorf("not a hostlist: %s", in)
}
}
if result != nil {
// sort
sort.Strings(result)
// uniq
previous := 1
for current := 1; current < len(result); current++ {
if result[current-1] != result[current] {
if previous != current {
result[previous] = result[current]
}
previous++
}
}
result = result[:previous]
}
return
}

View File

@@ -1,126 +0,0 @@
package hostlist
import (
"testing"
)
func TestExpand(t *testing.T) {
// Compare two slices of strings
equal := func(a, b []string) bool {
if len(a) != len(b) {
return false
}
for i, v := range a {
if v != b[i] {
return false
}
}
return true
}
type testDefinition struct {
input string
resultExpected []string
errorExpected bool
}
expandTests := []testDefinition{
{
// Single node
input: "n1",
resultExpected: []string{"n1"},
errorExpected: false,
},
{
// Single node, duplicated
input: "n1,n1",
resultExpected: []string{"n1"},
errorExpected: false,
},
{
// Single node with padding
input: "n[01]",
resultExpected: []string{"n01"},
errorExpected: false,
},
{
// Single node with suffix
input: "n[01]-p",
resultExpected: []string{"n01-p"},
errorExpected: false,
},
{
// Multiple nodes with a single range
input: "n[1-2]",
resultExpected: []string{"n1", "n2"},
errorExpected: false,
},
{
// Multiple nodes with a single range and a single index
input: "n[1-2,3]",
resultExpected: []string{"n1", "n2", "n3"},
errorExpected: false,
},
{
// Multiple nodes with different prefixes
input: "n[1-2],m[1,2]",
resultExpected: []string{"m1", "m2", "n1", "n2"},
errorExpected: false,
},
{
// Multiple nodes with different suffixes
input: "n[1-2]-p,n[1,2]-q",
resultExpected: []string{"n1-p", "n1-q", "n2-p", "n2-q"},
errorExpected: false,
},
{
// Multiple nodes with and without node ranges
input: " n09, n[01-04,06-07,09] , , n10,n04",
resultExpected: []string{"n01", "n02", "n03", "n04", "n06", "n07", "n09", "n10"},
errorExpected: false,
},
{
// Forbidden DNS character
input: "n@",
resultExpected: []string{},
errorExpected: true,
},
{
// Forbidden range
input: "n[1-2-2,3]",
resultExpected: []string{},
errorExpected: true,
},
{
// Forbidden range limits
input: "n[2-1]",
resultExpected: []string{},
errorExpected: true,
},
}
for _, expandTest := range expandTests {
result, err := Expand(expandTest.input)
hasError := err != nil
if hasError != expandTest.errorExpected && hasError {
t.Errorf("Expand('%s') failed: unexpected error '%v'",
expandTest.input, err)
continue
}
if hasError != expandTest.errorExpected && !hasError {
t.Errorf("Expand('%s') did not fail as expected: got result '%+v'",
expandTest.input, result)
continue
}
if !hasError && !equal(result, expandTest.resultExpected) {
t.Errorf("Expand('%s') failed: got result '%+v', expected result '%v'",
expandTest.input, result, expandTest.resultExpected)
continue
}
t.Logf("Checked hostlist.Expand('%s'): result = '%+v', err = '%v'",
expandTest.input, result, err)
}
}

View File

@@ -1,3 +1,14 @@
<!--
---
title: Multi-channel Ticker
description: Timer ticker that sends out the tick to multiple channels
categories: [cc-metric-collector]
tags: ['Developer']
weight: 1
hugo_path: docs/reference/cc-metric-collector/pkg/multichanticker/_index.md
---
-->
# MultiChanTicker
The idea of this ticker is to multiply the output channels. The original Golang `time.Ticker` provides only a single output channel, so the tick can only be received by a single consumer. This ticker allows adding multiple channels that all get notified about each time tick.
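A minimal sketch of the fan-out idea, independent of the actual `multiChanTicker` implementation: one goroutine reads the single `time.Ticker` channel and forwards every tick to all registered channels.
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	// Two consumers, each with its own channel.
	chans := []chan time.Time{
		make(chan time.Time, 1),
		make(chan time.Time, 1),
	}

	// Fan out every tick from the single time.Ticker channel
	// to all registered channels.
	go func() {
		for t := range ticker.C {
			for _, c := range chans {
				c <- t
			}
		}
	}()

	// Each consumer receives the same ticks independently.
	for i := 0; i < 2; i++ {
		fmt.Println("consumer 0 got tick at", <-chans[0])
		fmt.Println("consumer 1 got tick at", <-chans[1])
	}
}
```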

View File

@@ -1,9 +1,16 @@
// Copyright (C) NHR@FAU, University Erlangen-Nuremberg.
// All rights reserved. This file is part of cc-lib.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
// additional authors:
// Holger Obermaier (NHR@KIT)
package multiChanTicker
import (
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
cclog "github.com/ClusterCockpit/cc-lib/v2/ccLogger"
)
type multiChanTicker struct {

View File

@@ -1,4 +1,6 @@
Package: cc-metric-collector
Section: misc
Priority: optional
Version: {VERSION}
Installed-Size: {INSTALLED_SIZE}
Architecture: {ARCH}

View File

@@ -44,6 +44,8 @@ def group_to_json(groupfile):
scope = "socket"
if "PWR" in calc:
scope = "socket"
if "UMC" in calc:
scope = "socket"
m = {"name" : metric, "calc": calc, "type" : scope, "publish" : True}
metrics.append(m)