Compare commits


339 Commits

Author SHA1 Message Date
Thomas Gruber
4c05557941 Merge branch 'develop' into nfsio_remount_fix 2025-04-16 13:31:22 +02:00
Thomas Gruber
02623f8c9d Change to cc-lib (#135)
* Change to ccMessage from cc-lib

* Remove local development path

* Use receiver, sinks, ccLogger and ccConfig from cc-lib

* Fix ccLogger import path

* Update CI
2025-04-16 13:00:13 +02:00
Thomas Roehl
e080a82b55 Merge branch 'develop' into nfsio_remount_fix 2025-03-17 17:50:29 +01:00
Thomas Roehl
b5520efc25 Fix artifacts in the netstat collector from the incomplete cc-lib switch 2025-03-15 04:02:26 +01:00
Thomas Roehl
d2b1bad1b8 Fix artifacts of the incomplete cc-lib switch 2025-03-15 04:01:01 +01:00
Thomas Roehl
5762afa40b Delete mountpoint when it vanishes, not just its data 2025-03-15 03:48:38 +01:00
brinkcoder
0e57c8db1c Add derived_values for numastats (#134)
* Check creation of CCMessage in NATS receiver

* add derived_values for numastats

* change to ccMessage

* remove vim command artefact

---------

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
Co-authored-by: Thomas Gruber <Thomas.Roehl@googlemail.com>
2025-02-19 11:35:32 +01:00
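For context on what a derived value means for numastats: the counters in /sys/devices/system/node/node*/numastat (numa_hit, numa_miss, ...) only ever grow, so a per-second rate has to be derived from two successive readings. A minimal Go sketch of that idea; the function and the sample values are illustrative and not taken from the collector:

```go
package main

import (
	"fmt"
	"time"
)

// derivedRate turns two successive counter readings into a per-second rate.
// Illustrative only: the real collector keeps its previous readings and
// timestamps per NUMA domain in its collector struct.
func derivedRate(prev, curr uint64, tPrev, tNow time.Time) float64 {
	dt := tNow.Sub(tPrev).Seconds()
	if dt <= 0 || curr < prev { // guard against counter resets and bad intervals
		return 0
	}
	return float64(curr-prev) / dt
}

func main() {
	t0 := time.Now()
	t1 := t0.Add(10 * time.Second)
	// e.g. numa_hit went from 1000000 to 1250000 within 10 seconds
	fmt.Printf("numa_hit_rate: %.1f events/s\n", derivedRate(1000000, 1250000, t0, t1))
}
```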
brinkcoder
f2f38c81af Add exclude_devices to iostat (#133)
* Check creation of CCMessage in NATS receiver

* add exclude_device for iostatMetric

* add md file

---------

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
Co-authored-by: Thomas Gruber <Thomas.Roehl@googlemail.com>
2025-02-19 11:34:56 +01:00
brinkcoder
f9acc51a50 Add derived values for nfsiostat (#132)
* Check creation of CCMessage in NATS receiver

* add derived_values for nfsiostatMetric

---------

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
Co-authored-by: Thomas Gruber <Thomas.Roehl@googlemail.com>
2025-02-19 11:34:06 +01:00
brinkcoder
87346e2eae Fix excluded metrics for diskstat and add exclude_mounts (#131)
* Check creation of CCMessage in NATS receiver

* fix excluded metrics and add optional mountpoint exclude

---------

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
Co-authored-by: Thomas Gruber <Thomas.Roehl@googlemail.com>
2025-02-19 11:33:13 +01:00
brinkcoder
0f92f10b66 Add optional interface alias in netstat (#130)
* Check creation of CCMessage in NATS receiver

* add optional interface aliases for netstatMetric

* small fix

---------

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
Co-authored-by: Thomas Gruber <Thomas.Roehl@googlemail.com>
2025-02-19 11:32:15 +01:00
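The alias option boils down to a lookup from the kernel's interface name to a configured alias before the metric is tagged. A rough sketch under the assumption of a JSON key named interface_aliases; the actual option name in the collector may differ:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// netstatConfig is a cut-down, hypothetical version of the collector config.
type netstatConfig struct {
	// Maps kernel interface names to the alias reported in the metric tags.
	InterfaceAliases map[string]string `json:"interface_aliases,omitempty"`
}

// aliasFor returns the configured alias, or the original name if none is set.
func (c *netstatConfig) aliasFor(iface string) string {
	if alias, ok := c.InterfaceAliases[iface]; ok {
		return alias
	}
	return iface
}

func main() {
	raw := []byte(`{"interface_aliases": {"eno1": "eth0"}}`)
	var cfg netstatConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.aliasFor("eno1"), cfg.aliasFor("ib0")) // eth0 ib0
}
```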
Michael Panzlaff
6901b06e44 Rename 'process_message' to 'process_messages' in metricRouter config
This makes the behavior more consistent with the other modules, which
have their MessageProcessor named 'process_messages'. This most likely
was just a typo.
2025-02-03 15:23:51 +01:00
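After this rename the router's message-processor block lives under the JSON key process_messages, like in the other modules. A hedged sketch of what such a config field can look like in Go; the struct and the inner processor options are illustrative, not the actual metricRouter code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// routerConfig shows only the renamed field; the real metricRouter config
// has many more options.
type routerConfig struct {
	// Formerly "process_message"; now consistent with the other modules.
	ProcessMessages json.RawMessage `json:"process_messages,omitempty"`
}

func main() {
	// The content of the processor block is just an example payload here.
	raw := []byte(`{"process_messages": {"drop_by_name": ["debug_metric"]}}`)
	var cfg routerConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Println("router processor config:", string(cfg.ProcessMessages))
}
```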
Thomas Roehl
7b343d0bab Use CCMessage FromBytes instead of Influx's decoder 2024-12-27 15:22:59 +00:00
Thomas Roehl
7d3180b526 Check creation of CCMessage in NATS receiver 2024-12-27 15:00:48 +00:00
Thomas Roehl
70a6afc549 Generate HUGO inputs out of Markdown files 2024-12-23 17:55:48 +01:00
Thomas Roehl
e02a018327 Mark all JSON config fields of message processor as omitempty 2024-12-23 17:52:34 +01:00
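Marking every JSON field with ,omitempty means zero-valued options are simply dropped when a configuration is marshalled back to JSON, instead of showing up as empty strings, nulls or false. A small illustration with made-up field names:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// processorConfig uses hypothetical field names; the point is the ",omitempty" tag.
type processorConfig struct {
	DropMessages []string `json:"drop_messages,omitempty"`
	AddBaseEnv   bool     `json:"add_base_env,omitempty"`
}

func main() {
	out, _ := json.Marshal(processorConfig{})
	fmt.Println(string(out)) // prints {} — unset fields are omitted
}
```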
Thomas Roehl
bcecdd033b Fix documentation of RAPL collector 2024-12-23 17:51:43 +01:00
Thomas Roehl
2645ffeff3 Merge branch 'main' into develop 2024-12-21 02:39:08 +01:00
Thomas Roehl
ee4e1baf5b Fix Release part 2024-12-20 21:07:33 +01:00
Thomas Roehl
94c80307e8 Fix Release part 2024-12-20 21:03:03 +01:00
Thomas Roehl
f859fe178d Fix Release part 2024-12-20 20:56:22 +01:00
Thomas Roehl
0ff8c8616e Fix UBI9 RPM name 2024-12-20 20:45:29 +01:00
Thomas Roehl
32e93b362e Remove go installation through apt for Ubuntu 24.04 2024-12-20 20:41:59 +01:00
Thomas Roehl
1b61b5dae4 Overwrite files created by previous tag 2024-12-20 20:38:59 +01:00
Thomas Roehl
e968aa1991 Fix wrongly named packages 2024-12-20 20:33:10 +01:00
Thomas Gruber
160c3cde47 Develop to main (#127)
* Remove go-toolkit as build requirement for RPM builds if run in CI

* Remove condition around BuildRequires and use go-toolkit for RPM builds

* use go-toolkit for RPM builds

* Install go-toolkit to fulfill build requirements for RPM

* Add golang-race for UBI9 and Alma9

* Fix wrongly named packages
2024-12-20 20:30:04 +01:00
Thomas Gruber
d2a38e3844 Merge branch 'main' into develop 2024-12-20 20:27:48 +01:00
Thomas Roehl
1f35f6d3ca Fix wrongly named packages 2024-12-20 20:26:38 +01:00
Thomas Roehl
e7d76dd0d8 Add golang-race for UBI9 and Alma9 2024-12-20 20:18:31 +01:00
Thomas Roehl
482ae046cb Install go-toolkit to fulfill build requirements for RPM 2024-12-20 20:18:31 +01:00
Thomas Roehl
d00b14f3e8 use go-toolkit for RPM builds 2024-12-20 20:18:31 +01:00
Thomas Roehl
4fd8c87157 Remove condition around BuildRequires and use go-toolkit for RPM builds 2024-12-20 20:18:31 +01:00
Thomas Roehl
1e2e43742f Remove go-toolkit as build requirement for RPM builds if run in CI 2024-12-20 20:18:31 +01:00
Thomas Roehl
7e6870c7b3 Add golang-race for UBI9 and Alma9 2024-12-20 20:15:59 +01:00
Thomas Roehl
d881093524 Install go-toolkit to fulfill build requirements for RPM 2024-12-20 20:12:03 +01:00
Thomas Roehl
c01096c157 use go-toolkit for RPM builds 2024-12-20 18:49:28 +01:00
Thomas Roehl
3d70c8afc9 Remove condition around BuildRequires and use go-toolkit for RPM builds 2024-12-20 18:43:21 +01:00
Thomas Roehl
7ee85a07dc Remove go-toolkit as build requirement for RPM builds if run in CI 2024-12-20 18:28:32 +01:00
Thomas Roehl
5ca669951f Merge branch 'develop' 2024-12-20 18:18:10 +01:00
Thomas Roehl
02344f30a4 Add Alma9, UBI9 and Ubuntu 24.04 to release workflow 2024-12-20 17:52:09 +01:00
Thomas Roehl
c2c8f3c73e Fix dependency installation for UBI9 2024-12-20 17:46:46 +01:00
Thomas Roehl
b3f1b63617 Add UBI9 build and different container sources for UBI images 2024-12-20 17:43:14 +01:00
Thomas Roehl
100d306473 Add more test builds to runonce workflow 2024-12-20 17:36:54 +01:00
Thomas Roehl
ea04b7ed28 Fix job name 2024-12-20 17:32:52 +01:00
Thomas Roehl
31994f44fa Update Github Action with new OS versions and action versions 2024-12-20 17:32:01 +01:00
Thomas Roehl
708e145020 natsSink: Use flush timer handling from httpSink and some comments 2024-12-20 17:09:05 +01:00
Thomas Roehl
d0af494149 httpSink: remove unused extended tag list 2024-12-20 17:09:05 +01:00
Thomas Roehl
95c0803a3c Use common function to add message to ILP encoder 2024-12-20 17:09:05 +01:00
Thomas Roehl
87309fcd2b natsSink: Use flush timer handling from httpSink and some comments 2024-12-20 17:07:16 +01:00
Thomas Roehl
8915c2fd5d httpSink: remove unused extended tag list 2024-12-20 17:06:50 +01:00
Thomas Roehl
27faafef78 Use common function to add message to ILP encoder 2024-12-20 15:43:19 +01:00
Thomas Gruber
7840de7b82 Merge develop branch into main (#123)
* Add cpu_used (all-cpu_idle) to CpustatCollector

* Update cc-metric-collector.init

* Allow selection of timestamp precision in HttpSink

* Add comment about precision requirement for cc-metric-store

* Fix for API changes in gofish@v0.15.0

* Update requirements to latest version

* Read sensors through redfish

* Update golang toolchain to 1.21

* Remove stray error check

* Update main config in configuration.md

* Update Release action to use golang 1.22 stable release, no golang RPMs anymore

* Update runonce action to use golang 1.22 stable release, no golang RPMs anymore

* Update README.md

Use right JSON type in configuration

* Update sink's README

* Test whether ipmitool or ipmi-sensors can be executed without errors

* Little fixes to the prometheus sink (#115)

* Add uint64 to float64 cast option

* Add prometheus sink to the list of available sinks

* Add aggregated counters by gpu for nvlink errors

---------

Co-authored-by: Michael Schwarz <schwarz@uni-paderborn.de>

* Ccmessage migration (#119)

* Add cpu_used (all-cpu_idle) to CpustatCollector

* Update cc-metric-collector.init

* Allow selection of timestamp precision in HttpSink

* Add comment about precision requirement for cc-metric-store

* Fix for API changes in gofish@v0.15.0

* Update requirements to latest version

* Read sensors through redfish

* Update golang toolchain to 1.21

* Remove stray error check

* Update main config in configuration.md

* Update Release action to use golang 1.22 stable release, no golang RPMs anymore

* Update runonce action to use golang 1.22 stable release, no golang RPMs anymore

* Switch to CCMessage for all files.

---------

Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>
Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>

* Switch to ccmessage also for latest additions in nvidiaMetric

* New Message processor (#118)

* Add cpu_used (all-cpu_idle) to CpustatCollector

* Update cc-metric-collector.init

* Allow selection of timestamp precision in HttpSink

* Add comment about precision requirement for cc-metric-store

* Fix for API changes in gofish@v0.15.0

* Update requirements to latest version

* Read sensors through redfish

* Update golang toolchain to 1.21

* Remove stray error check

* Update main config in configuration.md

* Update Release action to use golang 1.22 stable release, no golang RPMs anymore

* Update runonce action to use golang 1.22 stable release, no golang RPMs anymore

* New message processor to check whether a message should be dropped or manipulate it in flight

* Create a copy of message before manipulation

---------

Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>
Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>

* Update collector's Makefile and go.mod/sum files

* Use message processor in router, all sinks and all receivers

* Add support for credential file (NKEY) to NATS sink and receiver

* Fix JSON keys in message processor configuration

* Update docs for message processor, router and the default router config file

* Add link to expr syntax and fix regex matching docs

* Update sample collectors

* Minor style change in collector manager

* Some helpers for ccTopology

* LIKWID collector: write log owner change only once

* Fix for metrics without units and reduce debugging messages for messageProcessor

* Use shortened hostname for the hostname added by the router

* Define default port for NATS

* CPUstat collector: only add unit for applicable metrics

* Add precision option to all sinks using Influx's encoder

* Add message processor to all sink documentation

* Add units to documentation of cpustat collector

---------

Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>
Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
Co-authored-by: oscarminus <me@oscarminus.de>
Co-authored-by: Michael Schwarz <schwarz@uni-paderborn.de>
2024-12-19 23:00:14 +01:00
Thomas Gruber
21646e1edf New Message processor (#118)
* Add cpu_used (all-cpu_idle) to CpustatCollector

* Update cc-metric-collector.init

* Allow selection of timestamp precision in HttpSink

* Add comment about precision requirement for cc-metric-store

* Fix for API changes in gofish@v0.15.0

* Update requirements to latest version

* Read sensors through redfish

* Update golang toolchain to 1.21

* Remove stray error check

* Update main config in configuration.md

* Update Release action to use golang 1.22 stable release, no golang RPMs anymore

* Update runonce action to use golang 1.22 stable release, no golang RPMs anymore

* New message processor to check whether a message should be dropped or manipulate it in flight

* Create a copy of message before manipulation

---------

Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>
Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
2024-12-11 19:06:50 +01:00
Thomas Gruber
8837ff4474 Merge 'develop' into 'main' (#121)
* Add cpu_used (all-cpu_idle) to CpustatCollector

* Update cc-metric-collector.init

* Allow selection of timestamp precision in HttpSink

* Add comment about precision requirement for cc-metric-store

* Fix for API changes in gofish@v0.15.0

* Update requirements to latest version

* Read sensors through redfish

* Update golang toolchain to 1.21

* Remove stray error check

* Update main config in configuration.md

* Update Release action to use golang 1.22 stable release, no golang RPMs anymore

* Update runonce action to use golang 1.22 stable release, no golang RPMs anymore

* Update README.md

Use right JSON type in configuration

* Update sink's README

* Test whether ipmitool or ipmi-sensors can be executed without errors

---------

Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>
Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
2024-11-20 16:50:12 +01:00
Thomas Gruber
8e8be09ed9 Merge latest commits from develop to main branch (#114)
* Add cpu_used (all-cpu_idle) to CpustatCollector

* Update cc-metric-collector.init

* Allow selection of timestamp precision in HttpSink

* Add comment about precision requirement for cc-metric-store

* Fix for API changes in gofish@v0.15.0

* Update requirements to latest version

* Read sensors through redfish

* Update golang toolchain to 1.21

* Remove stray error check

* Update main config in configuration.md

* Update Release action to use golang 1.22 stable release, no golang RPMs anymore

* Update runonce action to use golang 1.22 stable release, no golang RPMs anymore

* Update README.md

Use right JSON type in configuration

* Update sink's README

* Test whether ipmitool or ipmi-sensors can be executed without errors

---------

Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>
Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
2024-11-20 16:22:39 +01:00
Thomas Gruber
51dda886f1 Update runonce.yml to download golang from official sources 2024-11-14 16:31:51 +01:00
brinkcoder
c96021c7cc Fix: Create lock file if it does not exist in likwidMetric.go (#120)
Co-authored-by: exterr2f <Robert.Externbrink@rub.de>
2024-11-14 16:20:47 +01:00
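Creating the LIKWID lock file when it is missing comes down to an os.OpenFile call with O_CREATE before the collector starts watching or ownership-checking it. A sketch of the general pattern; path and permissions are assumptions, not taken from likwidMetric.go:

```go
package main

import (
	"fmt"
	"os"
)

// ensureLockFile creates the lock file if it does not exist yet, so later
// checks and the inotify watcher have something to attach to.
func ensureLockFile(path string) error {
	f, err := os.OpenFile(path, os.O_RDONLY|os.O_CREATE, 0o644)
	if err != nil {
		return fmt.Errorf("cannot create lock file %s: %w", path, err)
	}
	return f.Close()
}

func main() {
	if err := ensureLockFile("/var/run/likwid.lock"); err != nil {
		fmt.Println(err)
	}
}
```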
Thomas Gruber
8f336c1bb7 Update likwidMetric.md 2024-10-08 13:36:46 +02:00
Thomas Gruber
7d3f67f15b Update likwidMetric.md 2024-10-07 14:09:09 +02:00
Thomas Gruber
f6c94e32b3 Update README.md for sinks
Wrong JSON format: it is an object, not a list.
2024-07-15 12:38:34 +02:00
Thomas Gruber
2e7990f87d Update likwidMetric.md 2024-04-18 13:14:32 +02:00
Thomas Gruber
f496db4905 Fix job dependency in Release.yml 2023-12-04 12:26:57 +01:00
Thomas Gruber
6ab45dd3ec Merge develop into main (#109)
* Add cpu_used (all-cpu_idle) to CpustatCollector

* Update to line-protocol/v2

* Update runonce.yml with Golang 1.20

* Update fsnotify in LIKWID Collector

* Do not use a pointer to line-protocol.Encoder

* Simplify Makefile

* Use only as many arguments as required

* Allow sum function to handle non float types

* Allow values to be a slice of type float64, float32, int, int64, int32, bool

* Use generic function to simplify code

* Add missing case for type []int32

* Use generic function to compute minimum

* Use generic function to compute maximum

* Use generic function to compute average

* Add error value to sumAnyType

* Use generic function to compute median

* For older versions of Go, the slices package is not part of the standard library

* Remove old entries from go.sum

* Use simpler sort function

* Compute metrics ib_total and ib_total_pkts

* Add aggregated metrics.
Add missing units

* Update likwidMetric.go

Fixes a potential bug when `fsnotify.NewWatcher()` fails with an error

* Completely avoid memory allocations in infinibandMetric read()

* Fixed initialization: initialization and measurements should run in the same thread

* Add safe.directory to Release action

* Fix path to /usr/bin after installation

* ioutil.ReadFile is deprecated: As of Go 1.16, this function simply calls os.ReadFile

* Switch to package slices from the golang 1.21 default library

* Read file line by line

* Read file line by line

* Read file line by line

* Use CamelCase

* Use CamelCase

* Fix function getNumaDomain, it always returned 0

* Avoid type conversion by using Atoi
Avoid copying structs by using pointer access
Increase readability with CamelCase variable names

* Add caching

* Cache CpuData

* Cleanup

* Use init function to initialize cache structure to avoid multi-threading problems

* Reuse information from /proc/cpuinfo

* Avoid slice cloning. Directly use the cache

* Add DieList

* Add NumaDomainList and SMTList

* Cleanup

* Add comment

* Lookup core ID from /sys/devices/system/cpu, /proc/cpuinfo is not portable

* Lookup all information from /sys/devices/system/cpu, /proc/cpuinfo is not portable

* Correctly handle lists from /sys

* Add Simultaneous Multithreading siblings

* Replace deprecated thread_siblings_list by core_cpus_list

* Reduce number of required slices

* Allow to send total values per core, socket and node

* Send all metrics with same time stamp
calcEventsetMetrics only does computation; counter measurement is done before

* Input parameters should be float64 when evaluating to float64

* Send all metrics with same time stamp
calcGlobalMetrics only does computation; counter measurement is done before

* Remove unused variable gmresults

* Add comments

* Updated go packages

* Add build with golang 1.21

* Switch to checkout action version 4

* Switch to setup-go action version 4

* Add workflow_dispatch to allow manual run of workflow

* Add workflow_dispatch to allow manual run of workflow

* Add release build jobs to runonce.yml

* Switch to golang 1.20 for RHEL based distributions

* Use dnf to download golang

* Remove golang versions before 1.20

* Upgrade Ubuntu focal -> jammy

* Pipe golang tar package directly to tar

* Update golang version

* Fix Ubuntu version number

* Add links to ipmi and redfish receivers

* Fix http server addr format

* github.com/influxdata/line-protocol -> github.com/influxdata/line-protocol/v2/lineprotocol

* Corrected spelling

* Add some comments

* github.com/influxdata/line-protocol -> github.com/influxdata/line-protocol/v2/lineprotocol

* Allow other fields not only field "value"

* Add some basic debugging documentation

* Add some basic debugging documentation

* Use a lock for the flush timer

* Add tags in lexical order as required by AddTag()

* Only access meta data, when it gets used as tag

* Use slice to store lexically ordered key-value pairs

* Increase golang version requirement to 1.20.

* Avoid package cmp to allow builds with golang v1.20

* Fix: the error "NVML library not found" crashed cc-metric-collector with "SIGSEGV: segmentation violation"

* Add config option idle_timeout

* Add basic authentication support

* Add basic authentication support

* Avoid unnecessary memory allocations

* Add documentation for send_*_total values

* Use generic package maps to clone maps

* Reuse flush timer

* Add Influx client options

* Reuse ccTopology functionality

* Do not store unused topology information

* Add batch_size config

* Cleanup

* Use stype and stype-id for the NIC in NetstatCollector

* Wait for concurrent flush operations to finish

* Be more verbose in error messages

* Reverted previous changes.
Made the code too complex without much advantage

* Use line protocol encoder

* Go pkg update

* Stop flush timer when immediately flushing

* Fix: Corrected unlock access to batch slice

* Add config option to specify whether to use GZip compression in influx write requests

* Add asynchronous send of encoder metrics

* Use DefaultServeMux instead of github.com/gorilla/mux

* Add config option for HTTP keep-alives

* Be more strict, when parsing json

* Add config option for HTTP request timeout and Retry interval

* Allow more than one background send operation

* Fix %sysusers_create_package args (#108)

%sysusers_create_package requires two arguments. See: https://github.com/systemd/systemd/blob/main/src/rpm/macros.systemd.in#L165

* Add nfsiostat to list of collectors

---------

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
Co-authored-by: Holger Obermaier <holgerob@gmx.de>
Co-authored-by: Obihörnchen <obihoernchende@gmail.com>
2023-12-04 12:21:26 +01:00
Thomas Gruber
9df1054e32 Update stdoutSink.md 2023-10-10 11:57:13 +02:00
Thomas Gruber
e76eaa86ad Update influxAsyncSink.md 2023-10-10 11:56:42 +02:00
Thomas Gruber
262f0c6a86 Update influxSink.md 2023-10-10 11:56:02 +02:00
Thomas Gruber
b488ff76b1 Update natsSink.md 2023-10-10 11:54:30 +02:00
Thomas Roehl
e42b41f264 Add safe.directory to Release action 2023-08-29 15:39:47 +02:00
Thomas Gruber
195d0794b0 Merge develop branch into main (#106)
* Add cpu_used (all-cpu_idle) to CpustatCollector

* Update to line-protocol/v2

* Update runonce.yml with Golang 1.20

* Update fsnotify in LIKWID Collector

* Do not use a pointer to line-protocol.Encoder

* Simplify Makefile

* Use only as many arguments as required

* Allow sum function to handle non float types

* Allow values to be a slice of type float64, float32, int, int64, int32, bool

* Use generic function to simplify code

* Add missing case for type []int32

* Use generic function to compute minimum

* Use generic function to compute maximum

* Use generic function to compute average

* Add error value to sumAnyType

* Use generic function to compute median

* For older versions of Go, the slices package is not part of the standard library

* Remove old entries from go.sum

* Use simpler sort function

* Compute metrics ib_total and ib_total_pkts

* Add aggregated metrics.
Add missing units

* Update likwidMetric.go

Fixes a potential bug when `fsnotify.NewWatcher()` fails with an error

* Completely avoid memory allocations in infinibandMetric read()

* Fixed initialization: initialization and measurements should run in the same thread

---------

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
2023-08-29 14:12:49 +02:00
Thomas Roehl
3d7bb4cdd7 Merge remote-tracking branch 'origin/main' into develop 2023-03-20 15:43:59 +01:00
Thomas Gruber
94b086acf0 Develop (#102)
* InfiniBandCollector: Scale raw readings from octets to bytes

* Fix clock frequency coming from LikwidCollector and update docs

* Build DEB package for Ubuntu 20.04 for releases

* Fix memstat collector with numa_stats option

* Remove useless prints from MemstatCollector

* Replace ioutils with os and io (#87)

* Use lower case for error strings in RocmSmiCollector

* move maybe-usable-by-other-cc-components to pkg. Fix all files to use the new paths (#88)

* Add collector for monitoring the execution of cc-metric-collector itself (#81)

* Add collector to monitor execution of cc-metric-collector itself

* Register SelfCollector

* Fix import paths for moved packages

* Check if at least one CPU with frequency information was detected

* Correct typo: /proc/stats -> /proc/stat

* Update README.md

* Run ipmitool asynchronously. Improved error handling.

* Corrected some typos

* Add running average power limit (RAPL) metric collector

* Add running average power limit (RAPL) metric collector

* Do not mess up the original configuration

* * Corrected json config in numastatsMetric.md
* Added some debug output to numastatsMetric.go

* Fixed computing number of physical packages for non-continuous physical package IDs (e.g. on Ampere Altra Q80-30)

* Fix kernel panic for receiver config with missing receiver type

* Add receiver to gather remote IPMI sensor metrics

* Added config option to add ipmi-sensors command line options

* Add documentation for IPMI receiver

* Update to latest version of included go modules

* Add go.mod to App dependency

* Try to use common metric tags across hardware vendors

* Add IPMI metric: current

* remove prefix enumeration like 01-...

* Add IPMI receiver example configuration to receivers.json

* Minimal formatting changes

* Add hostlist package

* Added tests for hostlist Expand()

* Use package hostlist to expand a host list

* Use package hostlist to expand a host list

* Some servers return "ConsumedPowerWatt":65535 instead of "ConsumedPowerWatt":null

* Updated to latest package versions

* Do not allow unknown fields in JSON configuration file

* Add workflow to customize packages to docs

* NFS I/O Stats Collector (#91)

* Initial version

* Delete values for vanished mount points and  comments

* Fix for Likwid collector (#95)

* Run LIKWID in separate thread and check metric type

* Change LIKWID collector documentation to use 'type' instead of 'scope'

* Re-initialize LIKWID after one read is missing due to lock toggle

* Register cc-metric-collector at Zenodo (#93)

* Add initial version of Zenodo project file

* Orcid ID added

* Update .zenodo.json

Co-authored-by: Holger Obermaier <holger.obermaier@kit.edu>

* Update ipmiMetric.go

* Use latest LIKWID version for builds

* Update README.md

* Remove development stuff from Makefile

* Add Requires(pre) to RPM SPEC file

* Use curly brackets in packaging make targets

* Fix for LIKWID collector with separate measurement thread and inotify watcher on the LIKWID lock (#97)

* Debian does not like underscores in the version

* Update cc-metric-collector.service

Remove dependency services not used by cc-metric-collector

* Add new requirements to module file

* Use customcmd commands if they did not error. (#101)

* Merge develop and main (#99)

* InfiniBandCollector: Scale raw readings from octets to bytes

* Fix clock frequency coming from LikwidCollector and update docs

* Build DEB package for Ubuntu 20.04 for releases

* Fix memstat collector with numa_stats option

* Remove useless prints from MemstatCollector

* Replace ioutils with os and io (#87)

* Use lower case for error strings in RocmSmiCollector

* move maybe-usable-by-other-cc-components to pkg. Fix all files to use the new paths (#88)

* Add collector for monitoring the execution of cc-metric-collector itself (#81)

* Add collector to monitor execution of cc-metric-collector itself

* Register SelfCollector

* Fix import paths for moved packages

* Check if at least one CPU with frequency information was detected

* Correct typo: /proc/stats -> /proc/stat

* Update README.md

* Run ipmitool asynchronously. Improved error handling.

* Corrected some typos

* Add running average power limit (RAPL) metric collector

* Add running average power limit (RAPL) metric collector

* Do not mess up the original configuration

* * Corrected json config in numastatsMetric.md
* Added some debug output to numastatsMetric.go

* Fixed computing number of physical packages for non-continuous physical package IDs (e.g. on Ampere Altra Q80-30)

* Fix kernel panic for receiver config with missing receiver type

* Add receiver to gather remote IPMI sensor metrics

* Added config option to add ipmi-sensors command line options

* Add documentation for IPMI receiver

* Update to latest version of included go modules

* Add go.mod to App dependency

* Try to use common metric tags across hardware vendors

* Add IPMI metric: current

* remove prefix enumeration like 01-...

* Add IPMI receiver example configuration to receivers.json

* Minimal formatting changes

* Add hostlist package

* Added tests for hostlist Expand()

* Use package hostlist to expand a host list

* Use package hostlist to expand a host list

* Some servers return "ConsumedPowerWatt":65535 instead of "ConsumedPowerWatt":null

* Updated to latest package versions

* Do not allow unknown fields in JSON configuration file

* Add workflow to customize packages to docs

* NFS I/O Stats Collector (#91)

* Initial version

* Delete values for vanished mount points and  comments

* Fix for Likwid collector (#95)

* Run LIKWID in separate thread and check metric type

* Change LIKWID collector documentation to use 'type' instead of 'scope'

* Re-initialize LIKWID after one read is missing due to lock toggle

* Register cc-metric-collector at Zenodo (#93)

* Add initial version of Zenodo project file

* Orcid ID added

* Update .zenodo.json

Co-authored-by: Holger Obermaier <holger.obermaier@kit.edu>

* Update ipmiMetric.go

* Use latest LIKWID version for builds

* Update README.md

* Remove development stuff from Makefile

* Add Requires(pre) to RPM SPEC file

* Use curly brackets in packaging make targets

* Fix for LIKWID collector with separate measurement thread and inotify watcher on the LIKWID lock (#97)

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>

* Update likwid_perfgroup_to_cc_config.py

* Use customcmd commands if they did not error.

---------

Co-authored-by: Thomas Gruber <Thomas.Roehl@googlemail.com>
Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>

---------

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>
Co-authored-by: fodinabor <5982050+fodinabor@users.noreply.github.com>
2023-03-20 15:17:24 +01:00
fodinabor
ec570f884c Use customcmd commands if they did not error. (#101)
* Merge develop and main (#99)

* InfiniBandCollector: Scale raw readings from octets to bytes

* Fix clock frequency coming from LikwidCollector and update docs

* Build DEB package for Ubuntu 20.04 for releases

* Fix memstat collector with numa_stats option

* Remove useless prints from MemstatCollector

* Replace ioutils with os and io (#87)

* Use lower case for error strings in RocmSmiCollector

* move maybe-usable-by-other-cc-components to pkg. Fix all files to use the new paths (#88)

* Add collector for monitoring the execution of cc-metric-collector itself (#81)

* Add collector to monitor execution of cc-metric-collector itself

* Register SelfCollector

* Fix import paths for moved packages

* Check if at least one CPU with frequency information was detected

* Correct typo: /proc/stats -> /proc/stat

* Update README.md

* Run ipmitool asynchronously. Improved error handling.

* Corrected some typos

* Add running average power limit (RAPL) metric collector

* Add running average power limit (RAPL) metric collector

* Do not mess up the original configuration

* * Corrected json config in numastatsMetric.md
* Added some debug output to numastatsMetric.go

* Fixed computing number of physical packages for non-continuous physical package IDs (e.g. on Ampere Altra Q80-30)

* Fix kernel panic for receiver config with missing receiver type

* Add receiver to gather remote IPMI sensor metrics

* Added config option to add ipmi-sensors command line options

* Add documentation for IPMI receiver

* Update to latest version of included go modules

* Add go.mod to App dependency

* Try to use common metric tags across hardware vendors

* Add IPMI metric: current

* remove prefix enumeration like 01-...

* Add IPMI receiver example configuration to receivers.json

* Minimal formatting changes

* Add hostlist package

* Added tests for hostlist Expand()

* Use package hostlist to expand a host list

* Use package hostlist to expand a host list

* Some servers return "ConsumedPowerWatt":65535 instead of "ConsumedPowerWatt":null

* Updated to latest package versions

* Do not allow unknown fields in JSON configuration file

* Add workflow to customize packages to docs

* NFS I/O Stats Collector (#91)

* Initial version

* Delete values for vanished mount points and  comments

* Fix for Likwid collector (#95)

* Run LIKWID in separate thread and check metric type

* Change LIKWID collector documentation to use 'type' instead of 'scope'

* Re-initialize LIKWID after one read is missing due to lock toggle

* Register cc-metric-collector at Zenodo (#93)

* Add initial version of Zenodo project file

* Orcid ID added

* Update .zenodo.json

Co-authored-by: Holger Obermaier <holger.obermaier@kit.edu>

* Update ipmiMetric.go

* Use latest LIKWID version for builds

* Update README.md

* Remove development stuff from Makefile

* Add Requires(pre) to RPM SPEC file

* Use curly brackets in packaging make targets

* Fix for LIKWID collector with separate measurement thread and inotify watcher on the LIKWID lock (#97)

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>

* Update likwid_perfgroup_to_cc_config.py

* Use customcmd commands if they did not error.

---------

Co-authored-by: Thomas Gruber <Thomas.Roehl@googlemail.com>
Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>
2023-02-28 12:02:01 +01:00
Thomas Gruber
abd49a377c Update likwid_perfgroup_to_cc_config.py 2023-01-26 10:21:45 +07:00
Holger Obermaier
1ba08cd148 Add new requirements to module file 2022-12-23 11:42:46 +01:00
Thomas Gruber
94c4153a95 Update cc-metric-collector.service
Remove dependency services not used by cc-metric-collector
2022-12-20 17:48:32 +01:00
Thomas Roehl
de2e522f52 Debian does not like underscores in the version 2022-12-20 13:35:21 +01:00
Thomas Roehl
10df95e3f2 Merge branch 'main' into develop 2022-12-20 13:08:47 +01:00
Thomas Gruber
84e019c693 Merge develop and main (#99)
* InfiniBandCollector: Scale raw readings from octets to bytes

* Fix clock frequency coming from LikwidCollector and update docs

* Build DEB package for Ubuntu 20.04 for releases

* Fix memstat collector with numa_stats option

* Remove useless prints from MemstatCollector

* Replace ioutils with os and io (#87)

* Use lower case for error strings in RocmSmiCollector

* move maybe-usable-by-other-cc-components to pkg. Fix all files to use the new paths (#88)

* Add collector for monitoring the execution of cc-metric-collector itself (#81)

* Add collector to monitor execution of cc-metric-collector itself

* Register SelfCollector

* Fix import paths for moved packages

* Check if at least one CPU with frequency information was detected

* Correct typo: /proc/stats -> /proc/stat

* Update README.md

* Run ipmitool asynchronously. Improved error handling.

* Corrected some typos

* Add running average power limit (RAPL) metric collector

* Add running average power limit (RAPL) metric collector

* Do not mess up the original configuration

* * Corrected json config in numastatsMetric.md
* Added some debug output to numastatsMetric.go

* Fixed computing number of physical packages for non-continuous physical package IDs (e.g. on Ampere Altra Q80-30)

* Fix kernel panic for receiver config with missing receiver type

* Add receiver to gather remote IPMI sensor metrics

* Added config option to add ipmi-sensors command line options

* Add documentation for IPMI receiver

* Update to latest version of included go modules

* Add go.mod to App dependency

* Try to use common metric tags across hardware vendors

* Add IPMI metric: current

* remove prefix enumeration like 01-...

* Add IPMI receiver example configuration to receivers.json

* Minimal formatting changes

* Add hostlist package

* Added tests for hostlist Expand()

* Use package hostlist to expand a host list

* Use package hostlist to expand a host list

* Some servers return "ConsumedPowerWatt":65535 instead of "ConsumedPowerWatt":null

* Updated to latest package versions

* Do not allow unknown fields in JSON configuration file

* Add workflow to customize packages to docs

* NFS I/O Stats Collector (#91)

* Initial version

* Delete values for vanished mount points and  comments

* Fix for Likwid collector (#95)

* Run LIKWID in separate thread and check metric type

* Change LIKWID collector documentation to use 'type' instead of 'scope'

* Re-initialize LIKWID after one read is missing due to lock toggle

* Register cc-metric-collector at Zenodo (#93)

* Add initial version of Zenodo project file

* Orcid ID added

* Update .zenodo.json

Co-authored-by: Holger Obermaier <holger.obermaier@kit.edu>

* Update ipmiMetric.go

* Use latest LIKWID version for builds

* Update README.md

* Remove development stuff from Makefile

* Add Requires(pre) to RPM SPEC file

* Use curly brackets in packaging make targets

* Fix for LIKWID collector with separate measurement thread and inotify watcher on the LIKWID lock (#97)

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>
2022-12-20 13:08:04 +01:00
Thomas Gruber
ff0833c413 Push LIKWID collector fix into main (#98)
* InfiniBandCollector: Scale raw readings from octets to bytes

* Fix clock frequency coming from LikwidCollector and update docs

* Build DEB package for Ubuntu 20.04 for releases

* Fix memstat collector with numa_stats option

* Remove useless prints from MemstatCollector

* Replace ioutils with os and io (#87)

* Use lower case for error strings in RocmSmiCollector

* move maybe-usable-by-other-cc-components to pkg. Fix all files to use the new paths (#88)

* Add collector for monitoring the execution of cc-metric-collector itself (#81)

* Add collector to monitor execution of cc-metric-collector itself

* Register SelfCollector

* Fix import paths for moved packages

* Check if at least one CPU with frequency information was detected

* Correct typo: /proc/stats -> /proc/stat

* Update README.md

* Run ipmitool asynchronously. Improved error handling.

* Corrected some typos

* Add running average power limit (RAPL) metric collector

* Add running average power limit (RAPL) metric collector

* Do not mess up the original configuration

* * Corrected json config in numastatsMetric.md
* Added some debug output to numastatsMetric.go

* Fixed computing number of physical packages for non-continuous physical package IDs (e.g. on Ampere Altra Q80-30)

* Fix kernel panic for receiver config with missing receiver type

* Add receiver to gather remote IPMI sensor metrics

* Added config option to add ipmi-sensors command line options

* Add documentation for IPMI receiver

* Update to latest version of included go modules

* Add go.mod to App dependency

* Try to use common metric tags across hardware vendors

* Add IPMI metric: current

* remove prefix enumeration like 01-...

* Add IPMI receiver example configuration to receivers.json

* Minimal formatting changes

* Add hostlist package

* Added tests for hostlist Expand()

* Use package hostlist to expand a host list

* Use package hostlist to expand a host list

* Some servers return "ConsumedPowerWatt":65535 instead of "ConsumedPowerWatt":null

* Updated to latest package versions

* Do not allow unknown fields in JSON configuration file

* Add workflow to customize packages to docs

* NFS I/O Stats Collector (#91)

* Initial version

* Delete values for vanished mount points and  comments

* Fix for Likwid collector (#95)

* Run LIKWID in separate thread and check metric type

* Change LIKWID collector documentation to use 'type' instead of 'scope'

* Re-initialize LIKWID after one read is missing due to lock toggle

* Register cc-metric-collector at Zenodo (#93)

* Add initial version of Zenodo project file

* Orcid ID added

* Update .zenodo.json

Co-authored-by: Holger Obermaier <holger.obermaier@kit.edu>

* Update ipmiMetric.go

* Use latest LIKWID version for builds

* Update README.md

* Remove development stuff from Makefile

* Add Requires(pre) to RPM SPEC file

* Use curly brackets in packaging make targets

* Fix for LIKWID collector with separate measurement thread and inotify watcher on the LIKWID lock (#97)

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>
2022-12-20 13:04:24 +01:00
Thomas Gruber
b0423b842d Merge branch 'main' into develop 2022-12-20 13:02:31 +01:00
Thomas Gruber
6c10c9741a Fix for LIKWID collector with separate measurement thread and inotify watcher on the LIKWID lock (#97) 2022-12-20 12:59:33 +01:00
Thomas Roehl
200e6d6f42 Use curly brackets in packaging make targets 2022-12-19 12:23:43 +01:00
Thomas Roehl
89cfa861cb Add Requires(pre) to RPM SPEC file 2022-12-19 12:18:51 +01:00
Thomas Roehl
7a0e4726e1 Remove development stuff from Makefile 2022-12-19 12:17:10 +01:00
Thomas Gruber
6dbddb4450 Update README.md 2022-12-14 18:47:32 +01:00
Thomas Roehl
2bd386dae7 Use latest LIKWID version for builds 2022-12-14 17:43:41 +01:00
Thomas Gruber
162cce0fda Merge develop branch into main (#96)
* InfiniBandCollector: Scale raw readings from octets to bytes

* Fix clock frequency coming from LikwidCollector and update docs

* Build DEB package for Ubuntu 20.04 for releases

* Fix memstat collector with numa_stats option

* Remove useless prints from MemstatCollector

* Replace ioutils with os and io (#87)

* Use lower case for error strings in RocmSmiCollector

* move maybe-usable-by-other-cc-components to pkg. Fix all files to use the new paths (#88)

* Add collector for monitoring the execution of cc-metric-collector itself (#81)

* Add collector to monitor execution of cc-metric-collector itself

* Register SelfCollector

* Fix import paths for moved packages

* Check if at least one CPU with frequency information was detected

* Correct typo: /proc/stats -> /proc/stat

* Update README.md

* Run ipmitool asynchronously. Improved error handling.

* Corrected some typos

* Add running average power limit (RAPL) metric collector

* Add running average power limit (RAPL) metric collector

* Do not mess up the original configuration

* * Corrected json config in numastatsMetric.md
* Added some debug output to numastatsMetric.go

* Fixed computing number of physical packages for non-continuous physical package IDs (e.g. on Ampere Altra Q80-30)

* Fix kernel panic for receiver config with missing receiver type

* Add receiver to gather remote IPMI sensor metrics

* Added config option to add ipmi-sensors command line options

* Add documentation for IPMI receiver

* Update to latest version of included go modules

* Add go.mod to App dependency

* Try to use common metric tags across hardware vendors

* Add IPMI metric: current

* remove prefix enumeration like 01-...

* Add IPMI receiver example configuration to receivers.json

* Minimal formatting changes

* Add hostlist package

* Added tests for hostlist Expand()

* Use package hostlist to expand a host list

* Use package hostlist to expand a host list

* Some servers return "ConsumedPowerWatt":65535 instead of "ConsumedPowerWatt":null

* Updated to latest package versions

* Do not allow unknown fields in JSON configuration file

* Add workflow to customize packages to docs

* NFS I/O Stats Collector (#91)

* Initial version

* Delete values for vanished mount points and  comments

* Fix for Likwid collector (#95)

* Run LIKWID in separate thread and check metric type

* Change LIKWID collector documentation to use 'type' instead of 'scope'

* Re-initialize LIKWID after one read is missing due to lock toggle

* Register cc-metric-collector at Zenodo (#93)

* Add initial version of Zenodo project file

* Orcid ID added

* Update .zenodo.json

Co-authored-by: Holger Obermaier <holger.obermaier@kit.edu>

* Update ipmiMetric.go

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
Co-authored-by: Holger Obermaier <Holger.Obermaier@kit.edu>
2022-12-14 17:02:39 +01:00
Thomas Gruber
155d1b9acf Update ipmiMetric.go 2022-12-14 17:00:09 +01:00
Thomas Gruber
c9b9752b6a Merge branch 'main' into develop 2022-12-14 16:58:12 +01:00
Thomas Gruber
3c8a5e434f Register cc-metric-collector at Zenodo (#93)
* Add initial version of Zenodo project file

* Orcid ID added

* Update .zenodo.json

Co-authored-by: Holger Obermaier <holger.obermaier@kit.edu>
2022-12-14 16:53:44 +01:00
Thomas Gruber
efd4f5feb4 Fix for Likwid collector (#95)
* Run LIKWID in separate thread and check metric type

* Change LIKWID collector documentation to use 'type' instead of 'scope'

* Re-initialize LIKWID after one read is missing due to lock toggle
2022-12-14 16:53:08 +01:00
Thomas Gruber
a1f4dd6a6c NFS I/O Stats Collector (#91)
* Initial version

* Delete values for vanished mount points and  comments
2022-12-14 16:52:53 +01:00
Thomas Roehl
d55e579195 Add workflow to customize packages to docs 2022-12-14 16:50:49 +01:00
Holger Obermaier
b78e83b055 Do not allow unknown fields in JSON configuration file 2022-12-08 14:15:56 +01:00
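In Go this is done by switching the JSON decoder to strict mode with DisallowUnknownFields, so a misspelled option name fails loudly instead of being ignored. A minimal example with a made-up config struct:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

type sinkConfig struct {
	Type string `json:"type"`
	Host string `json:"host"`
}

func main() {
	// "hostname" is not a known field, so decoding returns an error instead
	// of silently dropping the typo.
	raw := []byte(`{"type": "http", "hostname": "localhost"}`)
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.DisallowUnknownFields()
	var cfg sinkConfig
	if err := dec.Decode(&cfg); err != nil {
		fmt.Println("config error:", err)
	}
}
```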
Holger Obermaier
56b41a9e57 Updated to latest package versions 2022-12-06 14:12:21 +01:00
Holger Obermaier
ae98807ace Some servers return "ConsumedPowerWatt":65535 instead of "ConsumedPowerWatt":null 2022-12-06 13:40:22 +01:00
Holger Obermaier
31a8e63d72 Use package hostlist to expand a host list 2022-12-01 09:48:34 +01:00
Holger Obermaier
6f1f33f3a5 Use package hostlist to expand a host list 2022-12-01 09:25:40 +01:00
Holger Obermaier
a29f0c7e3b Added tests for hostlist Expand() 2022-11-29 17:21:09 +01:00
Holger Obermaier
4fb6ac0140 Add hostlist package 2022-11-29 14:04:31 +01:00
Holger Obermaier
5918f96fd8 Minimal formatting changes 2022-11-24 09:48:44 +01:00
Holger Obermaier
8cb87a2165 Add IPMI receiver example configuration to receivers.json 2022-11-23 10:37:31 +01:00
Holger Obermaier
3e91a37dee remove prefix enumeration like 01-... 2022-11-22 17:02:29 +01:00
Holger Obermaier
ed68baeada Add IPMI metric: current 2022-11-22 15:32:41 +01:00
Holger Obermaier
888db31dbf Try to use common metric tags across hardware vendors 2022-11-22 15:09:56 +01:00
Holger Obermaier
c938d32629 Add go.mod to App dependency 2022-11-22 09:45:29 +01:00
Holger Obermaier
d5daf54d4f Update to latest version of included go modules 2022-11-22 09:42:04 +01:00
Holger Obermaier
18bffd7c14 Add documentation for IPMI receiver 2022-11-21 13:58:30 +01:00
Holger Obermaier
bd0105b370 Added config option to add ipmi-sensors command line options 2022-11-21 13:02:46 +01:00
Holger Obermaier
b1a8674c4c Add receiver to gather remote IPMI sensor metrics 2022-11-18 16:55:11 +01:00
Holger Obermaier
234ad3c54e Fix kernel panic for receiver config with missing receiver type 2022-11-17 11:33:13 +01:00
Holger Obermaier
7bb80780e0 Fixed computing number of physical packages for non-continuous physical package IDs (e.g. on Ampere Altra Q80-30) 2022-11-16 14:58:11 +01:00
Holger Obermaier
e66d52bb32 * Corrected json config in numastatsMetric.md
* Added some debug output to numastatsMetric.go
2022-11-16 14:10:25 +01:00
Holger Obermaier
9840d0193d Do not mess up the original configuration 2022-11-16 09:37:40 +01:00
Holger Obermaier
ce7eef8d30 Add running average power limit (RAPL) metric collector 2022-11-15 17:15:27 +01:00
Holger Obermaier
92e45ca62c Add running average power limit (RAPL) metric collector 2022-11-15 17:09:26 +01:00
Holger Obermaier
fd10a279fc Corrected some typos 2022-11-14 09:35:02 +01:00
Holger Obermaier
9e63d0ea59 Run ipmitool asynchronously. Improved error handling. 2022-11-11 16:16:14 +01:00
Thomas Gruber
f0da07310b Update README.md 2022-11-04 14:53:08 +01:00
Thomas Gruber
76bb033a88 Update README.md 2022-11-04 14:52:09 +01:00
Thomas Gruber
0f35469168 Update httpSink.md 2022-11-04 14:52:05 +01:00
Thomas Roehl
e79601e2e8 Try fixing DEB package 2022-10-13 16:49:58 +02:00
Thomas Roehl
317d36c9dd Try fixing DEB package 2022-10-13 16:46:54 +02:00
Thomas Roehl
821d104656 Try fixing DEB package 2022-10-13 16:42:04 +02:00
Holger Obermaier
deb1bcfa2f Correct typo: /proc/stats -> /proc/stat 2022-10-13 15:01:39 +02:00
Holger Obermaier
7a67d5e25f Check if at least one CPU with frequency information was detected 2022-10-13 14:53:55 +02:00
Thomas Gruber
be20f956c2 Add latest development to main branch (#89)
* InfiniBandCollector: Scale raw readings from octets to bytes

* Fix clock frequency coming from LikwidCollector and update docs

* Build DEB package for Ubuntu 20.04 for releases

* Fix memstat collector with numa_stats option

* Remove useless prints from MemstatCollector

* Replace ioutils with os and io (#87)

* Use lower case for error strings in RocmSmiCollector

* move maybe-usable-by-other-cc-components to pkg. Fix all files to use the new paths (#88)

* Add collector for monitoring the execution of cc-metric-collector itself (#81)

* Add collector to monitor execution of cc-metric-collector itself

* Register SelfCollector

* Fix import paths for moved packages
2022-10-10 12:23:51 +02:00
Thomas Gruber
9ae0806aa9 Add collector for monitoring the execution of cc-metric-collector itself (#81)
* Add collector to monitor execution of cc-metric-collector itself

* Register SelfCollector

* Fix import paths for moved packages
2022-10-10 12:18:52 +02:00
Thomas Gruber
4bd71224df move maybe-usable-by-other-cc-components to pkg. Fix all files to use the new paths (#88) 2022-10-10 11:53:11 +02:00
Thomas Roehl
6bf3bfd10a Use lower case for error strings in RocmSmiCollector 2022-10-09 17:05:49 +02:00
Thomas Gruber
0fbff00996 Replace ioutils with os and io (#87) 2022-10-09 17:03:38 +02:00
Thomas Roehl
8849824ba9 Remove useless prints from MemstatCollector 2022-10-09 02:56:15 +02:00
Thomas Roehl
ed511b7c09 Fix memstat collector with numa_stats option 2022-09-28 15:09:36 +02:00
Thomas Roehl
a0acf01dc3 Build DEB package for Ubuntu 20.04 for releases 2022-09-28 12:19:36 +02:00
Thomas Gruber
5b6a2b9018 Merge latest fixed from develop to main (#85)
* InfiniBandCollector: Scale raw readings from octets to bytes

* Fix clock frequency coming from LikwidCollector and update docs
2022-09-12 12:54:40 +02:00
Thomas Roehl
58461f1f72 Fix clock frequency coming from LikwidCollector and update docs 2022-09-09 20:01:21 +02:00
Thomas Röhl
c09d8fb118 InfiniBandCollector: Scale raw readings from octets to bytes 2022-09-09 19:27:20 +02:00
Thomas Roehl
3438972237 Merge branch 'develop' into main 2022-09-07 15:11:26 +02:00
oscarminus
8a3446a596 cpustatMetric.go: Use derived values instead of absolute values (#83)
* cpustatMetric.go: Use derived values instead of absolute values

  The values in /proc/stat are absolute counters that accumulate since
  boot. To obtain a CPU utilization, the changes in the counters must be
  derived over time. Using only the absolute values means that changes in
  utilization are no longer visible once the counters have grown large.

* Add new collector for /proc/schedstat

  The `schedstat` collector reads data from /proc/schedstat and calculates
  a load value, separated by hwthread. This might be useful to detect bad
  cpu pinning on shared nodes etc.

Co-authored-by: Michael Schwarz <post@michael-schwarz.name>
2022-09-07 14:13:06 +02:00
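The derivation described in this commit amounts to differencing the jiffy counters of two successive /proc/stat readings and dividing by the total delta. A compact sketch of that calculation; the field selection is simplified and not the collector's actual code:

```go
package main

import "fmt"

// cpuTimes holds the jiffy counters of one /proc/stat "cpu" line.
type cpuTimes struct {
	user, nice, system, idle, iowait, irq, softirq, steal uint64
}

func (t cpuTimes) total() uint64 {
	return t.user + t.nice + t.system + t.idle + t.iowait + t.irq + t.softirq + t.steal
}

// utilization derives the busy fraction from the change between two readings;
// the absolute counters only ever grow, so only the delta is meaningful.
func utilization(prev, curr cpuTimes) float64 {
	dTotal := curr.total() - prev.total()
	if dTotal == 0 {
		return 0
	}
	dIdle := (curr.idle + curr.iowait) - (prev.idle + prev.iowait)
	return 1.0 - float64(dIdle)/float64(dTotal)
}

func main() {
	prev := cpuTimes{user: 1000, system: 500, idle: 8500}
	curr := cpuTimes{user: 1400, system: 600, idle: 9000}
	fmt.Printf("cpu_used: %.1f%%\n", 100*utilization(prev, curr)) // 50.0%
}
```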
oscarminus
88fabc2e83 cpustatMetric.go: Use derived values instead of absolute values (#83)
* cpustatMetric.go: Use derived values instead of absolute values

  The values in /proc/stat are absolute counters that accumulate since
  boot. To obtain a CPU utilization, the changes in the counters must be
  derived over time. Using only the absolute values means that changes in
  utilization are no longer visible once the counters have grown large.

* Add new collector for /proc/schedstat

  The `schedstat` collector reads data from /proc/schedstat and calculates
  a load value, separated by hwthread. This might be useful to detect bad
  cpu pinning on shared nodes etc.

Co-authored-by: Michael Schwarz <post@michael-schwarz.name>
2022-09-07 14:09:29 +02:00
Holger Obermaier
503705d442 Allow multiple hosts to share the same client configuration 2022-08-26 11:55:53 +02:00
Holger Obermaier
7ccbf1ebe2 Allow global configuration for redfish devices username, password and endpoint. 2022-08-25 16:47:44 +02:00
Holger Obermaier
60ef0ed116 Fix for servers, which do not set status.state for thermals or powercontrols 2022-08-17 17:37:24 +02:00
Holger Obermaier
a8beec29cc Skip non existing processor metrics URLs 2022-08-17 15:11:21 +02:00
Holger Obermaier
0dd430e7e9 Refactor redfishReceiver. 2022-08-16 15:14:20 +02:00
Holger Obermaier
f7b39d027b url.JoinPath requires go 1.19. For now stay compatible with go 1.18 2022-08-15 15:25:59 +02:00
Holger Obermaier
eaf8b1941d ioutils is deprecated 2022-08-15 15:25:20 +02:00
Holger Obermaier
62f6e4151a Added readProcessorMetrics to read thermal and power metrics per CPU / GPU 2022-08-15 15:11:29 +02:00
Holger Obermaier
acd092a977 Add redfish receiver documentation 2022-08-11 15:36:18 +02:00
Holger Obermaier
6eb8e3a1f5 Corrected comments. Added additional check 2022-08-10 17:00:47 +02:00
Holger Obermaier
8ba33568a6 Add reading of fan speeds 2022-08-10 16:24:21 +02:00
Holger Obermaier
2ca0359744 Add support to read thermal metrics 2022-08-10 10:30:59 +02:00
Thomas Roehl
a2f0bc37d4 Add runonce job for Golang 1.19 2022-08-03 17:06:28 +02:00
Thomas Roehl
cfcde9b23b Mark code parts as bash 2022-07-28 16:25:32 +02:00
Thomas Roehl
c7d692e27f Use newlines in install lines for readability 2022-07-28 16:24:21 +02:00
Thomas Roehl
c312093d2b Add --owner and --group to install lines 2022-07-28 16:22:39 +02:00
Thomas Roehl
7438b9d245 Add rules files for DEB package 2022-07-27 18:08:15 +02:00
Thomas Roehl
32bb9c5fc0 Update ccMetric README and FromMetric copy 2022-07-27 18:06:41 +02:00
Thomas Roehl
f5ad45e49f Fix old entries in sample scripts 2022-07-27 17:52:36 +02:00
Thomas Roehl
ea33d45d8e Fix link to docs of NumastatsCollector 2022-07-27 17:50:15 +02:00
Thomas Roehl
251ae8e879 Update link to cc-specifications repo with line protocol 2022-07-27 17:46:27 +02:00
Thomas Roehl
edd33d5810 Add docs to README 2022-07-27 17:45:13 +02:00
Thomas Roehl
88b3fe1e41 Add some documentation about building 2022-07-27 17:38:51 +02:00
Thomas Roehl
96b4a2aec1 Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-07-26 14:59:27 +02:00
Thomas Roehl
4b5c2f4e37 Some introduction to CC Metric Collector and the other components 2022-07-26 14:59:08 +02:00
Holger Obermaier
f818bf4c11 Read durations as string from json config 2022-07-22 17:48:11 +02:00
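Reading durations as strings ("30s", "2m", ...) is typically done by wrapping time.Duration in a type whose UnmarshalJSON calls time.ParseDuration. A minimal sketch of that pattern with an illustrative config struct:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// jsonDuration accepts "30s", "2m", ... in JSON instead of a bare nanosecond count.
type jsonDuration struct {
	time.Duration
}

func (d *jsonDuration) UnmarshalJSON(data []byte) error {
	var s string
	if err := json.Unmarshal(data, &s); err != nil {
		return err
	}
	parsed, err := time.ParseDuration(s)
	if err != nil {
		return err
	}
	d.Duration = parsed
	return nil
}

type receiverConfig struct {
	Interval jsonDuration `json:"interval"`
}

func main() {
	var cfg receiverConfig
	if err := json.Unmarshal([]byte(`{"interval": "30s"}`), &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Interval.Duration) // 30s
}
```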
Holger Obermaier
aedc1be277 Set HTTP timeout for redfish device connections 2022-07-22 12:06:02 +02:00
Holger Obermaier
c75d394e11 Corrected json syntax for interval and duration 2022-07-14 16:07:45 +02:00
Holger Obermaier
bae36473f7 Minimum requirement: golang version >= 1.18 2022-07-14 13:55:48 +02:00
Thomas Gruber
b3c27e0af5 Merge latest development changes (#80)
* Cleanup: Remove unused code

* Use Golang duration parser for 'interval' and 'duration'
 in main config

* Update handling of LIKWID headers. Download only if not already present in the system. Fixes #73

* Units with cc-units (#64)

* Add option to normalize units with cc-unit

* Add unit conversion to router

* Add option to change unit prefix in the router

* Add to MetricRouter README

* Add order of operations in router to README

* Use second add_tags/del_tags only if metric gets renamed

* Skip disks in DiskstatCollector that have size=0

* Check readability of sensor files in TempCollector

* Fix for --once option

* Rename `cpu` type to `hwthread` (#69)

* Rename 'cpu' type to 'hwthread' to avoid naming clashes with MetricStore and CC-Webfrontend

* Collectors in parallel (#74)

* Provide info to CollectorManager whether the collector can be executed in parallel with others

* Split serial and parallel collectors. Read in parallel first

* Update NvidiaCollector with new metrics, MIG and NvLink support (#75)

* CC topology module update (#76)

* Rename CPU to hardware thread, write some comments

* Do renaming in other parts

* Remove CpuList and SocketList function from metricCollector. Available in ccTopology

* Option to use MIG UUID as subtype-id in NvidiaCollector

* Option to use MIG slice name as subtype-id in NvidiaCollector

* MetricRouter: Fix JSON in README

* Fix for Github Action to really use the selected version

* Remove Ganglia installation in runonce Action and add Go 1.18

* Fix daemon options in init script

* Add separate go.mod files to use it with deprecated 1.16

* Minor updates for Makefiles

* fix string comparison

* AMD ROCm SMI collector (#77)

* Add collector for AMD ROCm SMI metrics

* Fix import path

* Fix imports

* Remove Board Number

* store GPU index explicitly

* Remove board number from description

* Use http instead of ftp to download likwid

* Fix serial number in rocmCollector

* Improved http sink (#78)

* automatic flush in NatsSink

* tweak default options of HttpSink

* shorter crit. section and retries for HttpSink

* fix error handling

* Remove file added by mistake.

* Use http instead of ftp to download likwid

* Fix serial number in rocmCollector

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>

* Fix: When sending metrics failed the batch size could be exceeded

* Improved dropping of metrics that failed to send

* Add memstats and topprocs metric

* Updated to latest modules

* Check that at least one sink is running

* Add drop rate, when send buffer is full

* Allow only one timer at a time

* Use mutex to ensure only one flush timer is running

* Fix for NvidiaCollector when devices are not in MiG mode

* Remove Golang versions 1.16 and 1.17 from Action. Latest commits require Golang 1.18

* Use Golang 1.18 in Release action to build RPMs

* Change unit of CpufreqCollector to Hz. That's what the sysfs outputs

* Make wget quiet in Release action to reduce log size

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
Co-authored-by: Lou <lou.knauer@gmx.de>
2022-07-13 10:09:49 +02:00
Thomas Roehl
09b740b82e Make wget quiet in Release action to reduce log size 2022-07-12 12:37:10 +02:00
Thomas Roehl
2adf9484a3 Redo fix for NvidiaCollector and MiG. Got lost somehow 2022-07-12 12:31:24 +02:00
Thomas Roehl
27f17b88af Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-07-12 11:58:50 +02:00
Thomas Roehl
b2bc7b95d3 Change unit of CpufreqCollector to Hz. That's what the sysfs outputs 2022-07-12 11:58:37 +02:00
Thomas Gruber
f79b7b5e2b Merge branch 'main' into develop 2022-07-12 11:36:46 +02:00
Thomas Roehl
b16343e5e2 Use Golang 1.18 in Release action to build RPMs 2022-07-12 11:30:27 +02:00
Thomas Roehl
4fa37a58f2 Remove Golang versions 1.16 and 1.17 from Action. Latest commits require Golang 1.18 2022-07-11 16:01:45 +02:00
Thomas Roehl
addbfd40a1 Fix for NvidiaCollector when devices are not in MiG mode 2022-07-11 13:05:15 +02:00
Holger Obermaier
04819d9db2 Use mutex to ensure only one flush timer is running 2022-06-24 09:08:20 +02:00
Holger Obermaier
9ccc5a6ca7 Allow only one timer at a time 2022-06-23 21:53:02 +02:00
Holger Obermaier
b7dcbaebcf Add drop rate, when send buffer is full 2022-06-23 18:27:03 +02:00
Holger Obermaier
a3ac8f2ead Check that at least one sink is running 2022-06-23 15:44:02 +02:00
Holger Obermaier
8e7143a20a Updated to latest modules 2022-06-23 11:49:18 +02:00
Holger Obermaier
3a10f7cfdb Add memstats and topprocs metric 2022-06-23 11:44:06 +02:00
Holger Obermaier
0ca6d1a794 Improved dropping of metrics failed to send 2022-06-21 07:59:24 +02:00
Holger Obermaier
580d21d8bb Fix: When sending metrics failed the batch size could be exceeded 2022-06-20 18:06:27 +02:00
Thomas Roehl
31a38bc17d Update release action 2022-06-09 14:36:25 +02:00
Thomas Roehl
dbdec1eab8 Merge branch 'main' of github.com:ClusterCockpit/cc-metric-collector into main 2022-06-09 12:46:47 +02:00
Thomas Gruber
0d31ec481b Update Release.yml 2022-06-09 12:42:11 +02:00
Thomas Roehl
e22c3287e9 Merge branch 'main' of github.com:ClusterCockpit/cc-metric-collector into main 2022-06-08 15:26:05 +02:00
Thomas Gruber
8d85bd53f1 Merge latest development changes to main branch (#79)
* Cleanup: Remove unused code

* Use Golang duration parser for 'interval' and 'duration'
 in main config

* Update handling of LIKWID headers. Download only if not already present in the system. Fixes #73

* Units with cc-units (#64)

* Add option to normalize units with cc-unit

* Add unit conversion to router

* Add option to change unit prefix in the router

* Add to MetricRouter README

* Add order of operations in router to README

* Use second add_tags/del_tags only if metric gets renamed

* Skip disks in DiskstatCollector that have size=0

* Check readability of sensor files in TempCollector

* Fix for --once option

* Rename `cpu` type to `hwthread` (#69)

* Rename 'cpu' type to 'hwthread' to avoid naming clashes with MetricStore and CC-Webfrontend

* Collectors in parallel (#74)

* Provide info to CollectorManager whether the collector can be executed in parallel with others

* Split serial and parallel collectors. Read in parallel first

* Update NvidiaCollector with new metrics, MIG and NvLink support (#75)

* CC topology module update (#76)

* Rename CPU to hardware thread, write some comments

* Do renaming in other parts

* Remove CpuList and SocketList function from metricCollector. Available in ccTopology

* Option to use MIG UUID as subtype-id in NvidiaCollector

* Option to use MIG slice name as subtype-id in NvidiaCollector

* MetricRouter: Fix JSON in README

* Fix for Github Action to really use the selected version

* Remove Ganglia installation in runonce Action and add Go 1.18

* Fix daemon options in init script

* Add separate go.mod files to use it with deprecated 1.16

* Minor updates for Makefiles

* fix string comparison

* AMD ROCm SMI collector (#77)

* Add collector for AMD ROCm SMI metrics

* Fix import path

* Fix imports

* Remove Board Number

* store GPU index explicitly

* Remove board number from description

* Use http instead of ftp to download likwid

* Fix serial number in rocmCollector

* Improved http sink (#78)

* automatic flush in NatsSink

* tweak default options of HttpSink

* shorter crit. section and retries for HttpSink

* fix error handling

* Remove file added by mistake.

* Use http instead of ftp to download likwid

* Fix serial number in rocmCollector

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
Co-authored-by: Lou <lou.knauer@gmx.de>
2022-06-08 15:25:40 +02:00
Lou
b732b2d739 Improved http sink (#78)
* automatic flush in NatsSink

* tweak default options of HttpSink

* shorter crit. section and retries for HttpSink

* fix error handling

* Remove file added by mistake.

* Use http instead of ftp to download likwid

* Fix serial number in rocmCollector

Co-authored-by: Thomas Roehl <thomas.roehl@fau.de>
2022-06-08 14:12:35 +02:00
Thomas Roehl
bef807dd44 Fix serial number in rocmCollector 2022-06-05 15:53:39 +02:00
Thomas Roehl
659d0115c0 Use http instead of ftp to download likwid 2022-06-05 15:50:04 +02:00
Thomas Gruber
e13695307f AMD ROCm SMI collector (#77)
* Add collector for AMD ROCm SMI metrics

* Fix import path

* Fix imports

* Remove Board Number

* store GPU index explicitly

* Remove board number from description
2022-05-25 15:55:43 +02:00
Thomas Roehl
4ed07cad77 fix string comparison 2022-05-25 15:48:55 +02:00
Thomas Roehl
ad5dbd85ea Minor updates for Makefiles 2022-05-25 15:45:21 +02:00
Thomas Roehl
132ebabd45 Add separate go.mod files to use it with deprecated 1.16 2022-05-25 15:35:11 +02:00
Thomas Roehl
f8d91d9cf1 Fix daemon options in init script 2022-05-25 15:16:01 +02:00
Thomas Roehl
cc84a94647 Remove Ganglia installation in runonce Action and add Go 1.18 2022-05-23 17:37:14 +02:00
Thomas Roehl
838b8d824d Fix for Github Action to really use the selected version 2022-05-23 16:50:58 +02:00
Thomas Roehl
7ddc889f06 MetricRouter: Fix JSON in README 2022-05-20 16:06:54 +02:00
Thomas Roehl
500685672b Option to use MIG slice name as subtype-id in NvidiaCollector 2022-05-13 15:26:47 +02:00
Thomas Roehl
d4c89a4206 Option to use MIG UUID as subtype-id in NvidiaCollector 2022-05-13 14:34:32 +02:00
Thomas Gruber
826f364772 CC topology module update (#76)
* Rename CPU to hardware thread, write some comments

* Do renaming in other parts

* Remove CpuList and SocketList function from metricCollector. Available in ccTopology
2022-05-13 14:28:07 +02:00
Thomas Gruber
5df550b208 Update NvidiaCollector with new metrics, MIG and NvLink support (#75) 2022-05-13 14:11:55 +02:00
Thomas Gruber
5c34805918 Collectors in parallel (#74)
* Provide info to CollectorManager whether the collector can be executed in parallel with others

* Split serial and parallel collectors. Read in parallel first
2022-05-13 14:10:39 +02:00
Thomas Gruber
1db5f3b29a Rename cpu type to hwthread (#69)
* Rename 'cpu' type to 'hwthread' to avoid naming clashes with MetricStore and CC-Webfrontend
2022-05-13 14:09:45 +02:00
Thomas Roehl
0623691bab Fix for --once option 2022-05-13 13:50:19 +02:00
Thomas Roehl
9886f14d14 Check readability of sensor files in TempCollector 2022-05-13 13:32:54 +02:00
Thomas Roehl
857903be2b Skip disks in DiskstatCollector that have size=0 2022-05-13 13:31:22 +02:00
Thomas Gruber
80d92d6d28 Units with cc-units (#64)
* Add option to normalize units with cc-unit

* Add unit conversion to router

* Add option to change unit prefix in the router

* Add to MetricRouter README

* Add order of operations in router to README

* Use second add_tags/del_tags only if metric gets renamed
2022-05-13 13:30:02 +02:00
Thomas Roehl
8068e59818 Update handling of LIKWID headers. Download only if not already present in the system. Fixes #73 2022-05-13 13:14:47 +02:00
Thomas Roehl
8abedac0fe Use Golang duration parser for 'interval' and 'duration'
in main config
2022-05-13 12:33:33 +02:00
Holger Obermaier
ee4bd558f1 Cleanup: Remove unused code 2022-05-06 11:44:57 +02:00
Holger Obermaier
186a62a86b Fix: influx sink ignores config batch_size.
Feature: Add Redfish receiver
2022-05-04 13:00:56 +02:00
Holger Obermaier
e098c33179 Add some golang debug options 2022-05-04 12:48:46 +02:00
Thomas Roehl
38d4e0a730 Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-05-04 11:54:55 +02:00
Thomas Roehl
54d14519ca Skip mount points in DiskstatCollector if statfs() call does not work (bind mounts, ...) 2022-05-04 11:54:34 +02:00
Holger Obermaier
c35ac9dba8 Flush if batch size is reached 2022-05-04 11:28:06 +02:00
Holger Obermaier
c019f8e7ad Reuse tags and meta data tags 2022-05-03 17:55:33 +02:00
Holger Obermaier
fb6f6a4daa Fix GPFS collector last state handling 2022-05-02 16:57:19 +02:00
Holger Obermaier
9d6d0dbd93 Delete empty tags and meta data tags 2022-04-20 14:39:26 +02:00
Holger Obermaier
c2d4272fdf Clear workerInput channel after done event 2022-04-20 12:36:45 +02:00
Holger Obermaier
8c73095548 Allow to shutdown redfish receiver during metric read 2022-04-20 09:58:02 +02:00
Holger Obermaier
31c5c89a5a Fix: Close done channel 2022-04-19 14:01:23 +02:00
Holger Obermaier
bf9c7e1830 Update requirements 2022-04-19 12:15:51 +02:00
Holger Obermaier
48d34bf564 Adopt sinks.json for new meta_as_tags usage 2022-04-19 12:06:53 +02:00
Holger Obermaier
a1d85fa886 Add redfish receiver 2022-04-19 12:05:03 +02:00
Holger Obermaier
96ee16398e Removed unused done channel and wg wait group 2022-04-19 11:53:11 +02:00
Holger Obermaier
e7b8088c41 Extended go routine use case in sample receiver 2022-04-19 11:42:46 +02:00
Thomas Roehl
017cd58247 Updating page for LikwidCollector 2022-04-05 10:57:09 +02:00
Thomas Roehl
36dd440864 Merge branch 'develop' into main 2022-04-04 15:16:33 +02:00
Thomas Roehl
7b098e0b1b Fix for missing metrics in LikwidCollector if hwthread is inactive 2022-04-04 15:16:11 +02:00
Thomas Roehl
229a57b16a Merge branch 'develop' into main 2022-04-04 11:49:33 +02:00
Thomas Roehl
70a9530aba Set WriteFailedCallback to get some error message 2022-04-04 11:48:54 +02:00
Thomas Roehl
2f0b6057ca Merge branch 'develop' into main
- MetricRouter: Fix interval_timestamp option
- InfluxSink & InfluxAsyncSink: Add own flush mechanism
- InfluxSink & InfluxAsyncSink: Use default options, overwrite if
configured otherwise
- InfinibandCollector: Add units to metrics
- LikwidCollector: Fix for dying accessdaemon
2022-04-04 02:59:42 +02:00
Thomas Roehl
69f7c19659 InfluxAsyncSink: Add custom flush mechanism 2022-04-04 02:56:23 +02:00
Thomas Roehl
ecdb4c1bcf Add debug message when updating interval_timestep 2022-04-04 02:55:44 +02:00
Thomas Roehl
4d5b1adbc8 Fix for interval_timestamp option 2022-04-04 02:26:04 +02:00
Thomas Roehl
28348bd108 InfluxSink: Use batch&flush logic from HttpSink 2022-04-01 18:37:45 +02:00
Thomas Roehl
a3b9d8a90b HttpSink: Use sink name in error outputs 2022-04-01 18:36:54 +02:00
Thomas Roehl
7e43e9171e Use default options. Overwrite if anything is configured differently. Use seconds as precision 2022-04-01 17:26:56 +02:00
Thomas Roehl
5d25a7bf12 Add units to InfiniBandCollector 2022-04-01 17:14:26 +02:00
Thomas Roehl
83b4343310 Likwid receives signal at first Read, check when re-initializing 2022-04-01 17:10:31 +02:00
Thomas Roehl
f1d3cabdc6 Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-04-01 12:45:25 +02:00
Thomas Röhl
4763733d8d Merge branch 'develop' into main 2022-03-31 11:57:19 +02:00
Thomas Gruber
2a014b6fba Read unit of values from /proc/meminfo (#68) 2022-03-31 11:56:31 +02:00
Thomas Röhl
16e898ecca Merge branch 'develop' into main 2022-03-31 11:47:02 +02:00
Thomas Roehl
50479f9325 Move all LIKWID related stuff to late initialization routine 2022-03-24 18:12:23 +01:00
Thomas Roehl
e0e91844bc Use late initialization of LIKWID and catch access daemon death. Fixes #70 and fixes #71. 2022-03-24 17:56:51 +01:00
Thomas Roehl
296225f3a8 Always export all metrics in NfsCollectors 2022-03-24 13:50:35 +01:00
Thomas Roehl
43bcce6fb5 Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-03-22 18:05:38 +01:00
Thomas Roehl
622e94ae0e Fix DieList() if system does not support dies. Explicitly set entries in CpuData list 2022-03-22 15:58:10 +01:00
Thomas Roehl
c506114480 Add processing order to MetricRouter README and add missing options 2022-03-18 12:29:00 +01:00
Thomas Roehl
657543dded Ensure max_forward is at least 1 2022-03-18 12:28:52 +01:00
Thomas Roehl
4851382ad7 Merge branch 'develop' into main 2022-03-16 19:08:13 +01:00
Thomas Roehl
b66fdd1436 Add missing socket->thread_id map for LikwidCollector 2022-03-16 19:04:39 +01:00
Thomas Gruber
3f76947f54 Merge latest developments into main (#67)
* Update configuration.md

Add an additional receiver to have better alignment of components

* Change default GpfsCollector command to `mmpmon` (#53)

* Set default cmd to 'mmpmon'

* Reuse looked up path

* Cast const to string

* Just download LIKWID to get the headers (#54)

* Just download LIKWID to get the headers

* Remove perl-Data-Dumper from BuildRequires, only required by LIKWID build

* Add HttpReceiver as counterpart to the HttpSink (#49)

* Use GBytes as unit for large memory numbers

* Make maxForward configurable, save old name in meta in rename metrics and make the hostname tag key configurable

* Single release action (#55)

Building all RPMs and releasing in a single workflow

* Makefile target to build binary-only Debian packages (#61)

* Add 'install' and 'DEB' make targets to build binary-only Debian packages

* Add control file for DEB builds

* Use a single line for bash loop in make clean

* Add config options for retry intervals of InfluxDB clients (#59)

* Refactoring of LikwidCollector and metric units (#62)

* Reduce complexity of LikwidCollector and allow metric units

* Add unit to LikwidCollector docu and fix some typos

* Make library path configurable

* Use old metric name in Ganglia if rename has happened in the router (#60)

* Use old metric name if rename has happened in the router

* Also check for Ganglia renames for the oldname

* Derived metrics (#57)

* Add time-based derived values (e.g. bandwidth) to some collectors

* Add documentation

* Add comments

* Fix: Only compute rates with a valid previous state

* Only compute rates with a valid previous state

* Define const values for net/dev fields

* Set default config values

* Add comments

* Refactor: Consolidate data structures

* Refactor: Consolidate data structures

* Refactor: Avoid struct deep copy

* Refactor: Avoid redundant tag maps

* Refactor: Use int64 type for absolute values

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>

* Simplified iota usage

* Move unit tag to meta data tags

* Derived metrics (#65)

* Add time-based derived values (e.g. bandwidth) to some collectors

* Add documentation

* Add comments

* Fix: Only compute rates with a valid previous state

* Only compute rates with a valid previous state

* Define const values for net/dev fields

* Set default config values

* Add comments

* Refactor: Consolidate data structures

* Refactor: Consolidate data structures

* Refactor: Avoid struct deep copy

* Refactor: Avoid redundant tag maps

* Refactor: Use int64 type for absolute values

* Update LustreCollector

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>

* Meta to tags list and map for sinks (#63)

* Change ccMetric->Influx functions

* Use a meta_as_tags string list in config but create a lookup map afterwards

* Add meta as tag logic to sampleSink

* Fix staticcheck warnings (#66)

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
2022-03-15 16:41:11 +01:00
Thomas Gruber
c182d295f4 Fix staticcheck warnings (#66) 2022-03-15 16:38:20 +01:00
Thomas Roehl
beebcd7145 Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-03-15 16:16:39 +01:00
Thomas Gruber
57629a2e0a Meta to tags list and map for sinks (#63)
* Change ccMetric->Influx functions

* Use a meta_as_tags string list in config but create a lookup map afterwards

* Add meta as tag logic to sampleSink
2022-03-15 16:16:26 +01:00
Thomas Roehl
082eea525a Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-03-15 16:10:41 +01:00
Thomas Gruber
aa1afd745e Derived metrics (#65)
* Add time-based derived values (e.g. bandwidth) to some collectors

* Add documentation

* Add comments

* Fix: Only compute rates with a valid previous state

* Only compute rates with a valid previous state

* Define const values for net/dev fields

* Set default config values

* Add comments

* Refactor: Consolidate data structures

* Refactor: Consolidate data structures

* Refactor: Avoid struct deep copy

* Refactor: Avoid redundant tag maps

* Refactor: Use int64 type for absolute values

* Update LustreCollector

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
2022-03-15 16:09:47 +01:00
Thomas Roehl
2b8266d1d2 Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-03-15 15:24:29 +01:00
Holger Obermaier
992b19d354 Move unit tag to meta data tags 2022-03-11 14:47:18 +01:00
Holger Obermaier
0b08ca9ae0 Simplified iota usage 2022-03-11 14:09:22 +01:00
Thomas Gruber
f6dae7c013 Derived metrics (#57)
* Add time-based derived values (e.g. bandwidth) to some collectors

* Add documentation

* Add comments

* Fix: Only compute rates with a valid previous state

* Only compute rates with a valid previous state

* Define const values for net/dev fields

* Set default config values

* Add comments

* Refactor: Consolidate data structures

* Refactor: Consolidate data structures

* Refactor: Avoid struct deep copy

* Refactor: Avoid redundant tag maps

* Refactor: Use int64 type for absolute values

Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
2022-03-11 13:48:18 +01:00
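The derived-metrics commits above add time-based rates (e.g. bandwidth) and only compute them when a valid previous sample exists. A minimal Go sketch of that guard, with purely illustrative names and not the collectors' actual code:

// Sketch only: shows the "valid previous state" pattern from the derived-metrics
// changes. Types, fields and numbers here are assumptions for illustration.
package main

import (
	"fmt"
	"time"
)

type prevState struct {
	valid bool      // false until the first sample has been stored
	bytes int64     // last absolute counter value
	t     time.Time // timestamp of the last sample
}

// deriveRate returns bytes/s since the previous sample and whether the rate is valid.
func deriveRate(prev *prevState, curBytes int64, now time.Time) (float64, bool) {
	defer func() { // always remember the current sample as the new previous state
		prev.bytes, prev.t, prev.valid = curBytes, now, true
	}()
	if !prev.valid {
		return 0, false // first reading: no rate yet
	}
	dt := now.Sub(prev.t).Seconds()
	if dt <= 0 {
		return 0, false
	}
	return float64(curBytes-prev.bytes) / dt, true
}

func main() {
	var p prevState
	if _, ok := deriveRate(&p, 1000, time.Now()); !ok {
		fmt.Println("first sample, no rate yet")
	}
	time.Sleep(10 * time.Millisecond)
	if rate, ok := deriveRate(&p, 6000, time.Now()); ok {
		fmt.Printf("derived bandwidth: %.0f bytes/s\n", rate)
	}
}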
Thomas Gruber
1de3dda7be Use old metric name in Ganglia if rename has happened in the router (#60)
* Use old metric name if rename has happened in the router

* Also check for Ganglia renames for the oldname
2022-03-11 13:44:32 +01:00
Thomas Gruber
73f22c1041 Refactoring of LikwidCollector and metric units (#62)
* Reduce complexity of LikwidCollector and allow metric units

* Add unit to LikwidCollector docu and fix some typos

* Make library path configurable
2022-03-11 13:43:17 +01:00
Thomas Gruber
c9b8fcdaa7 Add config options for retry intervals of InfluxDB clients (#59) 2022-03-11 13:43:03 +01:00
Thomas Roehl
17f37583fc Use a single line for bash loop in make clean 2022-03-11 13:41:13 +01:00
Thomas Gruber
fb9ea992ea Makefile target to build binary-only Debian packages (#61)
* Add 'install' and 'DEB' make targets to build binary-only Debian packages

* Add control file for DEB builds
2022-03-11 13:39:09 +01:00
Thomas Gruber
21edca5f88 Single release action (#55)
Building all RPMs and releasing in a single workflow
2022-03-11 13:37:47 +01:00
Thomas Roehl
d835724d93 Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-03-09 17:02:58 +01:00
Thomas Roehl
3cf2f69a07 Make maxForward configurable, save old name in meta in rename metrics and make the hostname tag key configurable 2022-03-09 11:23:17 +01:00
Thomas Roehl
e7f7e68095 Use GBytes as unit for large memory numbers 2022-03-09 11:05:26 +01:00
Thomas Roehl
c5082bbffe Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-03-07 16:13:08 +01:00
Thomas Gruber
c0e600269a Add HttpReceiver as counterpart to the HttpSink (#49) 2022-03-05 17:30:55 +01:00
Thomas Gruber
f2486abeab Just download LIKWID to get the headers (#54)
* Just download LIKWID to get the headers

* Remove perl-Data-Dumper from BuildRequires, only required by LIKWID build
2022-03-05 17:30:40 +01:00
Thomas Gruber
21864e0ac4 Change default GpfsCollector command to mmpmon (#53)
* Set default cmd to 'mmpmon'

* Reuse looked up path

* Cast const to string
2022-03-05 14:42:04 +01:00
Thomas Roehl
3157386b3e Merge branch 'develop' into main 2022-03-04 23:34:28 +01:00
Thomas Gruber
8c668fcc6f Update configuration.md
Add an additional receiver to have better alignment of components
2022-03-04 18:33:57 +01:00
Thomas Roehl
1961edc659 Add documentation to help configuring the CC metric collector 2022-03-04 15:42:25 +01:00
Mehmet Soysal
547bc0461f Beegfs collector (#50)
* added beegfs collectors to collectors/README.md

* added beegfs collectors and docs

* added new beegfs collectors to AvailableCollectors list

* Feedback implemented

* changed error type

* changed error to only return

* changed beegfs lookup path

* fixed typo in md files

Co-authored-by: Mehmet Soysal <mehmet.soysal@kit.edu>
2022-03-04 14:35:47 +01:00
Thomas Roehl
7f62975a68 Set proper user for files 2022-03-04 11:52:48 +01:00
Thomas Roehl
5e64c01c9d Fix name for ClusterCockpit user 2022-03-04 11:51:21 +01:00
Thomas Roehl
ff08eaeb43 Set proper user for files 2022-03-04 11:49:55 +01:00
Thomas Roehl
64c41be34c Fix name for ClusterCockpit 2022-03-04 11:37:45 +01:00
Thomas Roehl
f4af520b2a Fix error print in LustreCollector 2022-03-04 11:32:39 +01:00
Thomas Roehl
f1d2828e1d Fix error print in LustreCollector 2022-03-04 11:32:10 +01:00
Thomas Roehl
07af660e50 Merge branch 'main' into develop 2022-03-03 17:33:13 +01:00
Thomas Roehl
31f10b3163 Fix user creation type 2022-03-03 17:31:15 +01:00
Thomas Roehl
9ece27eec6 Use right systemd macro to create the user 2022-03-03 17:26:32 +01:00
Thomas Roehl
af4c468ab5 Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-03-03 17:24:43 +01:00
Thomas Roehl
b3030a8d44 Use right systemd macro to create the user 2022-03-03 17:24:32 +01:00
Holger Obermaier
db04c8fbae Removed infinibandPerfQueryMetric.go. infinibandMetric.go offers the same functionality without requiring root privileges. 2022-03-03 15:52:50 +01:00
Thomas Roehl
fdbdb79527 Merge branch 'develop' into main 2022-03-03 13:45:51 +01:00
Thomas Roehl
948c34d74d Add user creation in RPM 2022-03-03 13:43:43 +01:00
Thomas Roehl
60de21c41e Switch access mode of LikwidCollector in config file 2022-03-03 13:03:58 +01:00
Thomas Roehl
276c00442a Add option to LustreCollector to call lctl with sudo 2022-03-03 13:02:00 +01:00
Thomas Roehl
c61b8d2877 Set name change in Makefile 2022-03-03 11:03:51 +01:00
Thomas Roehl
6023abd028 Rename main file to match with executable name 2022-03-03 11:02:37 +01:00
Thomas Roehl
0753c81156 Add/Remove clustercockpit user and group in RPM 2022-03-02 15:10:14 +01:00
Thomas Roehl
092e7f6a71 Add section how to temporarly disable LIKWID access to page 2022-03-02 13:54:43 +01:00
Thomas Roehl
f7e8b52667 Run RPM build actions only on tag push 2022-03-02 13:21:54 +01:00
Thomas Roehl
02baef8c71 Merge branch 'develop' into main 2022-03-02 10:38:17 +01:00
Holger Obermaier
33d954f767 Action red hat universal base image (#52)
Add Red Hat Universal Base Image 8 Workflow
2022-03-01 17:15:20 +01:00
Holger Obermaier
a5325a6535 GitHub actions (#51)
Create new GitHub action which uses unmodified AlmaLinux Docker image
2022-03-01 15:39:26 +01:00
Thomas Roehl
d40163cf8f Update README and receiver-specific pages 2022-02-28 17:26:28 +01:00
Holger Obermaier
33fec95eac Additional comments 2022-02-28 12:16:48 +01:00
Holger Obermaier
2c08e53be4 Additional comments 2022-02-28 09:57:26 +01:00
Holger Obermaier
a2f9b23e85 Additional comments 2022-02-28 09:39:59 +01:00
Thomas Roehl
4c1263312b Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-02-25 15:06:06 +01:00
Thomas Gruber
d98076c792 Merge current development version into main (#48)
* DiskstatCollector: cast part_max_used metric to int

* Add uint types to GangliaSink and LibgangliaSink

* Use new sink instances to allow multiple of same sink type

* Update sink README and SampleSink

* Use new receiver instances to allow multiple of same receiver type

* Fix metric scope in likwid configuration script

* Mention likwid config script in LikwidCollector README

* Refactor: Embed Init() into New() function

* Refactor: Embed Init() into New() function

* Fix: MetricReceiver uses uninitialized values, when initialization fails

* Use Ganglia configuration (#44)

* Copy all metric configurations from original Ganglia code

* Use metric configurations from Ganglia for some metrics

* Format value string also for known metrics

* Numa-aware memstat collector (#45)

* Add samples for collectors, sinks and receivers

* Ping InfluxDB server after connecting to recognize faulty connections

* Add sink for Prometheus monitoring system (#46)

* Add sink for Prometheus monitoring system

* Add prometheus sink to README

* Add scraper for Prometheus clients (#47)

Co-authored-by: Holger Obermaier <holgerob@gmx.de>
Co-authored-by: Holger Obermaier <40787752+ho-ob@users.noreply.github.com>
2022-02-25 14:49:49 +01:00
Thomas Gruber
a203370aaa Add scraper for Prometheus clients (#47) 2022-02-25 14:46:29 +01:00
Thomas Gruber
f099a311a0 Add sink for Prometheus monitoring system (#46)
* Add sink for Prometheus monitoring system

* Add prometheus sink to README
2022-02-25 14:33:20 +01:00
Thomas Roehl
940623585c Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-02-25 13:53:34 +01:00
Thomas Roehl
fe3a8d59b0 Ping InfluxDB server after connecting to recognize faulty connections 2022-02-25 13:51:52 +01:00
Thomas Roehl
bac1f18b1d Add samples for collectors, sinks and receivers 2022-02-25 13:47:19 +01:00
Thomas Roehl
87ecb12c6f Merge branch 'develop' of github.com:ClusterCockpit/cc-metric-collector into develop 2022-02-24 18:28:52 +01:00
Thomas Gruber
c8bca59de4 Numa-aware memstat collector (#45) 2022-02-24 18:27:05 +01:00
Thomas Gruber
16c03d2aa2 Use Ganglia configuration (#44)
* Copy all metric configurations from original Ganglia code

* Use metric configurations from Ganglia for some metrics

* Format value string also for known metrics
2022-02-24 18:22:20 +01:00
Holger Obermaier
2f044f4b58 Fix: MetricReceiver uses uninitialized values, when initialization fails 2022-02-23 15:58:51 +01:00
Holger Obermaier
f911ff802c Merge pull request #43 from ClusterCockpit/new_receiver_instances
New receiver instances
2022-02-23 15:20:55 +01:00
Holger Obermaier
2f36375470 Refactor: Embed Init() into New() function 2022-02-23 15:15:17 +01:00
Holger Obermaier
6843902909 Merge pull request #41 from ClusterCockpit/new_sink_instances
New sink instances
2022-02-23 15:05:53 +01:00
Holger Obermaier
73981527d3 Refactor: Embed Init() into New() function 2022-02-23 14:56:29 +01:00
Thomas Roehl
d542f32baa Mention likwid config script in LikwidCollector README 2022-02-22 17:46:44 +01:00
Thomas Roehl
6b6566b0aa Fix metric scope in likwid configuration script 2022-02-22 17:46:17 +01:00
Thomas Roehl
3598aed090 Use new receiver instances to allow multiple of same receiver type 2022-02-22 16:33:38 +01:00
Thomas Roehl
24e12ccc57 Update sink README and SampleSink 2022-02-22 16:19:46 +01:00
Thomas Roehl
18a226183c Use new sink instances to allow multiple of same sink type 2022-02-22 16:15:25 +01:00
Thomas Roehl
9cfbe10247 Add uint types to GangliaSink and LibgangliaSink 2022-02-22 15:51:08 +01:00
Thomas Roehl
66275ecf74 DiskstatCollector: cast part_max_used metric to int 2022-02-22 15:50:49 +01:00
Thomas Gruber
b4cc6d54ea Update README.md 2022-02-22 15:10:27 +01:00
Thomas Gruber
45714fe337 Update README.md 2022-02-22 15:09:12 +01:00
Holger Obermaier
a97c705f4c Do not create link to libganglia.so.
libganglia.so is now loaded at runtime via dlopen
and is no longer required at link time
2022-02-21 20:55:14 +01:00
Thomas Roehl
ae64eddcc8 Remove doubled import 2022-02-21 14:50:53 +01:00
119 changed files with 8618 additions and 5379 deletions

View File

@@ -1,8 +1,10 @@
 {
-"sinks": ".github/ci-sinks.json",
-"collectors" : ".github/ci-collectors.json",
-"receivers" : ".github/ci-receivers.json",
-"router" : ".github/ci-router.json",
-"interval": 5,
-"duration": 1
+"sinks-file": ".github/ci-sinks.json",
+"collectors-file" : ".github/ci-collectors.json",
+"receivers-file" : ".github/ci-receivers.json",
+"router-file" : ".github/ci-router.json",
+"main" : {
+"interval": "5s",
+"duration": "1s"
+}
 }
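This diff switches the main section's "interval" and "duration" from plain integers to Go duration strings ("5s", "1s"), matching the "Use Golang duration parser for 'interval' and 'duration'" commit. A minimal, illustrative Go sketch (not the collector's actual config loader; struct and variable names are assumptions) of how such strings can be read and parsed:

// Sketch only: parse duration strings like "5s" from a JSON config section.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// mainConfig mirrors only the two duration fields from the "main" section above.
type mainConfig struct {
	Interval string `json:"interval"`
	Duration string `json:"duration"`
}

func main() {
	raw := []byte(`{"main": {"interval": "5s", "duration": "1s"}}`)
	var cfg struct {
		Main mainConfig `json:"main"`
	}
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	interval, err := time.ParseDuration(cfg.Main.Interval) // "5s" -> 5 * time.Second
	if err != nil {
		panic(err)
	}
	duration, err := time.ParseDuration(cfg.Main.Duration)
	if err != nil {
		panic(err)
	}
	fmt.Println(interval, duration)
}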

View File

@@ -1,6 +1,8 @@
 {
 "testoutput" : {
 "type" : "stdout",
-"meta_as_tags" : true
+"meta_as_tags" : [
+"unit"
+]
 }
 }
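This diff replaces the boolean "meta_as_tags" with a list of meta-data keys, in line with the "Use a meta_as_tags string list in config but create a lookup map afterwards" commit. A minimal sketch, using illustrative names only and not the sink code itself, of how such a list can be turned into a lookup map that promotes selected meta entries to tags:

// Sketch only: build a lookup map from the meta_as_tags list and apply it.
package main

import "fmt"

func main() {
	metaAsTags := []string{"unit"} // taken from the sink config above
	lookup := make(map[string]bool, len(metaAsTags))
	for _, k := range metaAsTags {
		lookup[k] = true
	}

	// Illustrative metric meta data and tags.
	meta := map[string]string{"unit": "MByte/s", "source": "DiskstatCollector"}
	tags := map[string]string{"hostname": "node01", "type": "node"}
	for k, v := range meta {
		if lookup[k] { // promote selected meta entries to tags
			tags[k] = v
		}
	}
	fmt.Println(tags)
}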

504
.github/workflows/Release.yml vendored Normal file
View File

@@ -0,0 +1,504 @@
# See: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions
# Workflow name
name: Release
# Run on tag push
on:
push:
tags:
- '**'
workflow_dispatch:
jobs:
#
# Build on AlmaLinux 8 using go-toolset
#
AlmaLinux8-RPM-build:
runs-on: ubuntu-latest
# See: https://hub.docker.com/_/almalinux
container: almalinux:8
# The job outputs link to the outputs of the 'rpmrename' step
# Only job outputs can be used in child jobs
outputs:
rpm : ${{steps.rpmrename.outputs.RPM}}
srpm : ${{steps.rpmrename.outputs.SRPM}}
steps:
# Use dnf to install development packages
- name: Install development packages
run: |
dnf --assumeyes group install "Development Tools" "RPM Development Tools"
dnf --assumeyes install wget openssl-devel diffutils delve which
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/go-toolset-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-bin-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-src-1.22.9-1.module_el8.10.0+3938+8c723e16.noarch.rpm
- name: RPM build MetricCollector
id: rpmbuild
run: |
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make RPM
# AlmaLinux 8 is a derivative of Red Hat Enterprise Linux 8 (UBI8),
# so the created RPMs both contain the substring 'el8' in their file names.
# This step replaces the substring 'el8' with 'alma8'. It uses the move operation
# because it is unclear whether the default AlmaLinux 8 container contains the
# 'rename' command. This way we also get the new names for output.
- name: Rename RPMs (s/el8/alma8/)
id: rpmrename
run: |
OLD_RPM="${{steps.rpmbuild.outputs.RPM}}"
OLD_SRPM="${{steps.rpmbuild.outputs.SRPM}}"
NEW_RPM="${OLD_RPM/el8/alma8}"
NEW_SRPM=${OLD_SRPM/el8/alma8}
mv "${OLD_RPM}" "${NEW_RPM}"
mv "${OLD_SRPM}" "${NEW_SRPM}"
echo "SRPM=${NEW_SRPM}" >> $GITHUB_OUTPUT
echo "RPM=${NEW_RPM}" >> $GITHUB_OUTPUT
# See: https://github.com/actions/upload-artifact
- name: Save RPM as artifact
uses: actions/upload-artifact@v4
with:
name: cc-metric-collector RPM for AlmaLinux 8
path: ${{ steps.rpmrename.outputs.RPM }}
overwrite: true
- name: Save SRPM as artifact
uses: actions/upload-artifact@v4
with:
name: cc-metric-collector SRPM for AlmaLinux 8
path: ${{ steps.rpmrename.outputs.SRPM }}
overwrite: true
#
# Build on AlmaLinux 9 using go-toolset
#
AlmaLinux9-RPM-build:
runs-on: ubuntu-latest
# See: https://hub.docker.com/_/almalinux
container: almalinux:9
# The job outputs link to the outputs of the 'rpmrename' step
# Only job outputs can be used in child jobs
outputs:
rpm : ${{steps.rpmrename.outputs.RPM}}
srpm : ${{steps.rpmrename.outputs.SRPM}}
steps:
# Use dnf to install development packages
- name: Install development packages
run: |
dnf --assumeyes group install "Development Tools" "RPM Development Tools"
dnf --assumeyes install wget openssl-devel diffutils delve which
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/go-toolset-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-bin-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-src-1.22.7-2.el9_5.noarch.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-race-1.22.7-2.el9_5.x86_64.rpm
- name: RPM build MetricCollector
id: rpmbuild
run: |
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make RPM
# AlmaLinux 9 is a derivative of Red Hat Enterprise Linux 9 (UBI9),
# so the created RPMs both contain the substring 'el9' in their file names.
# This step replaces the substring 'el9' with 'alma9'. It uses the move operation
# because it is unclear whether the default AlmaLinux 9 container contains the
# 'rename' command. This way we also get the new names for output.
- name: Rename RPMs (s/el9/alma9/)
id: rpmrename
run: |
OLD_RPM="${{steps.rpmbuild.outputs.RPM}}"
OLD_SRPM="${{steps.rpmbuild.outputs.SRPM}}"
NEW_RPM="${OLD_RPM/el9/alma9}"
NEW_SRPM=${OLD_SRPM/el9/alma9}
mv "${OLD_RPM}" "${NEW_RPM}"
mv "${OLD_SRPM}" "${NEW_SRPM}"
echo "SRPM=${NEW_SRPM}" >> $GITHUB_OUTPUT
echo "RPM=${NEW_RPM}" >> $GITHUB_OUTPUT
# See: https://github.com/actions/upload-artifact
- name: Save RPM as artifact
uses: actions/upload-artifact@v4
with:
name: cc-metric-collector RPM for AlmaLinux 9
path: ${{ steps.rpmrename.outputs.RPM }}
overwrite: true
- name: Save SRPM as artifact
uses: actions/upload-artifact@v4
with:
name: cc-metric-collector SRPM for AlmaLinux 9
path: ${{ steps.rpmrename.outputs.SRPM }}
overwrite: true
#
# Build on UBI 8 using go-toolset
#
UBI-8-RPM-build:
runs-on: ubuntu-latest
# See: https://catalog.redhat.com/software/containers/ubi8/ubi/5c35984d70cc534b3a3784e?container-tabs=gti
container: registry.access.redhat.com/ubi8/ubi:8.8-1032.1692772289
# The job outputs link to the outputs of the 'rpmbuild' step
outputs:
rpm : ${{steps.rpmbuild.outputs.RPM}}
srpm : ${{steps.rpmbuild.outputs.SRPM}}
steps:
# Use dnf to install development packages
- name: Install development packages
run: dnf --assumeyes --disableplugin=subscription-manager install rpm-build go-srpm-macros rpm-build-libs rpm-libs gcc make python38 git wget openssl-devel diffutils delve which
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/go-toolset-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-bin-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-src-1.22.9-1.module_el8.10.0+3938+8c723e16.noarch.rpm
- name: RPM build MetricCollector
id: rpmbuild
run: |
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make RPM
# See: https://github.com/actions/upload-artifact
- name: Save RPM as artifact
uses: actions/upload-artifact@v4
with:
name: cc-metric-collector RPM for UBI 8
path: ${{ steps.rpmbuild.outputs.RPM }}
overwrite: true
- name: Save SRPM as artifact
uses: actions/upload-artifact@v4
with:
name: cc-metric-collector SRPM for UBI 8
path: ${{ steps.rpmbuild.outputs.SRPM }}
overwrite: true
#
# Build on UBI 9 using go-toolset
#
UBI-9-RPM-build:
runs-on: ubuntu-latest
# See: https://catalog.redhat.com/software/containers/ubi8/ubi/5c359854d70cc534b3a3784e?container-tabs=gti
container: redhat/ubi9
# The job outputs link to the outputs of the 'rpmbuild' step
outputs:
rpm : ${{steps.rpmbuild.outputs.RPM}}
srpm : ${{steps.rpmbuild.outputs.SRPM}}
steps:
# Use dnf to install development packages
- name: Install development packages
run: dnf --assumeyes --disableplugin=subscription-manager install rpm-build go-srpm-macros gcc make python39 git wget openssl-devel diffutils delve
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0
# See: https://github.com/marketplace/actions/setup-go-environment
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/go-toolset-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-bin-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-src-1.22.7-2.el9_5.noarch.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-race-1.22.7-2.el9_5.x86_64.rpm
- name: RPM build MetricCollector
id: rpmbuild
run: |
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make RPM
# See: https://github.com/actions/upload-artifact
- name: Save RPM as artifact
uses: actions/upload-artifact@v4
with:
name: cc-metric-collector RPM for UBI 9
path: ${{ steps.rpmbuild.outputs.RPM }}
overwrite: true
- name: Save SRPM as artifact
uses: actions/upload-artifact@v4
with:
name: cc-metric-collector SRPM for UBI 9
path: ${{ steps.rpmbuild.outputs.SRPM }}
overwrite: true
#
# Build on Ubuntu 22.04 using official go package
#
Ubuntu-jammy-build:
runs-on: ubuntu-latest
container: ubuntu:22.04
# The job outputs link to the outputs of the 'debrename' step
# Only job outputs can be used in child jobs
outputs:
deb : ${{steps.debrename.outputs.DEB}}
steps:
# Use apt to install development packages
- name: Install development packages
run: |
apt update && apt --assume-yes upgrade
apt --assume-yes install build-essential sed git wget bash
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version: 'stable'
- name: DEB build MetricCollector
id: dpkg-build
run: |
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make DEB
- name: Rename DEB (add '_ubuntu22.04')
id: debrename
run: |
OLD_DEB_NAME=$(echo "${{steps.dpkg-build.outputs.DEB}}" | rev | cut -d '.' -f 2- | rev)
NEW_DEB_FILE="${OLD_DEB_NAME}_ubuntu22.04.deb"
mv "${{steps.dpkg-build.outputs.DEB}}" "${NEW_DEB_FILE}"
echo "DEB=${NEW_DEB_FILE}" >> $GITHUB_OUTPUT
# See: https://github.com/actions/upload-artifact
- name: Save DEB as artifact
uses: actions/upload-artifact@v4
with:
name: cc-metric-collector DEB for Ubuntu 22.04
path: ${{ steps.debrename.outputs.DEB }}
overwrite: true
#
# Build on Ubuntu 24.04 using official go package
#
Ubuntu-noblenumbat-build:
runs-on: ubuntu-latest
container: ubuntu:24.04
# The job outputs link to the outputs of the 'debrename' step
# Only job outputs can be used in child jobs
outputs:
deb : ${{steps.debrename.outputs.DEB}}
steps:
# Use apt to install development packages
- name: Install development packages
run: |
apt update && apt --assume-yes upgrade
apt --assume-yes install build-essential sed git wget bash
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version: 'stable'
- name: DEB build MetricCollector
id: dpkg-build
run: |
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make DEB
- name: Rename DEB (add '_ubuntu24.04')
id: debrename
run: |
OLD_DEB_NAME=$(echo "${{steps.dpkg-build.outputs.DEB}}" | rev | cut -d '.' -f 2- | rev)
NEW_DEB_FILE="${OLD_DEB_NAME}_ubuntu24.04.deb"
mv "${{steps.dpkg-build.outputs.DEB}}" "${NEW_DEB_FILE}"
echo "DEB=${NEW_DEB_FILE}" >> $GITHUB_OUTPUT
# See: https://github.com/actions/upload-artifact
- name: Save DEB as artifact
uses: actions/upload-artifact@v4
with:
name: cc-metric-collector DEB for Ubuntu 24.04
path: ${{ steps.debrename.outputs.DEB }}
overwrite: true
#
# Create release with fresh RPMs
#
Release:
runs-on: ubuntu-latest
# We need the RPMs, so add dependency
needs: [AlmaLinux8-RPM-build, AlmaLinux9-RPM-build, UBI-8-RPM-build, UBI-9-RPM-build, Ubuntu-jammy-build, Ubuntu-noblenumbat-build]
steps:
# See: https://github.com/actions/download-artifact
- name: Download AlmaLinux 8 RPM
uses: actions/download-artifact@v4
with:
name: cc-metric-collector RPM for AlmaLinux 8
- name: Download AlmaLinux 8 SRPM
uses: actions/download-artifact@v4
with:
name: cc-metric-collector SRPM for AlmaLinux 8
- name: Download AlmaLinux 9 RPM
uses: actions/download-artifact@v4
with:
name: cc-metric-collector RPM for AlmaLinux 9
- name: Download AlmaLinux 9 SRPM
uses: actions/download-artifact@v4
with:
name: cc-metric-collector SRPM for AlmaLinux 9
- name: Download UBI 8 RPM
uses: actions/download-artifact@v4
with:
name: cc-metric-collector RPM for UBI 8
- name: Download UBI 8 SRPM
uses: actions/download-artifact@v4
with:
name: cc-metric-collector SRPM for UBI 8
- name: Download UBI 9 RPM
uses: actions/download-artifact@v4
with:
name: cc-metric-collector RPM for UBI 9
- name: Download UBI 9 SRPM
uses: actions/download-artifact@v4
with:
name: cc-metric-collector SRPM for UBI 9
- name: Download Ubuntu 22.04 DEB
uses: actions/download-artifact@v4
with:
name: cc-metric-collector DEB for Ubuntu 22.04
- name: Download Ubuntu 24.04 DEB
uses: actions/download-artifact@v4
with:
name: cc-metric-collector DEB for Ubuntu 24.04
# The download actions do not publish the name of the downloaded file,
# so we re-use the job outputs of the parent jobs. The files are all
# downloaded to the current folder.
# The gh-release action afterwards does not accept file lists but all
# files have to be listed at 'files'. The step creates one output per
# RPM package (2 per distro)
- name: Set RPM variables
id: files
run: |
ALMA_8_RPM=$(basename "${{ needs.AlmaLinux8-RPM-build.outputs.rpm}}")
ALMA_8_SRPM=$(basename "${{ needs.AlmaLinux8-RPM-build.outputs.srpm}}")
ALMA_9_RPM=$(basename "${{ needs.AlmaLinux9-RPM-build.outputs.rpm}}")
ALMA_9_SRPM=$(basename "${{ needs.AlmaLinux9-RPM-build.outputs.srpm}}")
UBI_8_RPM=$(basename "${{ needs.UBI-8-RPM-build.outputs.rpm}}")
UBI_8_SRPM=$(basename "${{ needs.UBI-8-RPM-build.outputs.srpm}}")
UBI_9_RPM=$(basename "${{ needs.UBI-9-RPM-build.outputs.rpm}}")
UBI_9_SRPM=$(basename "${{ needs.UBI-9-RPM-build.outputs.srpm}}")
U_2204_DEB=$(basename "${{ needs.Ubuntu-jammy-build.outputs.deb}}")
U_2404_DEB=$(basename "${{ needs.Ubuntu-noblenumbat-build.outputs.deb}}")
echo "ALMA_8_RPM::${ALMA_8_RPM}"
echo "ALMA_8_SRPM::${ALMA_8_SRPM}"
echo "ALMA_9_RPM::${ALMA_9_RPM}"
echo "ALMA_9_SRPM::${ALMA_9_SRPM}"
echo "UBI_8_RPM::${UBI_8_RPM}"
echo "UBI_8_SRPM::${UBI_8_SRPM}"
echo "UBI_9_RPM::${UBI_9_RPM}"
echo "UBI_9_SRPM::${UBI_9_SRPM}"
echo "U_2204_DEB::${U_2204_DEB}"
echo "U_2404_DEB::${U_2404_DEB}"
echo "ALMA_8_RPM=${ALMA_8_RPM}" >> $GITHUB_OUTPUT
echo "ALMA_8_SRPM=${ALMA_8_SRPM}" >> $GITHUB_OUTPUT
echo "ALMA_9_RPM=${ALMA_9_RPM}" >> $GITHUB_OUTPUT
echo "ALMA_9_SRPM=${ALMA_9_SRPM}" >> $GITHUB_OUTPUT
echo "UBI_8_RPM=${UBI_8_RPM}" >> $GITHUB_OUTPUT
echo "UBI_8_SRPM=${UBI_8_SRPM}" >> $GITHUB_OUTPUT
echo "UBI_9_RPM=${UBI_9_RPM}" >> $GITHUB_OUTPUT
echo "UBI_9_SRPM=${UBI_9_SRPM}" >> $GITHUB_OUTPUT
echo "U_2204_DEB=${U_2204_DEB}" >> $GITHUB_OUTPUT
echo "U_2404_DEB=${U_2404_DEB}" >> $GITHUB_OUTPUT
# See: https://github.com/softprops/action-gh-release
- name: Release
uses: softprops/action-gh-release@v2
if: startsWith(github.ref, 'refs/tags/')
with:
name: cc-metric-collector-${{github.ref_name}}
files: |
${{ steps.files.outputs.ALMA_8_RPM }}
${{ steps.files.outputs.ALMA_8_SRPM }}
${{ steps.files.outputs.ALMA_9_RPM }}
${{ steps.files.outputs.ALMA_9_SRPM }}
${{ steps.files.outputs.UBI_8_RPM }}
${{ steps.files.outputs.UBI_8_SRPM }}
${{ steps.files.outputs.UBI_9_RPM }}
${{ steps.files.outputs.UBI_9_SRPM }}
${{ steps.files.outputs.U_2204_DEB }}
${{ steps.files.outputs.U_2404_DEB }}

View File

@@ -1,58 +0,0 @@
name: Run RPM Build
on:
push:
tags:
- '**'
jobs:
build-alma-8_5:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: TomTheBear/rpmbuild@alma8.5
id: rpm
name: Build RPM package on AlmaLinux 8.5
with:
spec_file: "./scripts/cc-metric-collector.spec"
- name: Save RPM as artifact
uses: actions/upload-artifact@v1.0.0
with:
name: cc-metric-collector RPM AlmaLinux 8.5
path: ${{ steps.rpm.outputs.rpm_path }}
- name: Save SRPM as artifact
uses: actions/upload-artifact@v1.0.0
with:
name: cc-metric-collector SRPM AlmaLinux 8.5
path: ${{ steps.rpm.outputs.source_rpm_path }}
- name: Release
uses: softprops/action-gh-release@v1
with:
name: cc-metric-collector-${{github.ref_name}}
files: |
${{ steps.rpm.outputs.source_rpm_path }}
${{ steps.rpm.outputs.rpm_path }}
build-rhel-ubi8:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: TomTheBear/rpmbuild@rh-ubi8
id: rpm
name: Build RPM package on Red Hat Universal Base Image 8
with:
spec_file: "./scripts/cc-metric-collector.spec"
- name: Save RPM as artifact
uses: actions/upload-artifact@v1.0.0
with:
name: cc-metric-collector RPM Red Hat Universal Base Image 8
path: ${{ steps.rpm.outputs.rpm_path }}
- name: Save SRPM as artifact
uses: actions/upload-artifact@v1.0.0
with:
name: cc-metric-collector SRPM Red Hat Universal Base Image 8
path: ${{ steps.rpm.outputs.source_rpm_path }}
- name: Release
uses: softprops/action-gh-release@v1
with:
files: |
${{ steps.rpm.outputs.source_rpm_path }}
${{ steps.rpm.outputs.rpm_path }}

View File

@@ -1,46 +1,283 @@
# See: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions
# Workflow name
name: Run Test
on: push
# Run on event push
on:
push:
workflow_dispatch:
jobs:
build-1-17:
#
# Job build-latest
# Build on latest Ubuntu using latest golang version
#
build-latest:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
# See: https://github.com/marketplace/actions/checkout
# Checkout git repository and submodules
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
# See: https://github.com/marketplace/actions/setup-go-environment
- name: Setup Golang
uses: actions/setup-go@v2.1.5
uses: actions/setup-go@v5
with:
go-version: '^1.17.6'
- name: Setup Ganglia
run: sudo apt install ganglia-monitor libganglia1
go-version: '1.21'
check-latest: true
- name: Build MetricCollector
run: make
- name: Run MetricCollector
- name: Run MetricCollector once
run: ./cc-metric-collector --once --config .github/ci-config.json
build-1-16:
#
# Build on AlmaLinux 8
#
AlmaLinux8-RPM-build:
runs-on: ubuntu-latest
# See: https://hub.docker.com/_/almalinux
container: almalinux:8
# The job outputs link to the outputs of the 'rpmrename' step
# Only job outputs can be used in child jobs
steps:
- uses: actions/checkout@v2
# Use dnf to install development packages
- name: Install development packages
run: |
dnf --assumeyes group install "Development Tools" "RPM Development Tools"
dnf --assumeyes install wget openssl-devel diffutils delve which
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0
# See: https://github.com/marketplace/actions/setup-go-environment
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
uses: actions/setup-go@v2.1.5
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/go-toolset-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-bin-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-src-1.22.9-1.module_el8.10.0+3938+8c723e16.noarch.rpm
- name: RPM build MetricCollector
id: rpmbuild
run: |
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make RPM
#
# Build on AlmaLinux 9
#
AlmaLinux9-RPM-build:
runs-on: ubuntu-latest
# See: https://hub.docker.com/_/almalinux
container: almalinux:9
# The job outputs link to the outputs of the 'rpmrename' step
# Only job outputs can be used in child jobs
steps:
# Use dnf to install development packages
- name: Install development packages
run: |
dnf --assumeyes group install "Development Tools" "RPM Development Tools"
dnf --assumeyes install wget openssl-devel diffutils delve which
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
with:
go-version: '^1.16.7' # The version AlmaLinux 8.5 uses
submodules: recursive
fetch-depth: 0
- name: Setup Ganglia
run: sudo apt install ganglia-monitor libganglia1
# See: https://github.com/marketplace/actions/setup-go-environment
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/go-toolset-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-bin-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-src-1.22.7-2.el9_5.noarch.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-race-1.22.7-2.el9_5.x86_64.rpm
- name: Build MetricCollector
run: make
- name: RPM build MetricCollector
id: rpmbuild
run: |
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make RPM
- name: Run MetricCollector
run: ./cc-metric-collector --once --config .github/ci-config.json
#
# Build on UBI 8 using go-toolset
#
UBI-8-RPM-build:
runs-on: ubuntu-latest
# See: https://catalog.redhat.com/software/containers/ubi8/ubi/5c359854d70cc534b3a3784e?container-tabs=gti
container: redhat/ubi8
# The job outputs link to the outputs of the 'rpmbuild' step
steps:
# Use dnf to install development packages
- name: Install development packages
run: dnf --assumeyes --disableplugin=subscription-manager install rpm-build go-srpm-macros rpm-build-libs rpm-libs gcc make python38 git wget openssl-devel diffutils delve which
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0
# See: https://github.com/marketplace/actions/setup-go-environment
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/go-toolset-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-bin-1.22.9-1.module_el8.10.0+3938+8c723e16.x86_64.rpm \
https://repo.almalinux.org/almalinux/8/AppStream/x86_64/os/Packages/golang-src-1.22.9-1.module_el8.10.0+3938+8c723e16.noarch.rpm
- name: RPM build MetricCollector
id: rpmbuild
run: |
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make RPM
#
# Build on UBI 9 using go-toolset
#
UBI-9-RPM-build:
runs-on: ubuntu-latest
# See: https://catalog.redhat.com/software/containers/ubi8/ubi/5c359854d70cc534b3a3784e?container-tabs=gti
container: redhat/ubi9
# The job outputs link to the outputs of the 'rpmbuild' step
steps:
# Use dnf to install development packages
- name: Install development packages
run: dnf --assumeyes --disableplugin=subscription-manager install rpm-build go-srpm-macros gcc make python39 git wget openssl-devel diffutils delve
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0
# See: https://github.com/marketplace/actions/setup-go-environment
# - name: Setup Golang
# uses: actions/setup-go@v5
# with:
# go-version: 'stable'
- name: Setup Golang
run: |
dnf --assumeyes --disableplugin=subscription-manager install \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/go-toolset-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-bin-1.22.7-2.el9_5.x86_64.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-src-1.22.7-2.el9_5.noarch.rpm \
https://repo.almalinux.org/almalinux/9/AppStream/x86_64/os/Packages/golang-race-1.22.7-2.el9_5.x86_64.rpm
- name: RPM build MetricCollector
id: rpmbuild
run: |
git config --global --add safe.directory /__w/cc-metric-collector/cc-metric-collector
make RPM
#
# Build on Ubuntu 22.04 using official go package
#
Ubuntu-jammy-build:
runs-on: ubuntu-latest
container: ubuntu:22.04
steps:
# Use apt to install development packages
- name: Install development packages
run: |
apt update && apt --assume-yes upgrade
apt --assume-yes install build-essential sed git wget bash
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0
# Use official golang package
# See: https://github.com/marketplace/actions/setup-go-environment
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version: 'stable'
- name: DEB build MetricCollector
id: dpkg-build
run: |
export PATH=/usr/local/go/bin:/usr/local/go/pkg/tool/linux_amd64:$PATH
make DEB
#
# Build on Ubuntu 24.04 using official go package
#
Ubuntu-noblenumbat-build:
runs-on: ubuntu-latest
container: ubuntu:24.04
steps:
# Use apt to install development packages
- name: Install development packages
run: |
apt update && apt --assume-yes upgrade
apt --assume-yes install build-essential sed git wget bash
# Checkout git repository and submodules
# fetch-depth must be 0 to use git describe
# See: https://github.com/marketplace/actions/checkout
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0
# Use official golang package
# See: https://github.com/marketplace/actions/setup-go-environment
- name: Setup Golang
uses: actions/setup-go@v5
with:
go-version: 'stable'
- name: DEB build MetricCollector
id: dpkg-build
run: |
export PATH=/usr/local/go/bin:/usr/local/go/pkg/tool/linux_amd64:$PATH
make DEB

29
.zenodo.json Normal file
View File

@@ -0,0 +1,29 @@
{
"title": "cc-metric-collector",
"description": "Monitoring agent for ClusterCockpit.",
"creators": [
{
"affiliation": "Regionales Rechenzentrum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg",
"name": "Thomas Gruber",
"orcid": "0000-0001-5560-6964"
},
{
"affiliation": "Steinbuch Centre for Computing, Karlsruher Institut für Technologie",
"name": "Holger Obermaier",
"orcid": "0000-0002-6830-6626"
}
],
"upload_type": "software",
"license": "MIT",
"access_right": "open",
"keywords": [
"performance-monitoring",
"cluster-monitoring",
"open-source"
],
"communities": [
{
"identifier": "clustercockpit"
}
]
}

104
Makefile
View File

@@ -1,5 +1,5 @@
APP = cc-metric-collector
GOSRC_APP := metric-collector.go
GOSRC_APP := cc-metric-collector.go
GOSRC_COLLECTORS := $(wildcard collectors/*.go)
GOSRC_SINKS := $(wildcard sinks/*.go)
GOSRC_RECEIVERS := $(wildcard receivers/*.go)
@@ -15,39 +15,115 @@ COMPONENT_DIRS := collectors \
internal/ccTopology \
internal/multiChanTicker
BINDIR = bin
GOBIN = $(shell which go)
.PHONY: all
all: $(APP)
$(APP): $(GOSRC)
$(APP): $(GOSRC) go.mod
make -C collectors
make -C sinks
go get
go build -o $(APP) $(GOSRC_APP)
$(GOBIN) get
$(GOBIN) build -o $(APP) $(GOSRC_APP)
install: $(APP)
@WORKSPACE=$(PREFIX)
@if [ -z "$${WORKSPACE}" ]; then exit 1; fi
@mkdir --parents --verbose $${WORKSPACE}/usr/$(BINDIR)
@install -Dpm 755 $(APP) $${WORKSPACE}/usr/$(BINDIR)/$(APP)
@mkdir --parents --verbose $${WORKSPACE}/etc/cc-metric-collector $${WORKSPACE}/etc/default $${WORKSPACE}/etc/systemd/system $${WORKSPACE}/etc/init.d
@install -Dpm 600 config.json $${WORKSPACE}/etc/cc-metric-collector/cc-metric-collector.json
@sed -i -e s+"\"./"+"\"/etc/cc-metric-collector/"+g $${WORKSPACE}/etc/cc-metric-collector/cc-metric-collector.json
@install -Dpm 600 sinks.json $${WORKSPACE}/etc/cc-metric-collector/sinks.json
@install -Dpm 600 collectors.json $${WORKSPACE}/etc/cc-metric-collector/collectors.json
@install -Dpm 600 router.json $${WORKSPACE}/etc/cc-metric-collector/router.json
@install -Dpm 600 receivers.json $${WORKSPACE}/etc/cc-metric-collector/receivers.json
@install -Dpm 600 scripts/cc-metric-collector.config $${WORKSPACE}/etc/default/cc-metric-collector
@install -Dpm 644 scripts/cc-metric-collector.service $${WORKSPACE}/etc/systemd/system/cc-metric-collector.service
@install -Dpm 644 scripts/cc-metric-collector.init $${WORKSPACE}/etc/init.d/cc-metric-collector
.PHONY: clean
.ONESHELL:
clean:
@for COMP in $(COMPONENT_DIRS); do if [ -e $$COMP/Makefile ]; then make -C $$COMP clean; fi; done
rm -f $(APP)
.PHONY: fmt
fmt:
go fmt $(GOSRC_COLLECTORS)
go fmt $(GOSRC_SINKS)
go fmt $(GOSRC_RECEIVERS)
go fmt $(GOSRC_APP)
@for F in $(GOSRC_INTERNAL); do go fmt $$F; done
$(GOBIN) fmt $(GOSRC_COLLECTORS)
$(GOBIN) fmt $(GOSRC_SINKS)
$(GOBIN) fmt $(GOSRC_RECEIVERS)
$(GOBIN) fmt $(GOSRC_APP)
@for F in $(GOSRC_INTERNAL); do $(GOBIN) fmt $$F; done
# Examine Go source code and reports suspicious constructs
.PHONY: vet
vet:
go vet ./...
$(GOBIN) vet ./...
# Run linter for the Go programming language.
# Using static analysis, it finds bugs and performance issues, offers simplifications, and enforces style rules
.PHONY: staticcheck
staticcheck:
go install honnef.co/go/tools/cmd/staticcheck@latest
$$(go env GOPATH)/bin/staticcheck ./...
$(GOBIN) install honnef.co/go/tools/cmd/staticcheck@latest
$$($(GOBIN) env GOPATH)/bin/staticcheck ./...
.ONESHELL:
.PHONY: RPM
RPM: scripts/cc-metric-collector.spec
@WORKSPACE="$${PWD}"
@SPECFILE="$${WORKSPACE}/scripts/cc-metric-collector.spec"
# Setup RPM build tree
@eval $$(rpm --eval "ARCH='%{_arch}' RPMDIR='%{_rpmdir}' SOURCEDIR='%{_sourcedir}' SPECDIR='%{_specdir}' SRPMDIR='%{_srcrpmdir}' BUILDDIR='%{_builddir}'")
@mkdir --parents --verbose "$${RPMDIR}" "$${SOURCEDIR}" "$${SPECDIR}" "$${SRPMDIR}" "$${BUILDDIR}"
# Create source tarball
@COMMITISH="HEAD"
@VERS=$$(git describe --tags $${COMMITISH})
@VERS=$${VERS#v}
@VERS=$$(echo $${VERS} | sed -e s+'-'+'_'+g)
@eval $$(rpmspec --query --queryformat "NAME='%{name}' VERSION='%{version}' RELEASE='%{release}' NVR='%{NVR}' NVRA='%{NVRA}'" --define="VERS $${VERS}" "$${SPECFILE}")
@PREFIX="$${NAME}-$${VERSION}"
@FORMAT="tar.gz"
@SRCFILE="$${SOURCEDIR}/$${PREFIX}.$${FORMAT}"
@git archive --verbose --format "$${FORMAT}" --prefix="$${PREFIX}/" --output="$${SRCFILE}" $${COMMITISH}
# Build RPM and SRPM
@rpmbuild -ba --define="VERS $${VERS}" --rmsource --clean "$${SPECFILE}"
# Report RPMs and SRPMs when in GitHub Workflow
@if [[ "$${GITHUB_ACTIONS}" == true ]]; then
@ RPMFILE="$${RPMDIR}/$${ARCH}/$${NVRA}.rpm"
@ SRPMFILE="$${SRPMDIR}/$${NVR}.src.rpm"
@ echo "SRPM=$${SRPMFILE}" >> $${GITHUB_OUTPUT}
@ echo "RPM=$${RPMFILE}" >> $${GITHUB_OUTPUT}
@fi
.PHONY: DEB
DEB: scripts/cc-metric-collector.deb.control $(APP)
@BASEDIR=$${PWD}
@WORKSPACE=$${PWD}/.dpkgbuild
@DEBIANDIR=$${WORKSPACE}/debian
@DEBIANBINDIR=$${WORKSPACE}/DEBIAN
@mkdir --parents --verbose $${WORKSPACE} $${DEBIANBINDIR}
#@mkdir --parents --verbose $$DEBIANDIR
@CONTROLFILE="$${BASEDIR}/scripts/cc-metric-collector.deb.control"
@COMMITISH="HEAD"
@VERS=$$(git describe --tags --abbrev=0 $${COMMITISH})
@if [ -z "$${VERS}" ]; then VERS=${GITHUB_REF_NAME}; fi
@VERS=$${VERS#v}
@ARCH=$$(uname -m)
@ARCH=$$(echo $${ARCH} | sed -e s+'_'+'-'+g)
@if [ "$${ARCH}" = "x86-64" ]; then ARCH=amd64; fi
@PREFIX="$${NAME}-$${VERSION}_$${ARCH}"
@SIZE_BYTES=$$(du -bcs --exclude=.dpkgbuild "$${WORKSPACE}"/ | awk '{print $$1}' | head -1 | sed -e 's/^0\+//')
@SIZE="$$(awk -v size="$${SIZE_BYTES}" 'BEGIN {print (size/1024)+1}' | awk '{print int($$0)}')"
@sed -e s+"{VERSION}"+"$${VERS}"+g -e s+"{INSTALLED_SIZE}"+"$${SIZE}"+g -e s+"{ARCH}"+"$${ARCH}"+g $${CONTROLFILE} > $${DEBIANBINDIR}/control
@make PREFIX=$${WORKSPACE} install
@DEB_FILE="cc-metric-collector_$${VERS}_$${ARCH}.deb"
@dpkg-deb -b $${WORKSPACE} "$${DEB_FILE}"
@if [ "$${GITHUB_ACTIONS}" = "true" ]; then
@ echo "DEB=$${DEB_FILE}" >> $${GITHUB_OUTPUT}
@fi
@rm -r "$${WORKSPACE}"

View File

@@ -1,5 +1,6 @@
# cc-metric-collector
A node agent for measuring, processing and forwarding node level metrics. It is part of the ClusterCockpit ecosystem.
A node agent for measuring, processing and forwarding node level metrics. It is part of the [ClusterCockpit ecosystem](./docs/introduction.md).
The metric collector sends (and receives) metrics in the [InfluxDB line protocol](https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/), which provides flexibility while separating tags (like index columns in relational databases) from fields (like data columns).
@@ -7,10 +8,14 @@ There is a single timer loop that triggers all collectors serially, collects the
The receiver runs as a go routine side-by-side with the timer loop and asynchronously forwards received metrics to the sink.
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7438287.svg)](https://doi.org/10.5281/zenodo.7438287)
# Configuration
Configuration is implemented using a single JSON document that is distributed over the network and may be persisted as a file.
Supported metrics are documented [here](https://github.com/ClusterCockpit/cc-specifications/blob/master/metrics/lineprotocol_alternative.md).
Supported metrics are documented [here](https://github.com/ClusterCockpit/cc-specifications/blob/master/interfaces/lineprotocol/README.md).
There is a main configuration file with basic settings that point to the other configuration files for the different components.
@@ -20,20 +25,20 @@ There is a main configuration file with basic settings that point to the other c
"collectors" : "collectors.json",
"receivers" : "receivers.json",
"router" : "router.json",
"interval": 10,
"duration": 1
"interval": "10s",
"duration": "1s"
}
```
The `interval` defines how often the metrics should be read and sent to the sink. The `duration` tells collectors how long one measurement may take. This is important for some collectors, like the `likwid` collector.
The `interval` defines how often the metrics should be read and sent to the sink. The `duration` tells collectors how long one measurement may take. This is important for some collectors, like the `likwid` collector. For more information, see [here](./docs/configuration.md).
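With the switch to duration strings like `"10s"`, such values can be parsed with Go's `time.ParseDuration`. A small, hypothetical sketch of the checks described here (illustrative names, not the project's actual code):
```go
package main

import (
	"fmt"
	"time"
)

// parseTimings parses and validates the "interval" and "duration" strings
// from the main configuration (sketch only).
func parseTimings(interval, duration string) (time.Duration, time.Duration, error) {
	i, err := time.ParseDuration(interval) // accepts "10s", "5m", ...
	if err != nil {
		return 0, 0, fmt.Errorf("invalid interval %q: %w", interval, err)
	}
	d, err := time.ParseDuration(duration)
	if err != nil {
		return 0, 0, fmt.Errorf("invalid duration %q: %w", duration, err)
	}
	if i <= 0 || d <= 0 || d > i {
		return 0, 0, fmt.Errorf("need 0 < duration <= interval, got %s and %s", duration, interval)
	}
	return i, d, nil
}

func main() {
	interval, duration, err := parseTimings("10s", "1s")
	fmt.Println(interval, duration, err) // 10s 1s <nil>
}
```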
See the component READMEs for their configuration:
* [`collectors`](./collectors/README.md)
* [`sinks`](./sinks/README.md)
* [`receivers`](./receivers/README.md)
* [`router`](./internal/metricRouter/README.md)
# Installation
```
@@ -43,6 +48,7 @@ $ go get (requires at least golang 1.16)
$ make
```
For more information, see [here](./docs/building.md).
# Running
@@ -57,13 +63,49 @@ Usage of metric-collector:
Run all collectors only once
```
# Scenarios
The metric collector was designed with flexibility in mind, so it can be used in many scenarios. Here are a few:
```mermaid
flowchart TD
subgraph a ["Cluster A"]
nodeA[NodeA with CC collector]
nodeB[NodeB with CC collector]
nodeC[NodeC with CC collector]
end
a --> db[(Database)]
db <--> ccweb("Webfrontend")
```
``` mermaid
flowchart TD
subgraph a [ClusterA]
direction LR
nodeA[NodeA with CC collector]
nodeB[NodeB with CC collector]
nodeC[NodeC with CC collector]
end
subgraph b [ClusterB]
direction LR
nodeD[NodeD with CC collector]
nodeE[NodeE with CC collector]
nodeF[NodeF with CC collector]
end
a --> ccrecv{"CC collector as receiver"}
b --> ccrecv
ccrecv --> db[("Database1")]
ccrecv -.-> db2[("Database2")]
db <-.-> ccweb("Webfrontend")
```
# Contributing
The ClusterCockpit ecosystem is designed to be used by different HPC computing centers. Since configurations and setups differ between the centers, the centers likely have to put some work into the cc-metric-collector to gather all desired metrics.
You are free to open an issue to request a collector but we would also be happy about PRs.
# Contact
* [Matrix.org ClusterCockpit General chat](https://matrix.to/#/#clustercockpit-dev:matrix.org)
* [Matrix.org ClusterCockpit Development chat](https://matrix.to/#/#clustercockpit:matrix.org)

View File

@@ -7,39 +7,24 @@ import (
"os/signal"
"syscall"
"github.com/ClusterCockpit/cc-lib/receivers"
"github.com/ClusterCockpit/cc-lib/sinks"
"github.com/ClusterCockpit/cc-metric-collector/collectors"
"github.com/ClusterCockpit/cc-metric-collector/receivers"
"github.com/ClusterCockpit/cc-metric-collector/sinks"
// "strings"
"sync"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
ccconf "github.com/ClusterCockpit/cc-lib/ccConfig"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
mr "github.com/ClusterCockpit/cc-metric-collector/internal/metricRouter"
mct "github.com/ClusterCockpit/cc-metric-collector/internal/multiChanTicker"
mct "github.com/ClusterCockpit/cc-metric-collector/pkg/multiChanTicker"
)
type CentralConfigFile struct {
Interval int `json:"interval"`
Duration int `json:"duration"`
CollectorConfigFile string `json:"collectors"`
RouterConfigFile string `json:"router"`
SinkConfigFile string `json:"sinks"`
ReceiverConfigFile string `json:"receivers,omitempty"`
}
func LoadCentralConfiguration(file string, config *CentralConfigFile) error {
configFile, err := os.Open(file)
if err != nil {
cclog.Error(err.Error())
return err
}
defer configFile.Close()
jsonParser := json.NewDecoder(configFile)
err = jsonParser.Decode(config)
return err
Interval string `json:"interval"`
Duration string `json:"duration"`
}
type RuntimeConfig struct {
@@ -54,7 +39,7 @@ type RuntimeConfig struct {
ReceiveManager receivers.ReceiveManager
MultiChanTicker mct.MultiChanTicker
Channels []chan lp.CCMetric
Channels []chan lp.CCMessage
Sync sync.WaitGroup
}
@@ -87,7 +72,7 @@ func ReadCli() map[string]string {
cfg := flag.String("config", "./config.json", "Path to configuration file")
logfile := flag.String("log", "stderr", "Path for logfile")
once := flag.Bool("once", false, "Run all collectors only once")
debug := flag.Bool("debug", false, "Activate debug output")
loglevel := flag.String("loglevel", "info", "Set log level")
flag.Parse()
m = make(map[string]string)
m["configfile"] = *cfg
@@ -97,12 +82,7 @@ func ReadCli() map[string]string {
} else {
m["once"] = "false"
}
if *debug {
m["debug"] = "true"
cclog.SetDebug()
} else {
m["debug"] = "false"
}
m["loglevel"] = *loglevel
return m
}
@@ -167,87 +147,118 @@ func mainFunc() int {
CliArgs: ReadCli(),
}
// Set loglevel based on command line input.
cclog.Init(rcfg.CliArgs["loglevel"], false)
// Init ccConfig with configuration file
ccconf.Init(rcfg.CliArgs["configfile"])
// Load and check configuration
err = LoadCentralConfiguration(rcfg.CliArgs["configfile"], &rcfg.ConfigFile)
main := ccconf.GetPackageConfig("main")
err = json.Unmarshal(main, &rcfg.ConfigFile)
if err != nil {
cclog.Error("Error reading configuration file ", rcfg.CliArgs["configfile"], ": ", err.Error())
return 1
}
if rcfg.ConfigFile.Interval <= 0 || time.Duration(rcfg.ConfigFile.Interval)*time.Second <= 0 {
cclog.Error("Configuration value 'interval' must be greater than zero")
return 1
}
rcfg.Interval = time.Duration(rcfg.ConfigFile.Interval) * time.Second
if rcfg.ConfigFile.Duration <= 0 || time.Duration(rcfg.ConfigFile.Duration)*time.Second <= 0 {
cclog.Error("Configuration value 'duration' must be greater than zero")
return 1
}
rcfg.Duration = time.Duration(rcfg.ConfigFile.Duration) * time.Second
if len(rcfg.ConfigFile.RouterConfigFile) == 0 {
// Properly use duration parser with inputs like '60s', '5m' or similar
if len(rcfg.ConfigFile.Interval) > 0 {
t, err := time.ParseDuration(rcfg.ConfigFile.Interval)
if err != nil {
cclog.Error("Configuration value 'interval' no valid duration")
}
rcfg.Interval = t
if rcfg.Interval == 0 {
cclog.Error("Configuration value 'interval' must be greater than zero")
return 1
}
}
// Properly use duration parser with inputs like '60s', '5m' or similar
if len(rcfg.ConfigFile.Duration) > 0 {
t, err := time.ParseDuration(rcfg.ConfigFile.Duration)
if err != nil {
cclog.Error("Configuration value 'duration' no valid duration")
}
rcfg.Duration = t
if rcfg.Duration == 0 {
cclog.Error("Configuration value 'duration' must be greater than zero")
return 1
}
}
if rcfg.Duration > rcfg.Interval {
cclog.Error("The interval should be greater than duration")
return 1
}
routerConf := ccconf.GetPackageConfig("router")
if len(routerConf) == 0 {
cclog.Error("Metric router configuration file must be set")
return 1
}
if len(rcfg.ConfigFile.SinkConfigFile) == 0 {
sinkConf := ccconf.GetPackageConfig("sinks")
if len(sinkConf) == 0 {
cclog.Error("Sink configuration file must be set")
return 1
}
if len(rcfg.ConfigFile.CollectorConfigFile) == 0 {
collectorConf := ccconf.GetPackageConfig("collectors")
if len(collectorConf) == 0 {
cclog.Error("Metric collector configuration file must be set")
return 1
}
// Set log file
if logfile := rcfg.CliArgs["logfile"]; logfile != "stderr" {
cclog.SetOutput(logfile)
}
// if logfile := rcfg.CliArgs["logfile"]; logfile != "stderr" {
// cclog.SetOutput(logfile)
// }
// Creat new multi channel ticker
rcfg.MultiChanTicker = mct.NewTicker(rcfg.Interval)
// Create new metric router
rcfg.MetricRouter, err = mr.New(rcfg.MultiChanTicker, &rcfg.Sync, rcfg.ConfigFile.RouterConfigFile)
rcfg.MetricRouter, err = mr.New(rcfg.MultiChanTicker, &rcfg.Sync, routerConf)
if err != nil {
cclog.Error(err.Error())
return 1
}
// Create new sink
rcfg.SinkManager, err = sinks.New(&rcfg.Sync, rcfg.ConfigFile.SinkConfigFile)
rcfg.SinkManager, err = sinks.New(&rcfg.Sync, sinkConf)
if err != nil {
cclog.Error(err.Error())
return 1
}
// Connect metric router to sink manager
RouterToSinksChannel := make(chan lp.CCMetric, 200)
RouterToSinksChannel := make(chan lp.CCMessage, 200)
rcfg.SinkManager.AddInput(RouterToSinksChannel)
rcfg.MetricRouter.AddOutput(RouterToSinksChannel)
// Create new collector manager
rcfg.CollectManager, err = collectors.New(rcfg.MultiChanTicker, rcfg.Duration, &rcfg.Sync, rcfg.ConfigFile.CollectorConfigFile)
rcfg.CollectManager, err = collectors.New(rcfg.MultiChanTicker, rcfg.Duration, &rcfg.Sync, collectorConf)
if err != nil {
cclog.Error(err.Error())
return 1
}
// Connect collector manager to metric router
CollectToRouterChannel := make(chan lp.CCMetric, 200)
CollectToRouterChannel := make(chan lp.CCMessage, 200)
rcfg.CollectManager.AddOutput(CollectToRouterChannel)
rcfg.MetricRouter.AddCollectorInput(CollectToRouterChannel)
// Create new receive manager
if len(rcfg.ConfigFile.ReceiverConfigFile) > 0 {
rcfg.ReceiveManager, err = receivers.New(&rcfg.Sync, rcfg.ConfigFile.ReceiverConfigFile)
receiveConf := ccconf.GetPackageConfig("receivers")
if len(receiveConf) > 0 {
rcfg.ReceiveManager, err = receivers.New(&rcfg.Sync, receiveConf)
if err != nil {
cclog.Error(err.Error())
return 1
}
// Connect receive manager to metric router
ReceiveToRouterChannel := make(chan lp.CCMetric, 200)
ReceiveToRouterChannel := make(chan lp.CCMessage, 200)
rcfg.ReceiveManager.AddOutput(ReceiveToRouterChannel)
rcfg.MetricRouter.AddReceiverInput(ReceiveToRouterChannel)
use_recv = true
@@ -271,7 +282,7 @@ func mainFunc() int {
// Wait until one tick has passed. This is a workaround
if rcfg.CliArgs["once"] == "true" {
x := 1.2 * float64(rcfg.ConfigFile.Interval)
x := 1.2 * float64(rcfg.Interval.Seconds())
time.Sleep(time.Duration(int(x)) * time.Second)
shutdownSignal <- os.Interrupt
}

View File

@@ -12,6 +12,13 @@
"proc_total"
]
},
"memstat": {},
"netstat": {
"include_devices": [
"enp5s0"
],
"send_derived_values": true
},
"numastats": {},
"nvidia": {},
"tempstat": {
@@ -27,5 +34,8 @@
"type-id": "1"
}
}
},
"topprocs": {
"num_procs": 5
}
}

View File

@@ -1,82 +1,33 @@
# Use central installation
CENTRAL_INSTALL = false
# How to access hardware performance counters through LIKWID.
# Recommended is 'direct' mode
ACCESSMODE = direct
# LIKWID version
LIKWID_VERSION := 5.4.1
LIKWID_INSTALLED_FOLDER := $(shell dirname $$(which likwid-topology 2>/dev/null) 2>/dev/null)
#######################################################################
# if CENTRAL_INSTALL == true
#######################################################################
# Path to central installation (if CENTRAL_INSTALL=true)
LIKWID_BASE=/apps/likwid/5.2.1
# LIKWID version (should be same major version as central installation, 5.2.x)
LIKWID_VERSION = 5.2.1
LIKWID_FOLDER := $(CURDIR)/likwid
#######################################################################
# if CENTRAL_INSTALL == false and ACCESSMODE == accessdaemon
#######################################################################
# Where to install the accessdaemon
DAEMON_INSTALLDIR = /usr/local
# Which user to use for the accessdaemon
DAEMON_USER = root
# Which group to use for the accessdaemon
DAEMON_GROUP = root
all: likwid
.ONESHELL:
.PHONY: likwid
likwid:
if [ -n "$(LIKWID_INSTALLED_FOLDER)" ]; then
# Using likwid include files from system installation
INCLUDE_DIR="$(LIKWID_INSTALLED_FOLDER)/../include"
mkdir --parents --verbose "$(LIKWID_FOLDER)"
cp "$${INCLUDE_DIR}"/*.h "$(LIKWID_FOLDER)"
else
# Using likwid include files from downloaded tar archive
if [ -d "$(LIKWID_FOLDER)" ]; then
rm --recursive "$(LIKWID_FOLDER)"
fi
BUILD_FOLDER="$${PWD}/likwidbuild"
mkdir --parents --verbose "$${BUILD_FOLDER}"
wget --output-document=- http://ftp.rrze.uni-erlangen.de/mirrors/likwid/likwid-$(LIKWID_VERSION).tar.gz |
tar --directory="$${BUILD_FOLDER}" --extract --gz
install -D --verbose --preserve-timestamps --mode=0644 --target-directory="$(LIKWID_FOLDER)" "$${BUILD_FOLDER}/likwid-$(LIKWID_VERSION)/src/includes"/likwid*.h
rm --recursive "$${BUILD_FOLDER}"
fi
#################################################
# No need to change anything below this line
#################################################
INSTALL_FOLDER = ./likwid
BUILD_FOLDER = ./likwid/build
ifneq ($(strip $(CENTRAL_INSTALL)),true)
LIKWID_BASE := $(shell pwd)/$(INSTALL_FOLDER)
DAEMON_BASE := $(LIKWID_BASE)
GROUPS_BASE := $(LIKWID_BASE)/groups
all: $(INSTALL_FOLDER)/liblikwid.a cleanup
else
DAEMON_BASE= $(LIKWID_BASE)/sbin
all: $(INSTALL_FOLDER)/liblikwid.a cleanup
endif
$(BUILD_FOLDER)/likwid-$(LIKWID_VERSION).tar.gz: $(BUILD_FOLDER)
wget -P $(BUILD_FOLDER) ftp://ftp.rrze.uni-erlangen.de/mirrors/likwid/likwid-$(LIKWID_VERSION).tar.gz
$(BUILD_FOLDER):
mkdir -p $(BUILD_FOLDER)
$(INSTALL_FOLDER):
mkdir -p $(INSTALL_FOLDER)
$(BUILD_FOLDER)/likwid-$(LIKWID_VERSION): $(BUILD_FOLDER)/likwid-$(LIKWID_VERSION).tar.gz
tar -C $(BUILD_FOLDER) -xf $(BUILD_FOLDER)/likwid-$(LIKWID_VERSION).tar.gz
$(INSTALL_FOLDER)/liblikwid.a: $(BUILD_FOLDER)/likwid-$(LIKWID_VERSION) $(INSTALL_FOLDER)
sed -i -e s+"PREFIX ?= .*"+"PREFIX = $(LIKWID_BASE)"+g \
-e s+"SHARED_LIBRARY = .*"+"SHARED_LIBRARY = false"+g \
-e s+"ACCESSMODE = .*"+"ACCESSMODE = $(ACCESSMODE)"+g \
-e s+"INSTALLED_ACCESSDAEMON = .*"+"INSTALLED_ACCESSDAEMON = $(DAEMON_INSTALLDIR)/likwid-accessD"+g \
$(BUILD_FOLDER)/likwid-$(LIKWID_VERSION)/config.mk
cd $(BUILD_FOLDER)/likwid-$(LIKWID_VERSION) && make
cp $(BUILD_FOLDER)/likwid-$(LIKWID_VERSION)/liblikwid.a $(INSTALL_FOLDER)
cp $(BUILD_FOLDER)/likwid-$(LIKWID_VERSION)/ext/hwloc/liblikwid-hwloc.a $(INSTALL_FOLDER)
cp $(BUILD_FOLDER)/likwid-$(LIKWID_VERSION)/src/includes/likwid*.h $(INSTALL_FOLDER)
cp $(BUILD_FOLDER)/likwid-$(LIKWID_VERSION)/src/includes/bstrlib.h $(INSTALL_FOLDER)
$(DAEMON_INSTALLDIR)/likwid-accessD: $(BUILD_FOLDER)/likwid-$(LIKWID_VERSION)/likwid-accessD
sudo -u $(DAEMON_USER) -g $(DAEMON_GROUP) install -m 4775 $(BUILD_FOLDER)/likwid-$(LIKWID_VERSION)/likwid-accessD $(DAEMON_INSTALLDIR)/likwid-accessD
prepare_collector: likwidMetric.go
cp likwidMetric.go likwidMetric.go.orig
sed -i -e s+"const GROUPPATH =.*"+"const GROUPPATH = \`$(GROUPS_BASE)\`"+g likwidMetric.go
cleanup:
rm -rf $(BUILD_FOLDER)
clean: cleanup
rm -rf likwid
.PHONY: clean
clean:
rm -rf likwid

View File

@@ -35,8 +35,11 @@ In contrast to the configuration files for sinks and receivers, the collectors c
* [`nfs4stat`](./nfs4Metric.md)
* [`cpufreq`](./cpufreqMetric.md)
* [`cpufreq_cpuinfo`](./cpufreqCpuinfoMetric.md)
* [`numastat`](./numastatMetric.md)
* [`numastats`](./numastatsMetric.md)
* [`gpfs`](./gpfsMetric.md)
* [`beegfs_meta`](./beegfsmetaMetric.md)
* [`beegfs_storage`](./beegfsstorageMetric.md)
* [`rocm_smi`](./rocmsmiMetric.md)
## Todos
@@ -48,7 +51,7 @@ A collector reads data from any source, parses it to metrics and submits these m
* `Name() string`: Return the name of the collector
* `Init(config json.RawMessage) error`: Initializes the collector using the given collector-specific config in JSON. Check if needed files/commands exist, ...
* `Initialized() bool`: Check if a collector is successfully initialized
* `Read(duration time.Duration, output chan ccMetric.CCMetric)`: Read, parse and submit data to the `output` channel as [`CCMetric`](../internal/ccMetric/README.md). If the collector has to measure anything for some duration, use the provided function argument `duration`.
* `Close()`: Closes down the collector.
It is recommended to call `setup()` in the `Init()` function.
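For orientation, a minimal, illustrative sketch of a custom collector following this interface (the names are invented; `Name()` and `Initialized()` are assumed to come from the embedded `metricCollector` helper, as in the collectors shown in this changeset):
```go
package collectors

import (
	"encoding/json"
	"time"

	lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)

// SampleCollector is an illustrative skeleton, not part of the repository.
type SampleCollector struct {
	metricCollector // provides name, init, meta, parallel, setup(), Name(), Initialized(), ...
	config          struct {
		ExcludeMetrics []string `json:"exclude_metrics,omitempty"`
	}
}

func (m *SampleCollector) Init(config json.RawMessage) error {
	if m.init {
		return nil
	}
	m.name = "SampleCollector"
	m.setup() // recommended, see above
	m.parallel = true
	m.meta = map[string]string{"source": m.name, "group": "Sample"}
	if len(config) > 0 {
		if err := json.Unmarshal(config, &m.config); err != nil {
			return err
		}
	}
	m.init = true
	return nil
}

func (m *SampleCollector) Read(interval time.Duration, output chan lp.CCMessage) {
	if !m.init {
		return
	}
	// Gather a value here; a constant stands in for real measurement code.
	if y, err := lp.NewMessage("sample_metric",
		map[string]string{"type": "node"}, m.meta,
		map[string]interface{}{"value": 42.0}, time.Now()); err == nil {
		output <- y
	}
}

func (m *SampleCollector) Close() {
	m.init = false
}
```
A new collector also has to be registered under a name in the `AvailableCollectors` map (see `collectorManager.go` further down in this changeset) before it can be selected in `collectors.json`.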

View File

@@ -0,0 +1,230 @@
package collectors
import (
"bufio"
"bytes"
"encoding/json"
"fmt"
"io"
"os"
"os/exec"
"os/user"
"regexp"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
const DEFAULT_BEEGFS_CMD = "beegfs-ctl"
// Struct for the collector-specific JSON config
type BeegfsMetaCollectorConfig struct {
Beegfs string `json:"beegfs_path"`
ExcludeMetrics []string `json:"exclude_metrics,omitempty"`
ExcludeFilesystem []string `json:"exclude_filesystem"`
}
type BeegfsMetaCollector struct {
metricCollector
tags map[string]string
matches map[string]string
config BeegfsMetaCollectorConfig
skipFS map[string]struct{}
}
func (m *BeegfsMetaCollector) Init(config json.RawMessage) error {
// Check if already initialized
if m.init {
return nil
}
// Metrics
var nodeMdstat_array = [39]string{
"sum", "ack", "close", "entInf",
"fndOwn", "mkdir", "create", "rddir",
"refrEn", "mdsInf", "rmdir", "rmLnk",
"mvDirIns", "mvFiIns", "open", "ren",
"sChDrct", "sAttr", "sDirPat", "stat",
"statfs", "trunc", "symlnk", "unlnk",
"lookLI", "statLI", "revalLI", "openLI",
"createLI", "hardlnk", "flckAp", "flckEn",
"flckRg", "dirparent", "listXA", "getXA",
"rmXA", "setXA", "mirror"}
m.name = "BeegfsMetaCollector"
m.setup()
m.parallel = true
// Set default beegfs-ctl binary
m.config.Beegfs = DEFAULT_BEEGFS_CMD
// Read JSON configuration
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
if err != nil {
return err
}
}
//create map with possible variables
m.matches = make(map[string]string)
for _, value := range nodeMdstat_array {
_, skip := stringArrayContains(m.config.ExcludeMetrics, value)
if skip {
m.matches["other"] = "0"
} else {
m.matches["beegfs_cmeta_"+value] = "0"
}
}
m.meta = map[string]string{
"source": m.name,
"group": "BeegfsMeta",
}
m.tags = map[string]string{
"type": "node",
"filesystem": "",
}
m.skipFS = make(map[string]struct{})
for _, fs := range m.config.ExcludeFilesystem {
m.skipFS[fs] = struct{}{}
}
// Beegfs file system statistics can only be queried by user root
user, err := user.Current()
if err != nil {
return fmt.Errorf("BeegfsMetaCollector.Init(): Failed to get current user: %v", err)
}
if user.Uid != "0" {
return fmt.Errorf("BeegfsMetaCollector.Init(): BeeGFS file system statistics can only be queried by user root")
}
// Check if beegfs-ctl is in executable search path
_, err = exec.LookPath(m.config.Beegfs)
if err != nil {
return fmt.Errorf("BeegfsMetaCollector.Init(): Failed to find beegfs-ctl binary '%s': %v", m.config.Beegfs, err)
}
m.init = true
return nil
}
func (m *BeegfsMetaCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
// get mountpoint
buffer, _ := os.ReadFile(string("/proc/mounts"))
mounts := strings.Split(string(buffer), "\n")
var mountpoints []string
for _, line := range mounts {
if len(line) == 0 {
continue
}
f := strings.Fields(line)
if strings.Contains(f[0], "beegfs_ondemand") {
// Skip excluded filesystems
if _, skip := m.skipFS[f[1]]; skip {
continue
}
mountpoints = append(mountpoints, f[1])
}
}
if len(mountpoints) == 0 {
return
}
for _, mountpoint := range mountpoints {
m.tags["filesystem"] = mountpoint
// beegfs-ctl:
// --clientstats: Show client IO statistics.
// --nodetype=meta: The node type to query (meta, storage).
// --interval:
// --mount=/mnt/beeond/: Which mount point
//cmd := exec.Command(m.config.Beegfs, "/root/mc/test.txt")
mountoption := "--mount=" + mountpoint
cmd := exec.Command(m.config.Beegfs, "--clientstats",
"--nodetype=meta", mountoption, "--allstats")
cmd.Stdin = strings.NewReader("\n")
cmdStdout := new(bytes.Buffer)
cmdStderr := new(bytes.Buffer)
cmd.Stdout = cmdStdout
cmd.Stderr = cmdStderr
err := cmd.Run()
if err != nil {
fmt.Fprintf(os.Stderr, "BeegfsMetaCollector.Read(): Failed to execute command \"%s\": %s\n", cmd.String(), err.Error())
fmt.Fprintf(os.Stderr, "BeegfsMetaCollector.Read(): command exit code: \"%d\"\n", cmd.ProcessState.ExitCode())
data, _ := io.ReadAll(cmdStderr)
fmt.Fprintf(os.Stderr, "BeegfsMetaCollector.Read(): command stderr: \"%s\"\n", string(data))
data, _ = io.ReadAll(cmdStdout)
fmt.Fprintf(os.Stderr, "BeegfsMetaCollector.Read(): command stdout: \"%s\"\n", string(data))
return
}
// Read I/O statistics
scanner := bufio.NewScanner(cmdStdout)
sumLine := regexp.MustCompile(`^Sum:\s+\d+\s+\[[a-zA-Z]+\]+`)
//Line := regexp.MustCompile(`^(.*)\s+(\d)+\s+\[([a-zA-Z]+)\]+`)
statsLine := regexp.MustCompile(`^(.*?)\s+?(\d.*?)$`)
singleSpacePattern := regexp.MustCompile(`\s+`)
removePattern := regexp.MustCompile(`[\[|\]]`)
for scanner.Scan() {
readLine := scanner.Text()
//fmt.Println(readLine)
// Skip lines; we only want the I/O stats from the nodes
if !sumLine.MatchString(readLine) {
continue
}
match := statsLine.FindStringSubmatch(readLine)
// nodeName = "Sum:" or would be nodes
// nodeName := match[1]
//Remove multiple whitespaces
dummy := removePattern.ReplaceAllString(match[2], " ")
metaStats := strings.TrimSpace(singleSpacePattern.ReplaceAllString(dummy, " "))
split := strings.Split(metaStats, " ")
// fill map with values
// split[i+1] = mdname
// split[i] = amount of md operations
for i := 0; i <= len(split)-1; i += 2 {
if _, ok := m.matches[split[i+1]]; ok {
m.matches["beegfs_cmeta_"+split[i+1]] = split[i]
} else {
f1, err := strconv.ParseFloat(m.matches["other"], 32)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Metric (other): Failed to convert str written '%s' to float: %v", m.matches["other"], err))
continue
}
f2, err := strconv.ParseFloat(split[i], 32)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Metric (other): Failed to convert str written '%s' to float: %v", m.matches["other"], err))
continue
}
//mdStat["other"] = fmt.Sprintf("%f", f1+f2)
m.matches["beegfs_cstorage_other"] = fmt.Sprintf("%f", f1+f2)
}
}
for key, data := range m.matches {
value, _ := strconv.ParseFloat(data, 32)
y, err := lp.NewMessage(key, m.tags, m.meta, map[string]interface{}{"value": value}, time.Now())
if err == nil {
output <- y
}
}
}
}
}
func (m *BeegfsMetaCollector) Close() {
m.init = false
}

View File

@@ -0,0 +1,75 @@
## `BeeGFS on Demand` collector
This collector reads BeeGFS on Demand (BeeOND) metadata client statistics.
```json
"beegfs_meta": {
"beegfs_path": "/usr/bin/beegfs-ctl",
"exclude_filesystem": [
"/mnt/ignore_me"
],
"exclude_metrics": [
"ack",
"entInf",
"fndOwn"
]
}
```
The `BeeGFS On Demand (BeeOND)` collector uses the `beegfs-ctl` command to read performance metrics for
BeeGFS filesystems.
The reported filesystems can be filtered with the `exclude_filesystem` option
in the configuration.
The path to the `beegfs-ctl` command can be configured with the `beegfs_path` option
in the configuration.
When using the `exclude_metrics` option, the excluded metrics are summed up and reported as `other`.
Important: The metric names listed below follow the BeeGFS naming. The collector prefixes them with `beegfs_cmeta_` (BeeGFS client metadata).
For example, the BeeGFS metric `open` becomes `beegfs_cmeta_open`.
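A tiny, self-contained sketch of this naming and aggregation rule (the metric names come from the list below; the numbers are made up):
```go
package main

import "fmt"

// Excluded metrics are summed into "other"; all remaining metrics get the
// beegfs_cmeta_ prefix (illustrative sketch, not the collector code).
func main() {
	excluded := map[string]bool{"ack": true, "entInf": true, "fndOwn": true}
	raw := map[string]float64{"open": 12, "ack": 3, "entInf": 5, "stat": 7}

	out := map[string]float64{}
	for name, v := range raw {
		if excluded[name] {
			out["beegfs_cmeta_other"] += v // excluded -> summed as "other"
			continue
		}
		out["beegfs_cmeta_"+name] = v
	}
	fmt.Println(out) // map[beegfs_cmeta_open:12 beegfs_cmeta_other:8 beegfs_cmeta_stat:7]
}
```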
Available Metrics:
* sum
* ack
* close
* entInf
* fndOwn
* mkdir
* create
* rddir
* refrEn
* mdsInf
* rmdir
* rmLnk
* mvDirIns
* mvFiIns
* open
* ren
* sChDrct
* sAttr
* sDirPat
* stat
* statfs
* trunc
* symlnk
* unlnk
* lookLI
* statLI
* revalLI
* openLI
* createLI
* hardlnk
* flckAp
* flckEn
* flckRg
* dirparent
* listXA
* getXA
* rmXA
* setXA
* mirror
The collector adds a `filesystem` tag to all metrics

View File

@@ -0,0 +1,222 @@
package collectors
import (
"bufio"
"bytes"
"encoding/json"
"fmt"
"io"
"os"
"os/exec"
"os/user"
"regexp"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
// Struct for the collector-specific JSON config
type BeegfsStorageCollectorConfig struct {
Beegfs string `json:"beegfs_path"`
ExcludeMetrics []string `json:"exclude_metrics,omitempty"`
ExcludeFilesystem []string `json:"exclude_filesystem"`
}
type BeegfsStorageCollector struct {
metricCollector
tags map[string]string
matches map[string]string
config BeegfsStorageCollectorConfig
skipFS map[string]struct{}
}
func (m *BeegfsStorageCollector) Init(config json.RawMessage) error {
// Check if already initialized
if m.init {
return nil
}
// Metrics
var storageStat_array = [18]string{
"sum", "ack", "sChDrct", "getFSize",
"sAttr", "statfs", "trunc", "close",
"fsync", "ops-rd", "MiB-rd/s", "ops-wr",
"MiB-wr/s", "gendbg", "hrtbeat", "remNode",
"storInf", "unlnk"}
m.name = "BeegfsStorageCollector"
m.setup()
m.parallel = true
// Set default beegfs-ctl binary
m.config.Beegfs = DEFAULT_BEEGFS_CMD
// Read JSON configuration
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
if err != nil {
return err
}
}
println(m.config.Beegfs)
//create map with possible variables
m.matches = make(map[string]string)
for _, value := range storageStat_array {
_, skip := stringArrayContains(m.config.ExcludeMetrics, value)
if skip {
m.matches["other"] = "0"
} else {
m.matches["beegfs_cstorage_"+value] = "0"
}
}
m.meta = map[string]string{
"source": m.name,
"group": "BeegfsStorage",
}
m.tags = map[string]string{
"type": "node",
"filesystem": "",
}
m.skipFS = make(map[string]struct{})
for _, fs := range m.config.ExcludeFilesystem {
m.skipFS[fs] = struct{}{}
}
// Beegfs file system statistics can only be queried by user root
user, err := user.Current()
if err != nil {
return fmt.Errorf("BeegfsStorageCollector.Init(): Failed to get current user: %v", err)
}
if user.Uid != "0" {
return fmt.Errorf("BeegfsStorageCollector.Init(): BeeGFS file system statistics can only be queried by user root")
}
// Check if beegfs-ctl is in executable search path
_, err = exec.LookPath(m.config.Beegfs)
if err != nil {
return fmt.Errorf("BeegfsStorageCollector.Init(): Failed to find beegfs-ctl binary '%s': %v", m.config.Beegfs, err)
}
m.init = true
return nil
}
func (m *BeegfsStorageCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
// get mountpoint
buffer, _ := os.ReadFile(string("/proc/mounts"))
mounts := strings.Split(string(buffer), "\n")
var mountpoints []string
for _, line := range mounts {
if len(line) == 0 {
continue
}
f := strings.Fields(line)
if strings.Contains(f[0], "beegfs_ondemand") {
// Skip excluded filesystems
if _, skip := m.skipFS[f[1]]; skip {
continue
}
mountpoints = append(mountpoints, f[1])
}
}
if len(mountpoints) == 0 {
return
}
// collects stats for each BeeGFS on Demand FS
for _, mountpoint := range mountpoints {
m.tags["filesystem"] = mountpoint
// beegfs-ctl:
// --clientstats: Show client IO statistics.
// --nodetype=storage: The node type to query (meta, storage).
// --interval:
// --mount=/mnt/beeond/: Which mount point
//cmd := exec.Command(m.config.Beegfs, "/root/mc/test.txt")
mountoption := "--mount=" + mountpoint
cmd := exec.Command(m.config.Beegfs, "--clientstats",
"--nodetype=storage", mountoption, "--allstats")
cmd.Stdin = strings.NewReader("\n")
cmdStdout := new(bytes.Buffer)
cmdStderr := new(bytes.Buffer)
cmd.Stdout = cmdStdout
cmd.Stderr = cmdStderr
err := cmd.Run()
if err != nil {
fmt.Fprintf(os.Stderr, "BeegfsStorageCollector.Read(): Failed to execute command \"%s\": %s\n", cmd.String(), err.Error())
fmt.Fprintf(os.Stderr, "BeegfsStorageCollector.Read(): command exit code: \"%d\"\n", cmd.ProcessState.ExitCode())
data, _ := io.ReadAll(cmdStderr)
fmt.Fprintf(os.Stderr, "BeegfsStorageCollector.Read(): command stderr: \"%s\"\n", string(data))
data, _ = io.ReadAll(cmdStdout)
fmt.Fprintf(os.Stderr, "BeegfsStorageCollector.Read(): command stdout: \"%s\"\n", string(data))
return
}
// Read I/O statistics
scanner := bufio.NewScanner(cmdStdout)
sumLine := regexp.MustCompile(`^Sum:\s+\d+\s+\[[a-zA-Z]+\]+`)
//Line := regexp.MustCompile(`^(.*)\s+(\d)+\s+\[([a-zA-Z]+)\]+`)
statsLine := regexp.MustCompile(`^(.*?)\s+?(\d.*?)$`)
singleSpacePattern := regexp.MustCompile(`\s+`)
removePattern := regexp.MustCompile(`[\[|\]]`)
for scanner.Scan() {
readLine := scanner.Text()
//fmt.Println(readLine)
// Skip lines; we only want the I/O stats from the nodes
if !sumLine.MatchString(readLine) {
continue
}
match := statsLine.FindStringSubmatch(readLine)
// nodeName = "Sum:" or would be nodes
// nodeName := match[1]
//Remove multiple whitespaces
dummy := removePattern.ReplaceAllString(match[2], " ")
metaStats := strings.TrimSpace(singleSpacePattern.ReplaceAllString(dummy, " "))
split := strings.Split(metaStats, " ")
// fill map with values
// split[i+1] = mdname
// split[i] = amount of operations
for i := 0; i <= len(split)-1; i += 2 {
if _, ok := m.matches[split[i+1]]; ok {
m.matches["beegfs_cstorage_"+split[i+1]] = split[i]
//m.matches[split[i+1]] = split[i]
} else {
f1, err := strconv.ParseFloat(m.matches["other"], 32)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Metric (other): Failed to convert str written '%s' to float: %v", m.matches["other"], err))
continue
}
f2, err := strconv.ParseFloat(split[i], 32)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Metric (other): Failed to convert str written '%s' to float: %v", m.matches["other"], err))
continue
}
m.matches["beegfs_cstorage_other"] = fmt.Sprintf("%f", f1+f2)
}
}
for key, data := range m.matches {
value, _ := strconv.ParseFloat(data, 32)
y, err := lp.NewMessage(key, m.tags, m.meta, map[string]interface{}{"value": value}, time.Now())
if err == nil {
output <- y
}
}
}
}
}
func (m *BeegfsStorageCollector) Close() {
m.init = false
}

View File

@@ -0,0 +1,55 @@
## `BeeGFS on Demand` collector
This collector reads BeeGFS on Demand (BeeOND) storage client statistics.
```json
"beegfs_storage": {
"beegfs_path": "/usr/bin/beegfs-ctl",
"exclude_filesystem": [
"/mnt/ignore_me"
],
"exclude_metrics": [
"ack",
"storInf",
"unlnk"
]
}
```
The `BeeGFS On Demand (BeeOND)` collector uses the `beegfs-ctl` command to read performance metrics for BeeGFS filesystems.
The reported filesystems can be filtered with the `exclude_filesystem` option
in the configuration.
The path to the `beegfs-ctl` command can be configured with the `beegfs_path` option
in the configuration.
When using the `exclude_metrics` option, the excluded metrics are summed up and reported as `other`.
Important: The metric names listed below follow the BeeGFS naming. The collector prefixes them with `beegfs_cstorage_` (BeeGFS client storage).
For example, the BeeGFS metric `open` becomes `beegfs_cstorage_open`.
Note: BeeGFS exposes many metrics. It often makes sense to exclude most of them; the excluded metrics are then summed up as `beegfs_cstorage_other`.
Available Metrics:
* "sum"
* "ack"
* "sChDrct"
* "getFSize"
* "sAttr"
* "statfs"
* "trunc"
* "close"
* "fsync"
* "ops-rd"
* "MiB-rd/s"
* "ops-wr"
* "MiB-wr/s"
* "endbg"
* "hrtbeat"
* "remNode"
* "storInf"
* "unlnk"
The collector adds a `filesystem` tag to all metrics
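The statistics come from parsing the `Sum:` lines of the `beegfs-ctl --clientstats` output. A standalone, purely illustrative sketch of that parsing step (the example input line is made up; the regular expressions mirror the collector code above):
```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Split a beegfs-ctl "Sum:" line into value/name pairs, roughly as the
// collector's Read() loop does (sketch with an assumed input format).
func main() {
	line := "Sum:       123 [open]     45 [close]      7 [statfs]"

	statsLine := regexp.MustCompile(`^(.*?)\s+?(\d.*?)$`)
	removeBrackets := regexp.MustCompile(`[\[|\]]`)
	singleSpace := regexp.MustCompile(`\s+`)

	match := statsLine.FindStringSubmatch(line)
	if len(match) < 3 {
		return
	}
	clean := strings.TrimSpace(singleSpace.ReplaceAllString(removeBrackets.ReplaceAllString(match[2], " "), " "))
	fields := strings.Split(clean, " ")
	for i := 0; i+1 < len(fields); i += 2 {
		fmt.Printf("beegfs_cstorage_%s = %s\n", fields[i+1], fields[i])
	}
	// Output:
	// beegfs_cstorage_open = 123
	// beegfs_cstorage_close = 45
	// beegfs_cstorage_statfs = 7
}
```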

View File

@@ -2,56 +2,64 @@ package collectors
import (
"encoding/json"
"os"
"sync"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
mct "github.com/ClusterCockpit/cc-metric-collector/internal/multiChanTicker"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
mct "github.com/ClusterCockpit/cc-metric-collector/pkg/multiChanTicker"
)
// Map of all available metric collectors
var AvailableCollectors = map[string]MetricCollector{
"likwid": new(LikwidCollector),
"loadavg": new(LoadavgCollector),
"memstat": new(MemstatCollector),
"netstat": new(NetstatCollector),
"ibstat": new(InfinibandCollector),
"ibstat_perfquery": new(InfinibandPerfQueryCollector),
"lustrestat": new(LustreCollector),
"cpustat": new(CpustatCollector),
"topprocs": new(TopProcsCollector),
"nvidia": new(NvidiaCollector),
"customcmd": new(CustomCmdCollector),
"iostat": new(IOstatCollector),
"diskstat": new(DiskstatCollector),
"tempstat": new(TempCollector),
"ipmistat": new(IpmiCollector),
"gpfs": new(GpfsCollector),
"cpufreq": new(CPUFreqCollector),
"cpufreq_cpuinfo": new(CPUFreqCpuInfoCollector),
"nfs3stat": new(Nfs3Collector),
"nfs4stat": new(Nfs4Collector),
"numastats": new(NUMAStatsCollector),
"likwid": new(LikwidCollector),
"loadavg": new(LoadavgCollector),
"memstat": new(MemstatCollector),
"netstat": new(NetstatCollector),
"ibstat": new(InfinibandCollector),
"lustrestat": new(LustreCollector),
"cpustat": new(CpustatCollector),
"topprocs": new(TopProcsCollector),
"nvidia": new(NvidiaCollector),
"customcmd": new(CustomCmdCollector),
"iostat": new(IOstatCollector),
"diskstat": new(DiskstatCollector),
"tempstat": new(TempCollector),
"ipmistat": new(IpmiCollector),
"gpfs": new(GpfsCollector),
"cpufreq": new(CPUFreqCollector),
"cpufreq_cpuinfo": new(CPUFreqCpuInfoCollector),
"nfs3stat": new(Nfs3Collector),
"nfs4stat": new(Nfs4Collector),
"numastats": new(NUMAStatsCollector),
"beegfs_meta": new(BeegfsMetaCollector),
"beegfs_storage": new(BeegfsStorageCollector),
"rapl": new(RAPLCollector),
"rocm_smi": new(RocmSmiCollector),
"self": new(SelfCollector),
"schedstat": new(SchedstatCollector),
"nfsiostat": new(NfsIOStatCollector),
}
// Metric collector manager data structure
type collectorManager struct {
collectors []MetricCollector // List of metric collectors to use
output chan lp.CCMetric // Output channels
done chan bool // channel to finish / stop metric collector manager
ticker mct.MultiChanTicker // periodically ticking once each interval
duration time.Duration // duration (for metrics that measure over a given duration)
wg *sync.WaitGroup // wait group for all goroutines in cc-metric-collector
config map[string]json.RawMessage // json encoded config for collector manager
collectors []MetricCollector // List of metric collectors to read in parallel
serial []MetricCollector // List of metric collectors to read serially
output chan lp.CCMessage // Output channels
done chan bool // channel to finish / stop metric collector manager
ticker mct.MultiChanTicker // periodically ticking once each interval
duration time.Duration // duration (for metrics that measure over a given duration)
wg *sync.WaitGroup // wait group for all goroutines in cc-metric-collector
config map[string]json.RawMessage // json encoded config for collector manager
collector_wg sync.WaitGroup // internally used wait group for the parallel reading of collector
parallel_run bool // Flag whether the collectors are currently read in parallel
}
// Metric collector manager access functions
type CollectorManager interface {
Init(ticker mct.MultiChanTicker, duration time.Duration, wg *sync.WaitGroup, collectConfigFile string) error
AddOutput(output chan lp.CCMetric)
Init(ticker mct.MultiChanTicker, duration time.Duration, wg *sync.WaitGroup, collectConfig json.RawMessage) error
AddOutput(output chan lp.CCMessage)
Start()
Close()
}
@@ -63,23 +71,16 @@ type CollectorManager interface {
// * ticker (from variable ticker)
// * configuration (read from config file in variable collectConfigFile)
// Initialization is done for all configured collectors
func (cm *collectorManager) Init(ticker mct.MultiChanTicker, duration time.Duration, wg *sync.WaitGroup, collectConfigFile string) error {
func (cm *collectorManager) Init(ticker mct.MultiChanTicker, duration time.Duration, wg *sync.WaitGroup, collectConfig json.RawMessage) error {
cm.collectors = make([]MetricCollector, 0)
cm.serial = make([]MetricCollector, 0)
cm.output = nil
cm.done = make(chan bool)
cm.wg = wg
cm.ticker = ticker
cm.duration = duration
// Read collector config file
configFile, err := os.Open(collectConfigFile)
if err != nil {
cclog.Error(err.Error())
return err
}
defer configFile.Close()
jsonParser := json.NewDecoder(configFile)
err = jsonParser.Decode(&cm.config)
err := json.Unmarshal(collectConfig, &cm.config)
if err != nil {
cclog.Error(err.Error())
return err
@@ -99,7 +100,11 @@ func (cm *collectorManager) Init(ticker mct.MultiChanTicker, duration time.Durat
continue
}
cclog.ComponentDebug("CollectorManager", "ADD COLLECTOR", collector.Name())
cm.collectors = append(cm.collectors, collector)
if collector.Parallel() {
cm.collectors = append(cm.collectors, collector)
} else {
cm.serial = append(cm.serial, collector)
}
}
return nil
}
@@ -115,6 +120,10 @@ func (cm *collectorManager) Start() {
// Collector manager is done
done := func() {
// close all metric collectors
if cm.parallel_run {
cm.collector_wg.Wait()
cm.parallel_run = false
}
for _, c := range cm.collectors {
c.Close()
}
@@ -129,7 +138,26 @@ func (cm *collectorManager) Start() {
done()
return
case t := <-tick:
cm.parallel_run = true
for _, c := range cm.collectors {
// Wait for done signal or execute the collector
select {
case <-cm.done:
done()
return
default:
// Read metrics from collector c via goroutine
cclog.ComponentDebug("CollectorManager", c.Name(), t)
cm.collector_wg.Add(1)
go func(myc MetricCollector) {
myc.Read(cm.duration, cm.output)
cm.collector_wg.Done()
}(c)
}
}
cm.collector_wg.Wait()
cm.parallel_run = false
for _, c := range cm.serial {
// Wait for done signal or execute the collector
select {
case <-cm.done:
@@ -150,7 +178,7 @@ func (cm *collectorManager) Start() {
}
// AddOutput adds the output channel to the metric collector manager
func (cm *collectorManager) AddOutput(output chan lp.CCMetric) {
func (cm *collectorManager) AddOutput(output chan lp.CCMessage) {
cm.output = output
}
@@ -163,9 +191,9 @@ func (cm *collectorManager) Close() {
}
// New creates a new initialized metric collector manager
func New(ticker mct.MultiChanTicker, duration time.Duration, wg *sync.WaitGroup, collectConfigFile string) (CollectorManager, error) {
func New(ticker mct.MultiChanTicker, duration time.Duration, wg *sync.WaitGroup, collectConfig json.RawMessage) (CollectorManager, error) {
cm := new(collectorManager)
err := cm.Init(ticker, duration, wg, collectConfigFile)
err := cm.Init(ticker, duration, wg, collectConfig)
if err != nil {
return nil, err
}

View File

@@ -10,33 +10,22 @@ import (
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
//
// CPUFreqCollector
// a metric collector to measure the current frequency of the CPUs
// as obtained from /proc/cpuinfo
// Only measure on the first hyperthread
//
type CPUFreqCpuInfoCollectorTopology struct {
processor string // logical processor number (continuous, starting at 0)
coreID string // socket local core ID
coreID_int int64
physicalPackageID string // socket / package ID
physicalPackageID_int int64
numPhysicalPackages string // number of sockets / packages
numPhysicalPackages_int int64
isHT bool
numNonHT string // number of non hyperthreading processors
numNonHT_int int64
tagSet map[string]string
isHT bool
tagSet map[string]string
}
type CPUFreqCpuInfoCollector struct {
metricCollector
topology []*CPUFreqCpuInfoCollectorTopology
topology []CPUFreqCpuInfoCollectorTopology
}
func (m *CPUFreqCpuInfoCollector) Init(config json.RawMessage) error {
@@ -48,6 +37,7 @@ func (m *CPUFreqCpuInfoCollector) Init(config json.RawMessage) error {
m.setup()
m.name = "CPUFreqCpuInfoCollector"
m.parallel = true
m.meta = map[string]string{
"source": m.name,
"group": "CPU",
@@ -57,18 +47,16 @@ func (m *CPUFreqCpuInfoCollector) Init(config json.RawMessage) error {
const cpuInfoFile = "/proc/cpuinfo"
file, err := os.Open(cpuInfoFile)
if err != nil {
return fmt.Errorf("Failed to open file '%s': %v", cpuInfoFile, err)
return fmt.Errorf("failed to open file '%s': %v", cpuInfoFile, err)
}
defer file.Close()
// Collect topology information from file cpuinfo
foundFreq := false
processor := ""
var numNonHT_int int64 = 0
coreID := ""
physicalPackageID := ""
var maxPhysicalPackageID int64 = 0
m.topology = make([]*CPUFreqCpuInfoCollectorTopology, 0)
m.topology = make([]CPUFreqCpuInfoCollectorTopology, 0)
coreSeenBefore := make(map[string]bool)
// Read cpuinfo file, line by line
@@ -97,41 +85,22 @@ func (m *CPUFreqCpuInfoCollector) Init(config json.RawMessage) error {
len(coreID) > 0 &&
len(physicalPackageID) > 0 {
topology := new(CPUFreqCpuInfoCollectorTopology)
// Processor
topology.processor = processor
// Core ID
topology.coreID = coreID
topology.coreID_int, err = strconv.ParseInt(coreID, 10, 64)
if err != nil {
return fmt.Errorf("Unable to convert coreID '%s' to int64: %v", coreID, err)
}
// Physical package ID
topology.physicalPackageID = physicalPackageID
topology.physicalPackageID_int, err = strconv.ParseInt(physicalPackageID, 10, 64)
if err != nil {
return fmt.Errorf("Unable to convert physicalPackageID '%s' to int64: %v", physicalPackageID, err)
}
// increase maximun socket / package ID, when required
if topology.physicalPackageID_int > maxPhysicalPackageID {
maxPhysicalPackageID = topology.physicalPackageID_int
}
// is hyperthread?
globalID := physicalPackageID + ":" + coreID
topology.isHT = coreSeenBefore[globalID]
coreSeenBefore[globalID] = true
if !topology.isHT {
// increase number on non hyper thread cores
numNonHT_int++
}
// store collected topology information
m.topology = append(m.topology, topology)
m.topology = append(m.topology,
CPUFreqCpuInfoCollectorTopology{
isHT: coreSeenBefore[globalID],
tagSet: map[string]string{
"type": "hwthread",
"type-id": processor,
"package_id": physicalPackageID,
},
},
)
// mark core as seen before
coreSeenBefore[globalID] = true
// reset topology information
foundFreq = false
@@ -141,26 +110,16 @@ func (m *CPUFreqCpuInfoCollector) Init(config json.RawMessage) error {
}
}
numPhysicalPackageID_int := maxPhysicalPackageID + 1
numPhysicalPackageID := fmt.Sprint(numPhysicalPackageID_int)
numNonHT := fmt.Sprint(numNonHT_int)
for _, t := range m.topology {
t.numPhysicalPackages = numPhysicalPackageID
t.numPhysicalPackages_int = numPhysicalPackageID_int
t.numNonHT = numNonHT
t.numNonHT_int = numNonHT_int
t.tagSet = map[string]string{
"type": "cpu",
"type-id": t.processor,
"package_id": t.physicalPackageID,
}
// Check if at least one CPU with frequency information was detected
if len(m.topology) == 0 {
return fmt.Errorf("no CPU frequency info found in %s", cpuInfoFile)
}
m.init = true
return nil
}
func (m *CPUFreqCpuInfoCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *CPUFreqCpuInfoCollector) Read(interval time.Duration, output chan lp.CCMessage) {
// Check if already initialized
if !m.init {
return
@@ -195,7 +154,7 @@ func (m *CPUFreqCpuInfoCollector) Read(interval time.Duration, output chan lp.CC
fmt.Sprintf("Read(): Failed to convert cpu MHz '%s' to float64: %v", lineSplit[1], err))
return
}
if y, err := lp.New("cpufreq", t.tagSet, m.meta, map[string]interface{}{"value": value}, now); err == nil {
if y, err := lp.NewMessage("cpufreq", t.tagSet, m.meta, map[string]interface{}{"value": value}, now); err == nil {
output <- y
}
}

View File

@@ -1,10 +1,11 @@
## `cpufreq_cpuinfo` collector
```json
"cpufreq_cpuinfo": {}
```
The `cpufreq_cpuinfo` collector reads the clock frequency from `/proc/cpuinfo` and outputs a handful of **cpu** metrics.
The `cpufreq_cpuinfo` collector reads the clock frequency from `/proc/cpuinfo` and outputs a handful of **hwthread** metrics.
Metrics:
* `cpufreq`
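For orientation, a standalone, purely illustrative sketch of the underlying `/proc/cpuinfo` parsing (this is not the collector itself; the printed format only loosely imitates the line protocol):
```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// Extract the "cpu MHz" value per processor from /proc/cpuinfo,
// roughly what the collector reports as `cpufreq` (sketch only).
func main() {
	file, err := os.Open("/proc/cpuinfo")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer file.Close()

	processor := ""
	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		parts := strings.SplitN(scanner.Text(), ":", 2)
		if len(parts) != 2 {
			continue
		}
		key, value := strings.TrimSpace(parts[0]), strings.TrimSpace(parts[1])
		switch key {
		case "processor":
			processor = value
		case "cpu MHz":
			if mhz, err := strconv.ParseFloat(value, 64); err == nil {
				fmt.Printf("cpufreq,type=hwthread,type-id=%s value=%f\n", processor, mhz)
			}
		}
	}
}
```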

View File

@@ -3,40 +3,29 @@ package collectors
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
"github.com/ClusterCockpit/cc-metric-collector/pkg/ccTopology"
"golang.org/x/sys/unix"
)
type CPUFreqCollectorTopology struct {
processor string // logical processor number (continuous, starting at 0)
coreID string // socket local core ID
coreID_int int64
physicalPackageID string // socket / package ID
physicalPackageID_int int64
numPhysicalPackages string // number of sockets / packages
numPhysicalPackages_int int64
isHT bool
numNonHT string // number of non hyperthreading processors
numNonHT_int int64
scalingCurFreqFile string
tagSet map[string]string
scalingCurFreqFile string
tagSet map[string]string
}
//
// CPUFreqCollector
// a metric collector to measure the current frequency of the CPUs
// as obtained from the hardware (in KHz)
// Only measure on the first hyper thread
// Only measure on the first hyper-thread
//
// See: https://www.kernel.org/doc/html/latest/admin-guide/pm/cpufreq.html
//
type CPUFreqCollector struct {
metricCollector
topology []CPUFreqCollectorTopology
@@ -53,6 +42,7 @@ func (m *CPUFreqCollector) Init(config json.RawMessage) error {
m.name = "CPUFreqCollector"
m.setup()
m.parallel = true
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
if err != nil {
@@ -62,116 +52,46 @@ func (m *CPUFreqCollector) Init(config json.RawMessage) error {
m.meta = map[string]string{
"source": m.name,
"group": "CPU",
"unit": "MHz",
"unit": "Hz",
}
// Loop for all CPU directories
baseDir := "/sys/devices/system/cpu"
globPattern := filepath.Join(baseDir, "cpu[0-9]*")
cpuDirs, err := filepath.Glob(globPattern)
if err != nil {
return fmt.Errorf("Unable to glob files with pattern '%s': %v", globPattern, err)
}
if cpuDirs == nil {
return fmt.Errorf("Unable to find any files with pattern '%s'", globPattern)
}
m.topology = make([]CPUFreqCollectorTopology, 0)
for _, c := range ccTopology.CpuData() {
// Initialize CPU topology
m.topology = make([]CPUFreqCollectorTopology, len(cpuDirs))
for _, cpuDir := range cpuDirs {
processor := strings.TrimPrefix(cpuDir, "/sys/devices/system/cpu/cpu")
processor_int, err := strconv.ParseInt(processor, 10, 64)
if err != nil {
return fmt.Errorf("Unable to convert cpuID '%s' to int64: %v", processor, err)
}
// Read package ID
physicalPackageIDFile := filepath.Join(cpuDir, "topology", "physical_package_id")
line, err := ioutil.ReadFile(physicalPackageIDFile)
if err != nil {
return fmt.Errorf("Unable to read physical package ID from file '%s': %v", physicalPackageIDFile, err)
}
physicalPackageID := strings.TrimSpace(string(line))
physicalPackageID_int, err := strconv.ParseInt(physicalPackageID, 10, 64)
if err != nil {
return fmt.Errorf("Unable to convert packageID '%s' to int64: %v", physicalPackageID, err)
}
// Read core ID
coreIDFile := filepath.Join(cpuDir, "topology", "core_id")
line, err = ioutil.ReadFile(coreIDFile)
if err != nil {
return fmt.Errorf("Unable to read core ID from file '%s': %v", coreIDFile, err)
}
coreID := strings.TrimSpace(string(line))
coreID_int, err := strconv.ParseInt(coreID, 10, 64)
if err != nil {
return fmt.Errorf("Unable to convert coreID '%s' to int64: %v", coreID, err)
// Skip hyper threading CPUs
if c.CpuID != c.CoreCPUsList[0] {
continue
}
// Check access to current frequency file
scalingCurFreqFile := filepath.Join(cpuDir, "cpufreq", "scaling_cur_freq")
err = unix.Access(scalingCurFreqFile, unix.R_OK)
scalingCurFreqFile := filepath.Join("/sys/devices/system/cpu", fmt.Sprintf("cpu%d", c.CpuID), "cpufreq/scaling_cur_freq")
err := unix.Access(scalingCurFreqFile, unix.R_OK)
if err != nil {
return fmt.Errorf("Unable to access file '%s': %v", scalingCurFreqFile, err)
return fmt.Errorf("unable to access file '%s': %v", scalingCurFreqFile, err)
}
t := &m.topology[processor_int]
t.processor = processor
t.physicalPackageID = physicalPackageID
t.physicalPackageID_int = physicalPackageID_int
t.coreID = coreID
t.coreID_int = coreID_int
t.scalingCurFreqFile = scalingCurFreqFile
}
// is processor a hyperthread?
coreSeenBefore := make(map[string]bool)
for i := range m.topology {
t := &m.topology[i]
globalID := t.physicalPackageID + ":" + t.coreID
t.isHT = coreSeenBefore[globalID]
coreSeenBefore[globalID] = true
}
// number of non hyper thread cores and packages / sockets
var numNonHT_int int64 = 0
var maxPhysicalPackageID int64 = 0
for i := range m.topology {
t := &m.topology[i]
// Update maxPackageID
if t.physicalPackageID_int > maxPhysicalPackageID {
maxPhysicalPackageID = t.physicalPackageID_int
}
if !t.isHT {
numNonHT_int++
}
}
numPhysicalPackageID_int := maxPhysicalPackageID + 1
numPhysicalPackageID := fmt.Sprint(numPhysicalPackageID_int)
numNonHT := fmt.Sprint(numNonHT_int)
for i := range m.topology {
t := &m.topology[i]
t.numPhysicalPackages = numPhysicalPackageID
t.numPhysicalPackages_int = numPhysicalPackageID_int
t.numNonHT = numNonHT
t.numNonHT_int = numNonHT_int
t.tagSet = map[string]string{
"type": "cpu",
"type-id": t.processor,
"package_id": t.physicalPackageID,
}
m.topology = append(m.topology,
CPUFreqCollectorTopology{
tagSet: map[string]string{
"type": "hwthread",
"type-id": fmt.Sprint(c.CpuID),
"package_id": fmt.Sprint(c.Socket),
},
scalingCurFreqFile: scalingCurFreqFile,
},
)
}
// Initialized
cclog.ComponentDebug(
m.name,
"initialized",
len(m.topology), "non-hyper-threading CPUs")
m.init = true
return nil
}
func (m *CPUFreqCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *CPUFreqCollector) Read(interval time.Duration, output chan lp.CCMessage) {
// Check if already initialized
if !m.init {
return
@@ -181,13 +101,8 @@ func (m *CPUFreqCollector) Read(interval time.Duration, output chan lp.CCMetric)
for i := range m.topology {
t := &m.topology[i]
// skip hyperthreads
if t.isHT {
continue
}
// Read current frequency
line, err := ioutil.ReadFile(t.scalingCurFreqFile)
line, err := os.ReadFile(t.scalingCurFreqFile)
if err != nil {
cclog.ComponentError(
m.name,
@@ -202,7 +117,7 @@ func (m *CPUFreqCollector) Read(interval time.Duration, output chan lp.CCMetric)
continue
}
if y, err := lp.New("cpufreq", t.tagSet, m.meta, map[string]interface{}{"value": cpuFreq}, now); err == nil {
if y, err := lp.NewMessage("cpufreq", t.tagSet, m.meta, map[string]interface{}{"value": cpuFreq}, now); err == nil {
output <- y
}
}

View File

@@ -1,11 +1,13 @@
## `cpufreq_cpuinfo` collector
```json
"cpufreq": {
"exclude_metrics": []
}
```
The `cpufreq` collector reads the clock frequency from `/sys/devices/system/cpu/cpu*/cpufreq` and outputs a handful of **cpu** metrics.
The `cpufreq` collector reads the clock frequency from `/sys/devices/system/cpu/cpu*/cpufreq` and outputs a handful of **hwthread** metrics.
Metrics:
* `cpufreq`
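
For illustration, a minimal standalone sketch of the sysfs read the collector performs per hardware thread; the path layout and the kHz unit of `scaling_cur_freq` follow the kernel cpufreq ABI, while the function name `readCurFreqKHz` is purely illustrative.
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// readCurFreqKHz reads the current frequency of one hardware thread.
// The kernel reports scaling_cur_freq in kHz.
func readCurFreqKHz(hwthread int) (int64, error) {
	path := filepath.Join("/sys/devices/system/cpu",
		fmt.Sprintf("cpu%d", hwthread), "cpufreq", "scaling_cur_freq")
	raw, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseInt(strings.TrimSpace(string(raw)), 10, 64)
}

func main() {
	khz, err := readCurFreqKHz(0)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("cpufreq cpu0: %d kHz\n", khz)
}
```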

View File

@@ -9,8 +9,9 @@ import (
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
sysconf "github.com/tklauser/go-sysconf"
)
const CPUSTATFILE = `/proc/stat`
@@ -21,17 +22,19 @@ type CpustatCollectorConfig struct {
type CpustatCollector struct {
metricCollector
config CpustatCollectorConfig
matches map[string]int
cputags map[string]map[string]string
nodetags map[string]string
num_cpus_metric lp.CCMetric
config CpustatCollectorConfig
lastTimestamp time.Time // Store time stamp of last tick to derive values
matches map[string]int
cputags map[string]map[string]string
nodetags map[string]string
olddata map[string]map[string]int64
}
func (m *CpustatCollector) Init(config json.RawMessage) error {
m.name = "CpustatCollector"
m.setup()
m.meta = map[string]string{"source": m.name, "group": "CPU", "unit": "Percent"}
m.parallel = true
m.meta = map[string]string{"source": m.name, "group": "CPU"}
m.nodetags = map[string]string{"type": "node"}
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
@@ -76,47 +79,73 @@ func (m *CpustatCollector) Init(config json.RawMessage) error {
// Pre-generate tags for all CPUs
num_cpus := 0
m.cputags = make(map[string]map[string]string)
m.olddata = make(map[string]map[string]int64)
scanner := bufio.NewScanner(file)
for scanner.Scan() {
line := scanner.Text()
linefields := strings.Fields(line)
if strings.HasPrefix(linefields[0], "cpu") && strings.Compare(linefields[0], "cpu") != 0 {
if strings.Compare(linefields[0], "cpu") == 0 {
m.olddata["cpu"] = make(map[string]int64)
for k, v := range m.matches {
m.olddata["cpu"][k], _ = strconv.ParseInt(linefields[v], 0, 64)
}
} else if strings.HasPrefix(linefields[0], "cpu") && strings.Compare(linefields[0], "cpu") != 0 {
cpustr := strings.TrimLeft(linefields[0], "cpu")
cpu, _ := strconv.Atoi(cpustr)
m.cputags[linefields[0]] = map[string]string{"type": "cpu", "type-id": fmt.Sprintf("%d", cpu)}
m.cputags[linefields[0]] = map[string]string{"type": "hwthread", "type-id": fmt.Sprintf("%d", cpu)}
m.olddata[linefields[0]] = make(map[string]int64)
for k, v := range m.matches {
m.olddata[linefields[0]][k], _ = strconv.ParseInt(linefields[v], 0, 64)
}
num_cpus++
}
}
m.lastTimestamp = time.Now()
m.init = true
return nil
}
func (m *CpustatCollector) parseStatLine(linefields []string, tags map[string]string, output chan lp.CCMetric) {
func (m *CpustatCollector) parseStatLine(linefields []string, tags map[string]string, output chan lp.CCMessage, now time.Time, tsdelta time.Duration) {
values := make(map[string]float64)
total := 0.0
clktck, _ := sysconf.Sysconf(sysconf.SC_CLK_TCK)
for match, index := range m.matches {
if len(match) > 0 {
x, err := strconv.ParseInt(linefields[index], 0, 64)
if err == nil {
values[match] = float64(x)
total += values[match]
vdiff := x - m.olddata[linefields[0]][match]
m.olddata[linefields[0]][match] = x // Store new value for next run
values[match] = float64(vdiff) / float64(tsdelta.Seconds()) / float64(clktck)
}
}
}
t := time.Now()
sum := float64(0)
for name, value := range values {
y, err := lp.New(name, tags, m.meta, map[string]interface{}{"value": (value * 100.0) / total}, t)
sum += value
y, err := lp.NewMessage(name, tags, m.meta, map[string]interface{}{"value": value * 100}, now)
if err == nil {
y.AddTag("unit", "Percent")
output <- y
}
}
if v, ok := values["cpu_idle"]; ok {
sum -= v
y, err := lp.NewMessage("cpu_used", tags, m.meta, map[string]interface{}{"value": sum * 100}, now)
if err == nil {
y.AddTag("unit", "Percent")
output <- y
}
}
}
func (m *CpustatCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *CpustatCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
num_cpus := 0
now := time.Now()
tsdelta := now.Sub(m.lastTimestamp)
file, err := os.Open(string(CPUSTATFILE))
if err != nil {
cclog.ComponentError(m.name, err.Error())
@@ -128,22 +157,24 @@ func (m *CpustatCollector) Read(interval time.Duration, output chan lp.CCMetric)
line := scanner.Text()
linefields := strings.Fields(line)
if strings.Compare(linefields[0], "cpu") == 0 {
m.parseStatLine(linefields, m.nodetags, output)
m.parseStatLine(linefields, m.nodetags, output, now, tsdelta)
} else if strings.HasPrefix(linefields[0], "cpu") {
m.parseStatLine(linefields, m.cputags[linefields[0]], output)
m.parseStatLine(linefields, m.cputags[linefields[0]], output, now, tsdelta)
num_cpus++
}
}
num_cpus_metric, err := lp.New("num_cpus",
num_cpus_metric, err := lp.NewMessage("num_cpus",
m.nodetags,
m.meta,
map[string]interface{}{"value": int(num_cpus)},
time.Now(),
now,
)
if err == nil {
output <- num_cpus_metric
}
m.lastTimestamp = now
}
func (m *CpustatCollector) Close() {

View File

@@ -1,5 +1,6 @@
## `cpustat` collector
```json
"cpustat": {
"exclude_metrics": [
@@ -8,16 +9,19 @@
}
```
The `cpustat` collector reads data from `/proc/stats` and outputs a handful of **node** and **hwthread** metrics. If a metric is not required, it can be excluded from forwarding to the sink.
The `cpustat` collector reads data from `/proc/stat` and outputs a handful of **node** and **hwthread** metrics. If a metric is not required, it can be excluded from forwarding to the sink.
Metrics:
* `cpu_user`
* `cpu_nice`
* `cpu_system`
* `cpu_idle`
* `cpu_iowait`
* `cpu_irq`
* `cpu_softirq`
* `cpu_steal`
* `cpu_guest`
* `cpu_guest_nice`
* `cpu_user` with `unit=Percent`
* `cpu_nice` with `unit=Percent`
* `cpu_system` with `unit=Percent`
* `cpu_idle` with `unit=Percent`
* `cpu_iowait` with `unit=Percent`
* `cpu_irq` with `unit=Percent`
* `cpu_softirq` with `unit=Percent`
* `cpu_steal` with `unit=Percent`
* `cpu_guest` with `unit=Percent`
* `cpu_guest_nice` with `unit=Percent`
* `cpu_used` = `cpu_* - cpu_idle` with `unit=Percent`
* `num_cpus`
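
The percentages are derived from the raw `/proc/stat` tick counters: the collector keeps the counters of the previous read and converts the delta into a rate using the elapsed wall-clock time and the clock-tick rate (`SC_CLK_TCK`). A minimal sketch of that arithmetic, with illustrative names and example numbers:
```go
package main

import "fmt"

// percent converts a /proc/stat counter delta into a utilization percentage.
// newTicks and oldTicks are the raw jiffy counters from two consecutive reads,
// elapsedSec is the wall-clock time between them and clkTck is the kernel's
// ticks-per-second value (sysconf(SC_CLK_TCK), typically 100).
func percent(newTicks, oldTicks int64, elapsedSec float64, clkTck int64) float64 {
	return float64(newTicks-oldTicks) / elapsedSec / float64(clkTck) * 100.0
}

func main() {
	// Example: the user field advanced by 250 jiffies in 10 s on a system
	// with 100 ticks/s -> 25 % of one hwthread was spent in user mode.
	fmt.Printf("cpu_user = %.1f %%\n", percent(1250, 1000, 10.0, 100))
	// cpu_used sums all non-idle fields computed the same way.
}
```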

View File

@@ -3,13 +3,13 @@ package collectors
import (
"encoding/json"
"errors"
"io/ioutil"
"log"
"os"
"os/exec"
"strings"
"time"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
influx "github.com/influxdata/line-protocol"
)
@@ -33,6 +33,7 @@ type CustomCmdCollector struct {
func (m *CustomCmdCollector) Init(config json.RawMessage) error {
var err error
m.name = "CustomCmdCollector"
m.parallel = true
m.meta = map[string]string{"source": m.name, "group": "Custom"}
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
@@ -47,12 +48,12 @@ func (m *CustomCmdCollector) Init(config json.RawMessage) error {
command := exec.Command(cmdfields[0], strings.Join(cmdfields[1:], " "))
command.Wait()
_, err = command.Output()
if err != nil {
if err == nil {
m.commands = append(m.commands, c)
}
}
for _, f := range m.config.Files {
_, err = ioutil.ReadFile(f)
_, err = os.ReadFile(f)
if err == nil {
m.files = append(m.files, f)
} else {
@@ -61,7 +62,7 @@ func (m *CustomCmdCollector) Init(config json.RawMessage) error {
}
}
if len(m.files) == 0 && len(m.commands) == 0 {
return errors.New("No metrics to collect")
return errors.New("no metrics to collect")
}
m.handler = influx.NewMetricHandler()
m.parser = influx.NewParser(m.handler)
@@ -74,7 +75,7 @@ var DefaultTime = func() time.Time {
return time.Unix(42, 0)
}
func (m *CustomCmdCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *CustomCmdCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
@@ -98,14 +99,11 @@ func (m *CustomCmdCollector) Read(interval time.Duration, output chan lp.CCMetri
continue
}
y := lp.FromInfluxMetric(c)
if err == nil {
output <- y
}
output <- lp.FromInfluxMetric(c)
}
}
for _, file := range m.files {
buffer, err := ioutil.ReadFile(file)
buffer, err := os.ReadFile(file)
if err != nil {
log.Print(err)
return
@@ -120,10 +118,7 @@ func (m *CustomCmdCollector) Read(interval time.Duration, output chan lp.CCMetri
if skip {
continue
}
y := lp.FromInfluxMetric(f)
if err == nil {
output <- y
}
output <- lp.FromInfluxMetric(f)
}
}
}

View File

@@ -3,42 +3,49 @@ package collectors
import (
"bufio"
"encoding/json"
"fmt"
"os"
"strings"
"syscall"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
// "log"
const MOUNTFILE = `/proc/self/mounts`
type DiskstatCollectorConfig struct {
ExcludeMetrics []string `json:"exclude_metrics,omitempty"`
ExcludeMounts []string `json:"exclude_mounts,omitempty"`
}
type DiskstatCollector struct {
metricCollector
//matches map[string]int
config IOstatCollectorConfig
//devices map[string]IOstatCollectorEntry
config DiskstatCollectorConfig
allowedMetrics map[string]bool
}
func (m *DiskstatCollector) Init(config json.RawMessage) error {
m.name = "DiskstatCollector"
m.parallel = true
m.meta = map[string]string{"source": m.name, "group": "Disk"}
m.setup()
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
if err != nil {
if err := json.Unmarshal(config, &m.config); err != nil {
return err
}
}
file, err := os.Open(string(MOUNTFILE))
m.allowedMetrics = map[string]bool{
"disk_total": true,
"disk_free": true,
"part_max_used": true,
}
for _, excl := range m.config.ExcludeMetrics {
if _, ok := m.allowedMetrics[excl]; ok {
m.allowedMetrics[excl] = false
}
}
file, err := os.Open(MOUNTFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
return err
@@ -48,12 +55,12 @@ func (m *DiskstatCollector) Init(config json.RawMessage) error {
return nil
}
func (m *DiskstatCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *DiskstatCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
file, err := os.Open(string(MOUNTFILE))
file, err := os.Open(MOUNTFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
return
@@ -62,6 +69,7 @@ func (m *DiskstatCollector) Read(interval time.Duration, output chan lp.CCMetric
part_max_used := uint64(0)
scanner := bufio.NewScanner(file)
mountLoop:
for scanner.Scan() {
line := scanner.Text()
if len(line) == 0 {
@@ -77,35 +85,53 @@ func (m *DiskstatCollector) Read(interval time.Duration, output chan lp.CCMetric
if strings.Contains(linefields[1], "boot") {
continue
}
path := strings.Replace(linefields[1], `\040`, " ", -1)
mountPath := strings.Replace(linefields[1], `\040`, " ", -1)
for _, excl := range m.config.ExcludeMounts {
if strings.Contains(mountPath, excl) {
continue mountLoop
}
}
stat := syscall.Statfs_t{}
err := syscall.Statfs(path, &stat)
err := syscall.Statfs(mountPath, &stat)
if err != nil {
fmt.Println(err.Error())
return
continue
}
if stat.Blocks == 0 || stat.Bsize == 0 {
continue
}
tags := map[string]string{"type": "node", "device": linefields[0]}
total := (stat.Blocks * uint64(stat.Bsize)) / uint64(1000000000)
y, err := lp.New("disk_total", tags, m.meta, map[string]interface{}{"value": total}, time.Now())
if err == nil {
y.AddMeta("unit", "GBytes")
output <- y
if m.allowedMetrics["disk_total"] {
y, err := lp.NewMessage("disk_total", tags, m.meta, map[string]interface{}{"value": total}, time.Now())
if err == nil {
y.AddMeta("unit", "GBytes")
output <- y
}
}
free := (stat.Bfree * uint64(stat.Bsize)) / uint64(1000000000)
y, err = lp.New("disk_free", tags, m.meta, map[string]interface{}{"value": free}, time.Now())
if err == nil {
y.AddMeta("unit", "GBytes")
output <- y
if m.allowedMetrics["disk_free"] {
y, err := lp.NewMessage("disk_free", tags, m.meta, map[string]interface{}{"value": free}, time.Now())
if err == nil {
y.AddMeta("unit", "GBytes")
output <- y
}
}
perc := (100 * (total - free)) / total
if perc > part_max_used {
part_max_used = perc
if total > 0 {
perc := (100 * (total - free)) / total
if perc > part_max_used {
part_max_used = perc
}
}
}
y, err := lp.New("part_max_used", map[string]string{"type": "node"}, m.meta, map[string]interface{}{"value": part_max_used}, time.Now())
if err == nil {
y.AddMeta("unit", "percent")
output <- y
if m.allowedMetrics["part_max_used"] {
y, err := lp.NewMessage("part_max_used", map[string]string{"type": "node"}, m.meta, map[string]interface{}{"value": int(part_max_used)}, time.Now())
if err == nil {
y.AddMeta("unit", "percent")
output <- y
}
}
}

View File

@@ -6,10 +6,13 @@
"exclude_metrics": [
"disk_total"
],
"exclude_mounts": [
"slurm-tmpfs"
]
}
```
The `diskstat` collector reads data from `/proc/self/mounts` and outputs a handful of **node** metrics. If a metric is not required, it can be excluded from forwarding to the sink.
The `diskstat` collector reads data from `/proc/self/mounts` and outputs a handful of **node** metrics. If a metric is not required, it can be excluded from forwarding to the sink. Additionally, any mount point containing one of the strings specified in `exclude_mounts` will be skipped during metric collection.
Metrics per device (with `device` tag):
* `disk_total` (unit `GBytes`)
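
Under the hood the values come from `syscall.Statfs` on each mount point. A minimal sketch of that computation, assuming a hard-coded mount path instead of the collector's `/proc/self/mounts` walk and `exclude_mounts` filtering:
```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	mountPath := "/" // illustrative; the collector takes this from /proc/self/mounts
	var stat syscall.Statfs_t
	if err := syscall.Statfs(mountPath, &stat); err != nil {
		fmt.Println(err)
		return
	}
	// Block counts times block size, scaled to GBytes (decimal, as in the collector)
	total := stat.Blocks * uint64(stat.Bsize) / 1000000000
	free := stat.Bfree * uint64(stat.Bsize) / 1000000000
	fmt.Printf("disk_total=%d GBytes disk_free=%d GBytes\n", total, free)
	if total > 0 {
		// part_max_used is the maximum used percentage over all mount points
		fmt.Printf("part_max_used=%d percent\n", 100*(total-free)/total)
	}
}
```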

View File

@@ -5,7 +5,7 @@ import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"io"
"log"
"os/exec"
"os/user"
@@ -13,18 +13,29 @@ import (
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
const DEFAULT_GPFS_CMD = "mmpmon"
type GpfsCollectorLastState struct {
bytesRead int64
bytesWritten int64
}
type GpfsCollector struct {
metricCollector
tags map[string]string
config struct {
Mmpmon string `json:"mmpmon_path,omitempty"`
ExcludeFilesystem []string `json:"exclude_filesystem,omitempty"`
SendBandwidths bool `json:"send_bandwidths"`
SendTotalValues bool `json:"send_total_values"`
}
skipFS map[string]struct{}
skipFS map[string]struct{}
lastTimestamp time.Time // Store time stamp of last tick to derive bandwidths
lastState map[string]GpfsCollectorLastState
}
func (m *GpfsCollector) Init(config json.RawMessage) error {
@@ -36,9 +47,10 @@ func (m *GpfsCollector) Init(config json.RawMessage) error {
var err error
m.name = "GpfsCollector"
m.setup()
m.parallel = true
// Set default mmpmon binary
m.config.Mmpmon = "/usr/lpp/mmfs/bin/mmpmon"
m.config.Mmpmon = DEFAULT_GPFS_CMD
// Read JSON configuration
if len(config) > 0 {
@@ -60,32 +72,41 @@ func (m *GpfsCollector) Init(config json.RawMessage) error {
for _, fs := range m.config.ExcludeFilesystem {
m.skipFS[fs] = struct{}{}
}
m.lastState = make(map[string]GpfsCollectorLastState)
// GPFS / IBM Spectrum Scale file system statistics can only be queried by user root
user, err := user.Current()
if err != nil {
return fmt.Errorf("Failed to get current user: %v", err)
return fmt.Errorf("failed to get current user: %v", err)
}
if user.Uid != "0" {
return fmt.Errorf("GPFS file system statistics can only be queried by user root")
}
// Check if mmpmon is in executable search path
_, err = exec.LookPath(m.config.Mmpmon)
p, err := exec.LookPath(m.config.Mmpmon)
if err != nil {
return fmt.Errorf("Failed to find mmpmon binary '%s': %v", m.config.Mmpmon, err)
return fmt.Errorf("failed to find mmpmon binary '%s': %v", m.config.Mmpmon, err)
}
m.config.Mmpmon = p
m.init = true
return nil
}
func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMessage) {
// Check if already initialized
if !m.init {
return
}
// Current time stamp
now := time.Now()
// time difference to last time stamp
timeDiff := now.Sub(m.lastTimestamp).Seconds()
// Save current timestamp
m.lastTimestamp = now
// mmpmon:
// -p: generate output that can be parsed
// -s: suppress the prompt on input
@@ -98,8 +119,8 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
cmd.Stderr = cmdStderr
err := cmd.Run()
if err != nil {
dataStdErr, _ := ioutil.ReadAll(cmdStderr)
dataStdOut, _ := ioutil.ReadAll(cmdStdout)
dataStdErr, _ := io.ReadAll(cmdStderr)
dataStdOut, _ := io.ReadAll(cmdStdout)
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to execute command \"%s\": %v\n", cmd.String(), err),
@@ -144,8 +165,19 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
continue
}
// Add filesystem tag
m.tags["filesystem"] = filesystem
// Create initial last state
if m.config.SendBandwidths {
if _, ok := m.lastState[filesystem]; !ok {
m.lastState[filesystem] = GpfsCollectorLastState{
bytesRead: -1,
bytesWritten: -1,
}
}
}
// return code
rc, err := strconv.Atoi(key_value["_rc_"])
if err != nil {
@@ -185,9 +217,37 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
fmt.Sprintf("Read(): Failed to convert bytes read '%s' to int64: %v", key_value["_br_"], err))
continue
}
if y, err := lp.New("gpfs_bytes_read", m.tags, m.meta, map[string]interface{}{"value": bytesRead}, timestamp); err == nil {
if y, err :=
lp.NewMessage(
"gpfs_bytes_read",
m.tags,
m.meta,
map[string]interface{}{
"value": bytesRead,
},
timestamp,
); err == nil {
y.AddMeta("unit", "bytes")
output <- y
}
if m.config.SendBandwidths {
if lastBytesRead := m.lastState[filesystem].bytesRead; lastBytesRead >= 0 {
bwRead := float64(bytesRead-lastBytesRead) / timeDiff
if y, err :=
lp.NewMessage(
"gpfs_bw_read",
m.tags,
m.meta,
map[string]interface{}{
"value": bwRead,
},
timestamp,
); err == nil {
y.AddMeta("unit", "bytes/sec")
output <- y
}
}
}
// bytes written
bytesWritten, err := strconv.ParseInt(key_value["_bw_"], 10, 64)
@@ -197,9 +257,44 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
fmt.Sprintf("Read(): Failed to convert bytes written '%s' to int64: %v", key_value["_bw_"], err))
continue
}
if y, err := lp.New("gpfs_bytes_written", m.tags, m.meta, map[string]interface{}{"value": bytesWritten}, timestamp); err == nil {
if y, err :=
lp.NewMessage(
"gpfs_bytes_written",
m.tags,
m.meta,
map[string]interface{}{
"value": bytesWritten,
},
timestamp,
); err == nil {
y.AddMeta("unit", "bytes")
output <- y
}
if m.config.SendBandwidths {
if lastBytesWritten := m.lastState[filesystem].bytesRead; lastBytesWritten >= 0 {
bwWrite := float64(bytesWritten-lastBytesWritten) / timeDiff
if y, err :=
lp.NewMessage(
"gpfs_bw_write",
m.tags,
m.meta,
map[string]interface{}{
"value": bwWrite,
},
timestamp,
); err == nil {
y.AddMeta("unit", "bytes/sec")
output <- y
}
}
}
if m.config.SendBandwidths {
m.lastState[filesystem] = GpfsCollectorLastState{
bytesRead: bytesRead,
bytesWritten: bytesWritten,
}
}
// number of opens
numOpens, err := strconv.ParseInt(key_value["_oc_"], 10, 64)
@@ -209,7 +304,7 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
fmt.Sprintf("Read(): Failed to convert number of opens '%s' to int64: %v", key_value["_oc_"], err))
continue
}
if y, err := lp.New("gpfs_num_opens", m.tags, m.meta, map[string]interface{}{"value": numOpens}, timestamp); err == nil {
if y, err := lp.NewMessage("gpfs_num_opens", m.tags, m.meta, map[string]interface{}{"value": numOpens}, timestamp); err == nil {
output <- y
}
@@ -221,7 +316,7 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
fmt.Sprintf("Read(): Failed to convert number of closes: '%s' to int64: %v", key_value["_cc_"], err))
continue
}
if y, err := lp.New("gpfs_num_closes", m.tags, m.meta, map[string]interface{}{"value": numCloses}, timestamp); err == nil {
if y, err := lp.NewMessage("gpfs_num_closes", m.tags, m.meta, map[string]interface{}{"value": numCloses}, timestamp); err == nil {
output <- y
}
@@ -233,7 +328,7 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
fmt.Sprintf("Read(): Failed to convert number of reads: '%s' to int64: %v", key_value["_rdc_"], err))
continue
}
if y, err := lp.New("gpfs_num_reads", m.tags, m.meta, map[string]interface{}{"value": numReads}, timestamp); err == nil {
if y, err := lp.NewMessage("gpfs_num_reads", m.tags, m.meta, map[string]interface{}{"value": numReads}, timestamp); err == nil {
output <- y
}
@@ -245,7 +340,7 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
fmt.Sprintf("Read(): Failed to convert number of writes: '%s' to int64: %v", key_value["_wc_"], err))
continue
}
if y, err := lp.New("gpfs_num_writes", m.tags, m.meta, map[string]interface{}{"value": numWrites}, timestamp); err == nil {
if y, err := lp.NewMessage("gpfs_num_writes", m.tags, m.meta, map[string]interface{}{"value": numWrites}, timestamp); err == nil {
output <- y
}
@@ -257,7 +352,7 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
fmt.Sprintf("Read(): Failed to convert number of read directories: '%s' to int64: %v", key_value["_dir_"], err))
continue
}
if y, err := lp.New("gpfs_num_readdirs", m.tags, m.meta, map[string]interface{}{"value": numReaddirs}, timestamp); err == nil {
if y, err := lp.NewMessage("gpfs_num_readdirs", m.tags, m.meta, map[string]interface{}{"value": numReaddirs}, timestamp); err == nil {
output <- y
}
@@ -269,9 +364,50 @@ func (m *GpfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
fmt.Sprintf("Read(): Failed to convert number of inode updates: '%s' to int: %v", key_value["_iu_"], err))
continue
}
if y, err := lp.New("gpfs_num_inode_updates", m.tags, m.meta, map[string]interface{}{"value": numInodeUpdates}, timestamp); err == nil {
if y, err := lp.NewMessage("gpfs_num_inode_updates", m.tags, m.meta, map[string]interface{}{"value": numInodeUpdates}, timestamp); err == nil {
output <- y
}
// Total values
if m.config.SendTotalValues {
bytesTotal := bytesRead + bytesWritten
if y, err :=
lp.NewMessage("gpfs_bytes_total",
m.tags,
m.meta,
map[string]interface{}{
"value": bytesTotal,
},
timestamp,
); err == nil {
y.AddMeta("unit", "bytes")
output <- y
}
iops := numReads + numWrites
if y, err :=
lp.NewMessage("gpfs_iops",
m.tags,
m.meta,
map[string]interface{}{
"value": iops,
},
timestamp,
); err == nil {
output <- y
}
metaops := numInodeUpdates + numCloses + numOpens + numReaddirs
if y, err :=
lp.NewMessage("gpfs_metaops",
m.tags,
m.meta,
map[string]interface{}{
"value": metaops,
},
timestamp,
); err == nil {
output <- y
}
}
}
}

View File

@@ -5,7 +5,9 @@
"mmpmon_path": "/path/to/mmpmon",
"exclude_filesystem": [
"fs1"
]
],
"send_bandwidths": true,
"send_total_values": true
}
```
@@ -16,15 +18,22 @@ The reported filesystems can be filtered with the `exclude_filesystem` option
in the configuration.
The path to the `mmpmon` command can be configured with the `mmpmon_path` option
in the configuration.
in the configuration. If nothing is set, the collector searches in `$PATH` for `mmpmon`.
Metrics:
* `bytes_read`
* `gpfs_bytes_read`
* `gpfs_bytes_written`
* `gpfs_num_opens`
* `gpfs_num_closes`
* `gpfs_num_reads`
* `gpfs_num_writes`
* `gpfs_num_readdirs`
* `gpfs_num_inode_updates`
* `gpfs_bytes_total = gpfs_bytes_read + gpfs_bytes_written` (if `send_total_values == true`)
* `gpfs_iops = gpfs_num_reads + gpfs_num_writes` (if `send_total_values == true`)
* `gpfs_metaops = gpfs_num_inode_updates + gpfs_num_closes + gpfs_num_opens + gpfs_num_readdirs` (if `send_total_values == true`)
* `gpfs_bw_read` (if `send_bandwidths == true`)
* `gpfs_bw_write` (if `send_bandwidths == true`)
The collector adds a `filesystem` tag to all metrics
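
The bandwidth metrics are first derivatives of the byte counters between two reads; the collector keeps one last state per filesystem and skips the first interval. A minimal sketch of that bookkeeping with illustrative names and numbers:
```go
package main

import (
	"fmt"
	"time"
)

// lastState mirrors the per-filesystem bookkeeping: -1 marks "no previous sample yet".
type lastState struct {
	bytesRead    int64
	bytesWritten int64
}

// bandwidth returns bytes/sec for one counter, or false for the first sample.
func bandwidth(current, last int64, elapsed time.Duration) (float64, bool) {
	if last < 0 {
		return 0, false
	}
	return float64(current-last) / elapsed.Seconds(), true
}

func main() {
	st := lastState{bytesRead: -1, bytesWritten: -1}

	// First sample: no bandwidth can be derived yet.
	if _, ok := bandwidth(1_000_000, st.bytesRead, 10*time.Second); !ok {
		fmt.Println("gpfs_bw_read: skipped (first sample)")
	}
	st.bytesRead = 1_000_000

	// Second sample, 10 s later: 4 MB more were read -> 400 kB/s.
	if bw, ok := bandwidth(5_000_000, st.bytesRead, 10*time.Second); ok {
		fmt.Printf("gpfs_bw_read: %.0f bytes/sec\n", bw)
	}
}
```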

View File

@@ -2,11 +2,10 @@ package collectors
import (
"fmt"
"io/ioutil"
"os"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
"golang.org/x/sys/unix"
"encoding/json"
@@ -16,22 +15,37 @@ import (
"time"
)
const IB_BASEPATH = `/sys/class/infiniband/`
const IB_BASEPATH = "/sys/class/infiniband/"
type InfinibandCollectorMetric struct {
name string
path string
unit string
scale int64
addToIBTotal bool
addToIBTotalPkgs bool
currentState int64
lastState int64
}
type InfinibandCollectorInfo struct {
LID string // IB local Identifier (LID)
device string // IB device
port string // IB device port
portCounterFiles map[string]string // mapping counter name -> sysfs file
tagSet map[string]string // corresponding tag list
LID string // IB local Identifier (LID)
device string // IB device
port string // IB device port
portCounterFiles []InfinibandCollectorMetric // mapping counter name -> InfinibandCollectorMetric
tagSet map[string]string // corresponding tag list
}
type InfinibandCollector struct {
metricCollector
config struct {
ExcludeDevices []string `json:"exclude_devices,omitempty"` // IB device to exclude e.g. mlx5_0
ExcludeDevices []string `json:"exclude_devices,omitempty"` // IB device to exclude e.g. mlx5_0
SendAbsoluteValues bool `json:"send_abs_values"` // Send absolut values as read from sys filesystem
SendTotalValues bool `json:"send_total_values"` // Send computed total values
SendDerivedValues bool `json:"send_derived_values"` // Send derived values e.g. rates
}
info []*InfinibandCollectorInfo
info []InfinibandCollectorInfo
lastTimestamp time.Time // Store time stamp of last tick to derive bandwidths
}
// Init initializes the Infiniband collector by walking through files below IB_BASEPATH
@@ -45,10 +59,16 @@ func (m *InfinibandCollector) Init(config json.RawMessage) error {
var err error
m.name = "InfinibandCollector"
m.setup()
m.parallel = true
m.meta = map[string]string{
"source": m.name,
"group": "Network",
}
// Set default configuration,
m.config.SendAbsoluteValues = true
m.config.SendDerivedValues = false
// Read configuration file, allow overwriting default config
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
@@ -60,16 +80,16 @@ func (m *InfinibandCollector) Init(config json.RawMessage) error {
globPattern := filepath.Join(IB_BASEPATH, "*", "ports", "*")
ibDirs, err := filepath.Glob(globPattern)
if err != nil {
return fmt.Errorf("Unable to glob files with pattern %s: %v", globPattern, err)
return fmt.Errorf("unable to glob files with pattern %s: %v", globPattern, err)
}
if ibDirs == nil {
return fmt.Errorf("Unable to find any directories with pattern %s", globPattern)
return fmt.Errorf("unable to find any directories with pattern %s", globPattern)
}
for _, path := range ibDirs {
// Skip, when no LID is assigned
line, err := ioutil.ReadFile(filepath.Join(path, "lid"))
line, err := os.ReadFile(filepath.Join(path, "lid"))
if err != nil {
continue
}
@@ -97,21 +117,49 @@ func (m *InfinibandCollector) Init(config json.RawMessage) error {
// Check access to counter files
countersDir := filepath.Join(path, "counters")
portCounterFiles := map[string]string{
"ib_recv": filepath.Join(countersDir, "port_rcv_data"),
"ib_xmit": filepath.Join(countersDir, "port_xmit_data"),
"ib_recv_pkts": filepath.Join(countersDir, "port_rcv_packets"),
"ib_xmit_pkts": filepath.Join(countersDir, "port_xmit_packets"),
portCounterFiles := []InfinibandCollectorMetric{
{
name: "ib_recv",
path: filepath.Join(countersDir, "port_rcv_data"),
unit: "bytes",
scale: 4,
addToIBTotal: true,
lastState: -1,
},
{
name: "ib_xmit",
path: filepath.Join(countersDir, "port_xmit_data"),
unit: "bytes",
scale: 4,
addToIBTotal: true,
lastState: -1,
},
{
name: "ib_recv_pkts",
path: filepath.Join(countersDir, "port_rcv_packets"),
unit: "packets",
scale: 1,
addToIBTotalPkgs: true,
lastState: -1,
},
{
name: "ib_xmit_pkts",
path: filepath.Join(countersDir, "port_xmit_packets"),
unit: "packets",
scale: 1,
addToIBTotalPkgs: true,
lastState: -1,
},
}
for _, counterFile := range portCounterFiles {
err := unix.Access(counterFile, unix.R_OK)
for _, counter := range portCounterFiles {
err := unix.Access(counter.path, unix.R_OK)
if err != nil {
return fmt.Errorf("Unable to access %s: %v", counterFile, err)
return fmt.Errorf("unable to access %s: %v", counter.path, err)
}
}
m.info = append(m.info,
&InfinibandCollectorInfo{
InfinibandCollectorInfo{
LID: LID,
device: device,
port: port,
@@ -126,7 +174,7 @@ func (m *InfinibandCollector) Init(config json.RawMessage) error {
}
if len(m.info) == 0 {
return fmt.Errorf("Found no IB devices")
return fmt.Errorf("found no IB devices")
}
m.init = true
@@ -134,36 +182,127 @@ func (m *InfinibandCollector) Init(config json.RawMessage) error {
}
// Read reads Infiniband counter files below IB_BASEPATH
func (m *InfinibandCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *InfinibandCollector) Read(interval time.Duration, output chan lp.CCMessage) {
// Check if already initialized
if !m.init {
return
}
// Current time stamp
now := time.Now()
for _, info := range m.info {
for counterName, counterFile := range info.portCounterFiles {
line, err := ioutil.ReadFile(counterFile)
// time difference to last time stamp
timeDiff := now.Sub(m.lastTimestamp).Seconds()
// Save current timestamp
m.lastTimestamp = now
for i := range m.info {
info := &m.info[i]
var ib_total, ib_total_pkts int64
for i := range info.portCounterFiles {
counterDef := &info.portCounterFiles[i]
// Read counter file
line, err := os.ReadFile(counterDef.path)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to read from file '%s': %v", counterFile, err))
fmt.Sprintf("Read(): Failed to read from file '%s': %v", counterDef.path, err))
continue
}
data := strings.TrimSpace(string(line))
// convert counter to int64
v, err := strconv.ParseInt(data, 10, 64)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to convert Infininiband metrice %s='%s' to int64: %v", counterName, data, err))
fmt.Sprintf("Read(): Failed to convert Infininiband metrice %s='%s' to int64: %v", counterDef.name, data, err))
continue
}
if y, err := lp.New(counterName, info.tagSet, m.meta, map[string]interface{}{"value": v}, now); err == nil {
output <- y
// Scale raw value
v *= counterDef.scale
// Save current state
counterDef.currentState = v
// Send absolut values
if m.config.SendAbsoluteValues {
if y, err :=
lp.NewMessage(
counterDef.name,
info.tagSet,
m.meta,
map[string]interface{}{
"value": counterDef.currentState,
},
now); err == nil {
y.AddMeta("unit", counterDef.unit)
output <- y
}
}
// Send derived values
if m.config.SendDerivedValues {
if counterDef.lastState >= 0 {
rate := float64((counterDef.currentState - counterDef.lastState)) / timeDiff
if y, err :=
lp.NewMessage(
counterDef.name+"_bw",
info.tagSet,
m.meta,
map[string]interface{}{
"value": rate,
},
now); err == nil {
y.AddMeta("unit", counterDef.unit+"/sec")
output <- y
}
}
counterDef.lastState = counterDef.currentState
}
// Sum up total values
if m.config.SendTotalValues {
switch {
case counterDef.addToIBTotal:
ib_total += counterDef.currentState
case counterDef.addToIBTotalPkgs:
ib_total_pkts += counterDef.currentState
}
}
}
// Send total values
if m.config.SendTotalValues {
if y, err :=
lp.NewMessage(
"ib_total",
info.tagSet,
m.meta,
map[string]interface{}{
"value": ib_total,
},
now); err == nil {
y.AddMeta("unit", "bytes")
output <- y
}
if y, err :=
lp.NewMessage(
"ib_total_pkts",
info.tagSet,
m.meta,
map[string]interface{}{
"value": ib_total_pkts,
},
now); err == nil {
y.AddMeta("unit", "packets")
output <- y
}
}
}
}

View File

@@ -5,7 +5,9 @@
"ibstat": {
"exclude_devices": [
"mlx4"
]
],
"send_abs_values": true,
"send_derived_values": true
}
```
@@ -15,12 +17,19 @@ LID file (`/sys/class/infiniband/<dev>/ports/<port>/lid`)
The devices can be filtered with the `exclude_devices` option in the configuration.
For each found LID the collector reads data through the sysfs files below `/sys/class/infiniband/<device>`.
For each found LID the collector reads data through the sysfs files below `/sys/class/infiniband/<device>`. (See: <https://www.kernel.org/doc/Documentation/ABI/stable/sysfs-class-infiniband>)
Metrics:
* `ib_recv`
* `ib_xmit`
* `ib_recv_pkts`
* `ib_xmit_pkts`
* `ib_total = ib_recv + ib_xmit` (if `send_total_values == true`)
* `ib_total_pkts = ib_recv_pkts + ib_xmit_pkts` (if `send_total_values == true`)
* `ib_recv_bw` (if `send_derived_values == true`)
* `ib_xmit_bw` (if `send_derived_values == true`)
* `ib_recv_pkts_bw` (if `send_derived_values == true`)
* `ib_xmit_pkts_bw` (if `send_derived_values == true`)
The collector adds a `device` tag to all metrics
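
The derived `*_bw` metrics are rates computed from the scaled sysfs counters between two reads (the data counters are scaled by 4 because sysfs reports them in 4-byte units, matching the `scale: 4` entries above). A minimal standalone sketch, assuming an illustrative device `mlx5_0`, port `1` and a fixed 10 s interval:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"time"
)

// readCounter reads one InfiniBand port counter from sysfs and applies the
// scale factor (port_rcv_data/port_xmit_data count 4-byte words, hence 4).
func readCounter(device, port, counter string, scale int64) (int64, error) {
	path := filepath.Join("/sys/class/infiniband", device, "ports", port, "counters", counter)
	raw, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	v, err := strconv.ParseInt(strings.TrimSpace(string(raw)), 10, 64)
	if err != nil {
		return 0, err
	}
	return v * scale, nil
}

func main() {
	dev, port := "mlx5_0", "1" // illustrative device and port
	interval := 10 * time.Second

	v0, err := readCounter(dev, port, "port_rcv_data", 4)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	time.Sleep(interval)
	v1, err := readCounter(dev, port, "port_rcv_data", 4)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// ib_recv_bw: first derivative of the scaled byte counter
	fmt.Printf("ib_recv: %d bytes, ib_recv_bw: %.0f bytes/sec\n",
		v1, float64(v1-v0)/interval.Seconds())
}
```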

View File

@@ -1,232 +0,0 @@
package collectors
import (
"fmt"
"io/ioutil"
"log"
"os/exec"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
// "os"
"encoding/json"
"errors"
"path/filepath"
"strconv"
"strings"
"time"
)
const PERFQUERY = `/usr/sbin/perfquery`
type InfinibandPerfQueryCollector struct {
metricCollector
tags map[string]string
lids map[string]map[string]string
config struct {
ExcludeDevices []string `json:"exclude_devices,omitempty"`
PerfQueryPath string `json:"perfquery_path"`
}
}
func (m *InfinibandPerfQueryCollector) Init(config json.RawMessage) error {
var err error
m.name = "InfinibandCollectorPerfQuery"
m.setup()
m.meta = map[string]string{"source": m.name, "group": "Network"}
m.tags = map[string]string{"type": "node"}
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
return err
}
}
if len(m.config.PerfQueryPath) == 0 {
path, err := exec.LookPath("perfquery")
if err == nil {
m.config.PerfQueryPath = path
}
}
m.lids = make(map[string]map[string]string)
p := fmt.Sprintf("%s/*/ports/*/lid", string(IB_BASEPATH))
files, err := filepath.Glob(p)
if err != nil {
return err
}
for _, f := range files {
lid, err := ioutil.ReadFile(f)
if err == nil {
plist := strings.Split(strings.Replace(f, string(IB_BASEPATH), "", -1), "/")
skip := false
for _, d := range m.config.ExcludeDevices {
if d == plist[0] {
skip = true
}
}
if !skip {
m.lids[plist[0]] = make(map[string]string)
m.lids[plist[0]][plist[2]] = string(lid)
}
}
}
for _, ports := range m.lids {
for port, lid := range ports {
args := fmt.Sprintf("-r %s %s 0xf000", lid, port)
command := exec.Command(m.config.PerfQueryPath, args)
command.Wait()
_, err := command.Output()
if err != nil {
return fmt.Errorf("Failed to execute %s: %v", m.config.PerfQueryPath, err)
}
}
}
if len(m.lids) == 0 {
return errors.New("No usable IB devices")
}
m.init = true
return nil
}
func (m *InfinibandPerfQueryCollector) doPerfQuery(cmd string, dev string, lid string, port string, tags map[string]string, output chan lp.CCMetric) error {
args := fmt.Sprintf("-r %s %s 0xf000", lid, port)
command := exec.Command(cmd, args)
command.Wait()
stdout, err := command.Output()
if err != nil {
log.Print(err)
return err
}
ll := strings.Split(string(stdout), "\n")
for _, line := range ll {
if strings.HasPrefix(line, "PortRcvData") || strings.HasPrefix(line, "RcvData") {
lv := strings.Fields(line)
v, err := strconv.ParseFloat(lv[1], 64)
if err == nil {
y, err := lp.New("ib_recv", tags, m.meta, map[string]interface{}{"value": float64(v)}, time.Now())
if err == nil {
output <- y
}
}
}
if strings.HasPrefix(line, "PortXmitData") || strings.HasPrefix(line, "XmtData") {
lv := strings.Fields(line)
v, err := strconv.ParseFloat(lv[1], 64)
if err == nil {
y, err := lp.New("ib_xmit", tags, m.meta, map[string]interface{}{"value": float64(v)}, time.Now())
if err == nil {
output <- y
}
}
}
if strings.HasPrefix(line, "PortRcvPkts") || strings.HasPrefix(line, "RcvPkts") {
lv := strings.Fields(line)
v, err := strconv.ParseFloat(lv[1], 64)
if err == nil {
y, err := lp.New("ib_recv_pkts", tags, m.meta, map[string]interface{}{"value": float64(v)}, time.Now())
if err == nil {
output <- y
}
}
}
if strings.HasPrefix(line, "PortXmitPkts") || strings.HasPrefix(line, "XmtPkts") {
lv := strings.Fields(line)
v, err := strconv.ParseFloat(lv[1], 64)
if err == nil {
y, err := lp.New("ib_xmit_pkts", tags, m.meta, map[string]interface{}{"value": float64(v)}, time.Now())
if err == nil {
output <- y
}
}
}
if strings.HasPrefix(line, "PortRcvPkts") || strings.HasPrefix(line, "RcvPkts") {
lv := strings.Fields(line)
v, err := strconv.ParseFloat(lv[1], 64)
if err == nil {
y, err := lp.New("ib_recv_pkts", tags, m.meta, map[string]interface{}{"value": float64(v)}, time.Now())
if err == nil {
output <- y
}
}
}
if strings.HasPrefix(line, "PortXmitPkts") || strings.HasPrefix(line, "XmtPkts") {
lv := strings.Fields(line)
v, err := strconv.ParseFloat(lv[1], 64)
if err == nil {
y, err := lp.New("ib_xmit_pkts", tags, m.meta, map[string]interface{}{"value": float64(v)}, time.Now())
if err == nil {
output <- y
}
}
}
}
return nil
}
func (m *InfinibandPerfQueryCollector) Read(interval time.Duration, output chan lp.CCMetric) {
if m.init {
for dev, ports := range m.lids {
for port, lid := range ports {
tags := map[string]string{
"type": "node",
"device": dev,
"port": port,
"lid": lid}
path := fmt.Sprintf("%s/%s/ports/%s/counters/", string(IB_BASEPATH), dev, port)
buffer, err := ioutil.ReadFile(fmt.Sprintf("%s/port_rcv_data", path))
if err == nil {
data := strings.Replace(string(buffer), "\n", "", -1)
v, err := strconv.ParseFloat(data, 64)
if err == nil {
y, err := lp.New("ib_recv", tags, m.meta, map[string]interface{}{"value": float64(v)}, time.Now())
if err == nil {
output <- y
}
}
}
buffer, err = ioutil.ReadFile(fmt.Sprintf("%s/port_xmit_data", path))
if err == nil {
data := strings.Replace(string(buffer), "\n", "", -1)
v, err := strconv.ParseFloat(data, 64)
if err == nil {
y, err := lp.New("ib_xmit", tags, m.meta, map[string]interface{}{"value": float64(v)}, time.Now())
if err == nil {
output <- y
}
}
}
buffer, err = ioutil.ReadFile(fmt.Sprintf("%s/port_rcv_packets", path))
if err == nil {
data := strings.Replace(string(buffer), "\n", "", -1)
v, err := strconv.ParseFloat(data, 64)
if err == nil {
y, err := lp.New("ib_recv_pkts", tags, m.meta, map[string]interface{}{"value": float64(v)}, time.Now())
if err == nil {
output <- y
}
}
}
buffer, err = ioutil.ReadFile(fmt.Sprintf("%s/port_xmit_packets", path))
if err == nil {
data := strings.Replace(string(buffer), "\n", "", -1)
v, err := strconv.ParseFloat(data, 64)
if err == nil {
y, err := lp.New("ib_xmit_pkts", tags, m.meta, map[string]interface{}{"value": float64(v)}, time.Now())
if err == nil {
output <- y
}
}
}
}
}
}
}
func (m *InfinibandPerfQueryCollector) Close() {
m.init = false
}

View File

@@ -1,28 +0,0 @@
## `ibstat_perfquery` collector
```json
"ibstat_perfquery": {
"perfquery_path": "/path/to/perfquery",
"exclude_devices": [
"mlx4"
]
}
```
The `ibstat_perfquery` collector includes all Infiniband devices that can be
found below `/sys/class/infiniband/` and where any of the ports provides a
LID file (`/sys/class/infiniband/<dev>/ports/<port>/lid`)
The devices can be filtered with the `exclude_devices` option in the configuration.
For each found LID the collector calls the `perfquery` command. The path to the
`perfquery` command can be configured with the `perfquery_path` option in the configuration
Metrics:
* `ib_recv`
* `ib_xmit`
* `ib_recv_pkts`
* `ib_xmit_pkts`
The collector adds a `device` tag to all metrics

View File

@@ -2,24 +2,24 @@ package collectors
import (
"bufio"
"os"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
// "log"
"encoding/json"
"errors"
"os"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
// Constant for the path to /proc/diskstats
const IOSTATFILE = `/proc/diskstats`
const IOSTAT_SYSFSPATH = `/sys/block`
type IOstatCollectorConfig struct {
ExcludeMetrics []string `json:"exclude_metrics,omitempty"`
// New field for excluding devices via the JSON configuration
ExcludeDevices []string `json:"exclude_devices,omitempty"`
}
type IOstatCollectorEntry struct {
@@ -37,6 +37,7 @@ type IOstatCollector struct {
func (m *IOstatCollector) Init(config json.RawMessage) error {
var err error
m.name = "IOstatCollector"
m.parallel = true
m.meta = map[string]string{"source": m.name, "group": "Disk"}
m.setup()
if len(config) > 0 {
@@ -75,7 +76,7 @@ func (m *IOstatCollector) Init(config json.RawMessage) error {
if len(m.matches) == 0 {
return errors.New("no metrics to collect")
}
file, err := os.Open(string(IOSTATFILE))
file, err := os.Open(IOSTATFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
return err
@@ -86,17 +87,24 @@ func (m *IOstatCollector) Init(config json.RawMessage) error {
for scanner.Scan() {
line := scanner.Text()
linefields := strings.Fields(line)
if len(linefields) < 3 {
continue
}
device := linefields[2]
if strings.Contains(device, "loop") {
continue
}
if _, skip := stringArrayContains(m.config.ExcludeDevices, device); skip {
continue
}
values := make(map[string]int64)
for m := range m.matches {
values[m] = 0
}
m.devices[device] = IOstatCollectorEntry{
tags: map[string]string{
"device": linefields[2],
"device": device,
"type": "node",
},
lastValues: values,
@@ -106,12 +114,12 @@ func (m *IOstatCollector) Init(config json.RawMessage) error {
return err
}
func (m *IOstatCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *IOstatCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
file, err := os.Open(string(IOSTATFILE))
file, err := os.Open(IOSTATFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
return
@@ -125,10 +133,16 @@ func (m *IOstatCollector) Read(interval time.Duration, output chan lp.CCMetric)
continue
}
linefields := strings.Fields(line)
if len(linefields) < 3 {
continue
}
device := linefields[2]
if strings.Contains(device, "loop") {
continue
}
if _, skip := stringArrayContains(m.config.ExcludeDevices, device); skip {
continue
}
if _, ok := m.devices[device]; !ok {
continue
}
@@ -138,7 +152,7 @@ func (m *IOstatCollector) Read(interval time.Duration, output chan lp.CCMetric)
x, err := strconv.ParseInt(linefields[idx], 0, 64)
if err == nil {
diff := x - entry.lastValues[name]
y, err := lp.New(name, entry.tags, m.meta, map[string]interface{}{"value": int(diff)}, time.Now())
y, err := lp.NewMessage(name, entry.tags, m.meta, map[string]interface{}{"value": int(diff)}, time.Now())
if err == nil {
output <- y
}

View File

@@ -4,12 +4,17 @@
```json
"iostat": {
"exclude_metrics": [
"read_ms"
"io_read_ms"
],
"exclude_devices": [
"nvme0n1p1",
"nvme0n1p2",
"md127"
]
}
```
The `iostat` collector reads data from `/proc/diskstats` and outputs a handful of **node** metrics. If a metric is not required, it can be excluded from forwarding to the sink.
The `iostat` collector reads data from `/proc/diskstats` and outputs a handful of **node** metrics. If a metric or device is not required, it can be excluded from forwarding to the sink.
Metrics:
* `io_reads`
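
The metric values are per-interval differences of the raw `/proc/diskstats` counters. A minimal sketch that parses the file and diffs the reads-completed column (field 4), with the device-name and loop-device handling taken from the collector and everything else illustrative:
```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// readReadsCompleted returns the "reads completed" counter (field 4 of
// /proc/diskstats) for every non-loop device.
func readReadsCompleted() (map[string]int64, error) {
	file, err := os.Open("/proc/diskstats")
	if err != nil {
		return nil, err
	}
	defer file.Close()

	counters := make(map[string]int64)
	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) < 4 || strings.Contains(fields[2], "loop") {
			continue
		}
		if v, err := strconv.ParseInt(fields[3], 10, 64); err == nil {
			counters[fields[2]] = v // fields[2] is the device name
		}
	}
	return counters, scanner.Err()
}

func main() {
	before, err := readReadsCompleted()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	time.Sleep(10 * time.Second) // one collection interval (illustrative)
	after, _ := readReadsCompleted()
	for dev, v := range after {
		// io_reads is the number of reads completed during the interval
		fmt.Printf("io_reads device=%s value=%d\n", dev, v-before[dev])
	}
}
```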

View File

@@ -1,85 +1,116 @@
package collectors
import (
"bufio"
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"log"
"os"
"os/exec"
"strconv"
"strings"
"time"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
const IPMITOOL_PATH = `ipmitool`
const IPMISENSORS_PATH = `ipmi-sensors`
type IpmiCollectorConfig struct {
ExcludeDevices []string `json:"exclude_devices"`
IpmitoolPath string `json:"ipmitool_path"`
IpmisensorsPath string `json:"ipmisensors_path"`
}
type IpmiCollector struct {
metricCollector
//tags map[string]string
//matches map[string]string
config IpmiCollectorConfig
config struct {
ExcludeDevices []string `json:"exclude_devices"`
IpmitoolPath string `json:"ipmitool_path"`
IpmisensorsPath string `json:"ipmisensors_path"`
}
ipmitool string
ipmisensors string
}
func (m *IpmiCollector) Init(config json.RawMessage) error {
// Check if already initialized
if m.init {
return nil
}
m.name = "IpmiCollector"
m.setup()
m.meta = map[string]string{"source": m.name, "group": "IPMI"}
m.config.IpmitoolPath = string(IPMITOOL_PATH)
m.config.IpmisensorsPath = string(IPMISENSORS_PATH)
m.ipmitool = ""
m.ipmisensors = ""
m.parallel = true
m.meta = map[string]string{
"source": m.name,
"group": "IPMI",
}
// default path to IPMI tools
m.config.IpmitoolPath = "ipmitool"
m.config.IpmisensorsPath = "ipmi-sensors"
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
if err != nil {
return err
}
}
// Check if executables ipmitool or ipmisensors are found
p, err := exec.LookPath(m.config.IpmitoolPath)
if err == nil {
m.ipmitool = p
command := exec.Command(p)
err := command.Run()
if err != nil {
cclog.ComponentError(m.name, fmt.Sprintf("Failed to execute %s: %v", p, err.Error()))
m.ipmitool = ""
} else {
m.ipmitool = p
}
}
p, err = exec.LookPath(m.config.IpmisensorsPath)
if err == nil {
m.ipmisensors = p
command := exec.Command(p)
err := command.Run()
if err != nil {
cclog.ComponentError(m.name, fmt.Sprintf("Failed to execute %s: %v", p, err.Error()))
m.ipmisensors = ""
} else {
m.ipmisensors = p
}
}
if len(m.ipmitool) == 0 && len(m.ipmisensors) == 0 {
return errors.New("No IPMI reader found")
return errors.New("no usable IPMI reader found")
}
m.init = true
return nil
}
func (m *IpmiCollector) readIpmiTool(cmd string, output chan lp.CCMetric) {
func (m *IpmiCollector) readIpmiTool(cmd string, output chan lp.CCMessage) {
// Setup ipmitool command
command := exec.Command(cmd, "sensor")
command.Wait()
stdout, err := command.Output()
if err != nil {
log.Print(err)
stdout, _ := command.StdoutPipe()
errBuf := new(bytes.Buffer)
command.Stderr = errBuf
// start command
if err := command.Start(); err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("readIpmiTool(): Failed to start command \"%s\": %v", command.String(), err),
)
return
}
ll := strings.Split(string(stdout), "\n")
for _, line := range ll {
lv := strings.Split(line, "|")
// Read command output
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
lv := strings.Split(scanner.Text(), "|")
if len(lv) < 3 {
continue
}
v, err := strconv.ParseFloat(strings.Trim(lv[1], " "), 64)
v, err := strconv.ParseFloat(strings.TrimSpace(lv[1]), 64)
if err == nil {
name := strings.ToLower(strings.Replace(strings.Trim(lv[0], " "), " ", "_", -1))
unit := strings.Trim(lv[2], " ")
name := strings.ToLower(strings.Replace(strings.TrimSpace(lv[0]), " ", "_", -1))
unit := strings.TrimSpace(lv[2])
if unit == "Volts" {
unit = "Volts"
} else if unit == "degrees C" {
@@ -90,16 +121,27 @@ func (m *IpmiCollector) readIpmiTool(cmd string, output chan lp.CCMetric) {
unit = "Watts"
}
y, err := lp.New(name, map[string]string{"type": "node"}, m.meta, map[string]interface{}{"value": v}, time.Now())
y, err := lp.NewMessage(name, map[string]string{"type": "node"}, m.meta, map[string]interface{}{"value": v}, time.Now())
if err == nil {
y.AddMeta("unit", unit)
output <- y
}
}
}
// Wait for command end
if err := command.Wait(); err != nil {
errMsg, _ := io.ReadAll(errBuf)
cclog.ComponentError(
m.name,
fmt.Sprintf("readIpmiTool(): Failed to wait for the end of command \"%s\": %v\n", command.String(), err),
)
cclog.ComponentError(m.name, fmt.Sprintf("readIpmiTool(): command stderr: \"%s\"\n", strings.TrimSpace(string(errMsg))))
return
}
}
func (m *IpmiCollector) readIpmiSensors(cmd string, output chan lp.CCMetric) {
func (m *IpmiCollector) readIpmiSensors(cmd string, output chan lp.CCMessage) {
command := exec.Command(cmd, "--comma-separated-output", "--sdr-cache-recreate")
command.Wait()
@@ -117,7 +159,7 @@ func (m *IpmiCollector) readIpmiSensors(cmd string, output chan lp.CCMetric) {
v, err := strconv.ParseFloat(lv[3], 64)
if err == nil {
name := strings.ToLower(strings.Replace(lv[1], " ", "_", -1))
y, err := lp.New(name, map[string]string{"type": "node"}, m.meta, map[string]interface{}{"value": v}, time.Now())
y, err := lp.NewMessage(name, map[string]string{"type": "node"}, m.meta, map[string]interface{}{"value": v}, time.Now())
if err == nil {
if len(lv) > 4 {
y.AddMeta("unit", lv[4])
@@ -129,17 +171,17 @@ func (m *IpmiCollector) readIpmiSensors(cmd string, output chan lp.CCMetric) {
}
}
func (m *IpmiCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *IpmiCollector) Read(interval time.Duration, output chan lp.CCMessage) {
// Check if already initialized
if !m.init {
return
}
if len(m.config.IpmitoolPath) > 0 {
_, err := os.Stat(m.config.IpmitoolPath)
if err == nil {
m.readIpmiTool(m.config.IpmitoolPath, output)
}
m.readIpmiTool(m.config.IpmitoolPath, output)
} else if len(m.config.IpmisensorsPath) > 0 {
_, err := os.Stat(m.config.IpmisensorsPath)
if err == nil {
m.readIpmiSensors(m.config.IpmisensorsPath, output)
}
m.readIpmiSensors(m.config.IpmisensorsPath, output)
}
}

View File

@@ -8,9 +8,6 @@
}
```
The `ipmistat` collector reads data from `ipmitool` (`ipmitool sensor`) or `ipmi-sensors` (`ipmi-sensors --sdr-cache-recreate --comma-separated-output`).
The metrics depend on the output of the underlying tools but contain temperature, power and energy metrics.

File diff suppressed because it is too large

View File

@@ -3,37 +3,152 @@
The `likwid` collector is probably the most complicated collector. The LIKWID library is included as static library with *direct* access mode. The *direct* access mode is suitable if the daemon is executed by a root user. The static library does not contain the performance groups, so all information needs to be provided in the configuration.
The `likwid` configuration consists of two parts, the "eventsets" and "globalmetrics":
- An event set list itself has two parts, the "events" and a set of derivable "metrics". Each of the "events" is a counter:event pair in LIKWID's syntax. The "metrics" are a list of formulas to derive the metric value from the measurements of the "events". Each metric has a name, the formula, a scope and a publish flag. A counter names can be used like variables in the formulas, so `PMC0+PMC1` sums the measurements for the both events configured in the counters `PMC0` and `PMC1`. The scope tells the Collector whether it is a metric for each hardware thread (`cpu`) or each CPU socket (`socket`). The last one is the publishing flag. It tells the collector whether a metric should be sent to the router.
- The global metrics are metrics which require data from all event set measurements to be derived. The inputs are the metrics in the event sets. Similar to the metrics in the event sets, the global metrics are defined by a name, a formula, a scope and a publish flag. See event set metrics for details. The only difference is that there is no access to the raw event measurements anymore but only to the metrics. So, the idea is to derive a metric in the "eventsets" section and reuse it in the "globalmetrics" part. If you need a metric only for deriving the global metrics, disable forwarding of the event set metrics. **Be aware** that the combination might be misleading because the "behavior" of a metric changes over time and the multiple measurements might count different computing phases.
```json
"likwid": {
"force_overwrite" : false,
"invalid_to_zero" : false,
"liblikwid_path" : "/path/to/liblikwid.so",
"accessdaemon_path" : "/folder/that/contains/likwid-accessD",
"access_mode" : "direct or accessdaemon or perf_event",
"lockfile_path" : "/var/run/likwid.lock",
"eventsets": [
{
"events" : {
"COUNTER0": "EVENT0",
"COUNTER1": "EVENT1"
},
"metrics" : [
{
"name": "sum_01",
"calc": "COUNTER0 + COUNTER1",
"publish": false,
"unit": "myunit",
"type": "hwthread"
}
]
}
],
"globalmetrics" : [
{
"name": "global_sum",
"calc": "sum_01",
"publish": true,
"unit": "myunit",
"type": "hwthread"
}
]
}
```
The `likwid` configuration consists of two parts, the `eventsets` and `globalmetrics`:
- An event set list itself has two parts, the `events` and a set of derivable `metrics`. Each of the `events` is a `counter:event` pair in LIKWID's syntax. The `metrics` are a list of formulas to derive the metric value from the measurements of the `events`. Each metric has a name, the formula, a type and a publish flag. There is an optional `unit` field. Counter names can be used like variables in the formulas, so `PMC0+PMC1` sums the measurements for both events configured in the counters `PMC0` and `PMC1`. You can optionally use `time` for the measurement time and `inverseClock` for `1.0/baseCpuFrequency`. The type tells the LikwidCollector whether it is a metric for each hardware thread (`cpu`) or each CPU socket (`socket`). You may specify a unit for the metric with `unit`. The last one is the publishing flag. It tells the LikwidCollector whether a metric should be sent to the router or is only used internally to compute a global metric.
- The `globalmetrics` are metrics which require data from multiple event set measurements to be derived. The inputs are the metrics in the event sets. Similar to the metrics in the event sets, the global metrics are defined by a name, a formula, a type and a publish flag. See event set metrics for details. The only difference is that there is no access to the raw event measurements anymore but only to the metrics. Also `time` and `inverseClock` cannot be used anymore. So, the idea is to derive a metric in the `eventsets` section and reuse it in the `globalmetrics` part. If you need a metric only for deriving the global metrics, disable forwarding of the event set metrics (`"publish": false`). **Be aware** that the combination might be misleading because the "behavior" of a metric changes over time and the multiple measurements might count different computing phases. Similar to the metrics in the eventset, you can specify a metric unit with the `unit` field.
Additional options:
- `force_overwrite`: Same as setting `LIKWID_FORCE=1`. In case counters are already in-use, LIKWID overwrites their configuration to do its measurements
- `invalid_to_zero`: In some cases, the calculations result in `NaN` or `Inf`. With this option, all `NaN` and `Inf` values are replaced with `0.0`.
- `invalid_to_zero`: In some cases, the calculations result in `NaN` or `Inf`. With this option, all `NaN` and `Inf` values are replaced with `0.0`. See below in the [separate section](./likwidMetric.md#invalid_to_zero-option)
- `access_mode`: Specify LIKWID access mode: `direct` for direct register access as root user or `accessdaemon`. The access mode `perf_event` is currently untested.
- `accessdaemon_path`: Folder of the accessDaemon `likwid-accessD` (like `/usr/local/sbin`)
- `liblikwid_path`: Location of `liblikwid.so` including file name like `/usr/local/lib/liblikwid.so`
- `lockfile_path`: Location of LIKWID's lock file if multiple tools should access the hardware counters. Default `/var/run/likwid.lock`
### Available metric scopes
### Available metric types
Hardware performance counters are scattered all over the system nowadays. A counter covers a specific part of the system. While there are hardware thread specific counters for CPU cycles, instructions and so on, some others are specific for a whole CPU socket/package. To address that, the collector provides the specification of a 'scope' for each metric.
Hardware performance counters are scattered all over the system nowadays. A counter covers a specific part of the system. While there are hardware thread specific counters for CPU cycles, instructions and so on, some others are specific for a whole CPU socket/package. To address that, the LikwidCollector provides the specification of a `type` for each metric.
- `cpu` : One metric per CPU hardware thread with the tags `"type" : "cpu"` and `"type-id" : "$cpu_id"`
- `hwthread` : One metric per CPU hardware thread with the tags `"type" : "hwthread"` and `"type-id" : "$hwthread_id"`
- `socket` : One metric per CPU socket/package with the tags `"type" : "socket"` and `"type-id" : "$socket_id"`
**Note:** You cannot specify `socket` scope for a metric that is measured at `cpu` scope, so some kind of expert knowledge or lookup work in the [Likwid Wiki](https://github.com/RRZE-HPC/likwid/wiki) is required. Get the scope of each counter from the *Architecture* pages and as soon as one counter in a metric is socket-specific, the whole metric is socket-specific.
**Note:** You cannot specify `socket` type for a metric that is measured at `hwthread` type, so some kind of expert knowledge or lookup work in the [Likwid Wiki](https://github.com/RRZE-HPC/likwid/wiki) is required. Get the type of each counter from the *Architecture* pages and as soon as one counter in a metric is socket-specific, the whole metric is socket-specific.
As a guideline:
- All counters `FIXCx`, `PMCy` and `TMAz` have the scope `cpu`
- All counter names containing `BOX` have the scope `socket`
- All `PWRx` counters have scope `socket`, except `"PWR1" : "RAPL_CORE_ENERGY"` has `cpu` scope
- All `DFCx` counters have scope `socket`
- All counters `FIXCx`, `PMCy` and `TMAz` have the type `hwthread`
- All counter names containing `BOX` have the type `socket`
- All `PWRx` counters have type `socket`, except `"PWR1" : "RAPL_CORE_ENERGY"` has `hwthread` type
- All `DFCx` counters have type `socket`
### Help with the configuration
The configuration for the `likwid` collector is quite complicated. Most users don't use LIKWID with the event:counter notation but rely on the performance groups defined by the LIKWID team for each architecture. In order to help with the `likwid` collector configuration, we included a script `scripts/likwid_perfgroup_to_cc_config.py` that creates the configuration of an `eventset` from a performance group (using a LIKWID installation in `$PATH`):
```
$ likwid-perfctr -i
[...]
short name: ICX
[...]
$ likwid-perfctr -a
[...]
MEM_DP
MEM
FLOPS_SP
CLOCK
[...]
$ scripts/likwid_perfgroup_to_cc_config.py ICX MEM_DP
{
"events": {
"FIXC0": "INSTR_RETIRED_ANY",
"FIXC1": "CPU_CLK_UNHALTED_CORE",
"..." : "..."
},
"metrics" : [
{
"calc": "time",
"name": "Runtime (RDTSC) [s]",
"publish": true,
"unit": "seconds"
"type": "hwthread"
},
{
"..." : "..."
}
]
}
```
You can copy this JSON and add it to the `eventsets` list. If you specify multiple event sets, you can add globally derived metrics in the extra `global_metrics` section with the metric names as variables.
### Mixed usage between daemon and users
LIKWID checks the file `/var/run/likwid.lock` before performing any interfering operations. Who is allowed to access the counters is determined by the owner of the file. If it does not exist, it is created for the current user. So, if you want to temporarily allow counter access to a user (e.g. in a job):
Before (SLURM prolog, ...)
```bash
chown $JOBUSER /var/run/likwid.lock
```
After (SLURM epilog, ...)
```bash
chown $CCUSER /var/run/likwid.lock
```
### `invalid_to_zero` option
In some cases LIKWID returns `0.0` for events that are used further in processing, for example as the divisor in a calculation. The evaluation of a metric may then result in `NaN` or `+-Inf`. Such metrics are normally neither created nor forwarded to the router because the [InfluxDB line protocol](https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/#float) does not support these special floating-point values. If you want them to be sent anyway, this option forces these metric values to be `0.0` instead.
One might think this does not happen often, but metrics that are heavily used in performance engineering, like instructions per cycle (IPC) or, even more frequently, the actual CPU clock, are derived from events like `CPU_CLK_UNHALTED_CORE` (Intel), which does not increment in halted state (as the name implies). There are several power management mechanisms in a chip that can put a hardware thread into such a state. Moreover, if a core executes no cycles, many other events are not incremented either (like `INSTR_RETIRED_ANY` for retired instructions, which is also part of the IPC calculation).
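As a concrete illustration, assume an IPC metric where `PMC0` is mapped to an instruction count event and `PMC1` to `CPU_CLK_UNHALTED_CORE`. If the hardware thread was halted for the whole measurement interval, `PMC1` reads `0`, so `PMC0/PMC1` evaluates to `NaN` and the metric would normally be dropped; with `"invalid_to_zero": true` it is forwarded with the value `0.0` instead:
```json
{
  "name": "ipc",
  "calc": "PMC0/PMC1",
  "type": "hwthread",
  "publish": true
}
```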
### `lockfile_path` option
LIKWID can be configured with a lock file that restricts access to the performance monitoring registers: only the owner of the lock file is allowed to access them. When the `lockfile_path` option is set, the collector watches this file and stops monitoring if the owner of the lock file changes. This feature is useful when users should be able to perform their own hardware performance counter measurements through LIKWID or any other tool.
### `send_*_total_values` options
- `send_core_total_values`: Metrics, which are usually collected on a per hardware thread basis, are additionally summed up per CPU core.
- `send_socket_total_values`: Metrics, which are usually collected on a per hardware thread basis, are additionally summed up per CPU socket.
- `send_node_total_values`: Metrics, which are usually collected on a per hardware thread basis, are additionally summed up per node (see the sketch below).
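A minimal sketch enabling the per-core and per-socket sums while leaving the node-wide sum disabled (`eventsets` and `globalmetrics` are omitted here only for brevity):
```json
"likwid": {
  "send_core_total_values": true,
  "send_socket_total_values": true,
  "send_node_total_values": false,
  "eventsets": [],
  "globalmetrics": []
}
```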
### Example configuration
#### AMD Zen3
```json
"likwid": {
"force_overwrite" : false,
"nan_to_zero" : false,
"invalid_to_zero" : false,
"eventsets": [
{
"events": {
@@ -52,25 +167,28 @@ As a guideline:
{
"name": "ipc",
"calc": "PMC0/PMC1",
"scope": "cpu",
"type": "hwthread",
"publish": true
},
{
"name": "flops_any",
"calc": "0.000001*PMC2/time",
"scope": "cpu",
"unit": "MFlops/s",
"type": "hwthread",
"publish": true
},
{
"name": "clock_mhz",
"name": "clock",
"calc": "0.000001*(FIXC1/FIXC2)/inverseClock",
"scope": "cpu",
"type": "hwthread",
"unit": "MHz",
"publish": true
},
{
"name": "mem1",
"calc": "0.000001*(DFC0+DFC1+DFC2+DFC3)*64.0/time",
"scope": "socket",
"unit": "Mbyte/s",
"type": "socket",
"publish": false
}
]
@@ -88,19 +206,22 @@ As a guideline:
{
"name": "pwr_core",
"calc": "PWR0/time",
"scope": "socket",
"unit": "Watt"
"type": "socket",
"publish": true
},
{
"name": "pwr_pkg",
"calc": "PWR1/time",
"scope": "socket",
"type": "socket",
"unit": "Watt"
"publish": true
},
{
"name": "mem2",
"calc": "0.000001*(DFC0+DFC1+DFC2+DFC3)*64.0/time",
"scope": "socket",
"unit": "Mbyte/s",
"type": "socket",
"publish": false
}
]
@@ -110,7 +231,8 @@ As a guideline:
{
"name": "mem_bw",
"calc": "mem1+mem2",
"scope": "socket",
"type": "socket",
"unit": "Mbyte/s",
"publish": true
}
]
@@ -119,9 +241,10 @@ As a guideline:
### How to get the eventsets and metrics from LIKWID
The `likwid` collector reads hardware performance counters at **cpu** and **socket** level. The configuration looks quite complicated, but it is basically copy&paste from [LIKWID's performance groups](https://github.com/RRZE-HPC/likwid/tree/master/groups). The collector went through multiple iterations that tried to use the performance groups directly, but that approach lacked flexibility. The current way of configuration provides the most flexibility.
The `likwid` collector reads hardware performance counters at **hwthread** and **socket** level. The configuration looks quite complicated, but it is basically copy&paste from [LIKWID's performance groups](https://github.com/RRZE-HPC/likwid/tree/master/groups). The collector went through multiple iterations that tried to use the performance groups directly, but that approach lacked flexibility. The current way of configuration provides the most flexibility.
The logic is as follows: there are multiple event sets, each consisting of a list of counters+events and a list of metrics. If you compare a common performance group with the example setting above, there is not much difference:
```
EVENTSET -> "events": {
FIXC1 ACTUAL_CPU_CLOCK -> "FIXC1": "ACTUAL_CPU_CLOCK",
@@ -140,9 +263,10 @@ METRICS -> "metrics": [
IPC PMC0/PMC1 -> {
-> "name" : "IPC",
-> "calc" : "PMC0/PMC1",
-> "scope": "cpu",
-> "type": "hwthread",
-> "publish": true
-> }
-> ]
```
The script `scripts/likwid_perfgroup_to_cc_config.py` might help you.


@@ -3,23 +3,21 @@ package collectors
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
//
// LoadavgCollector collects:
// * load average of last 1, 5 & 15 minutes
// * number of processes currently runnable
// * total number of processes in system
//
// See: https://www.kernel.org/doc/html/latest/filesystems/proc.html
//
const LOADAVGFILE = "/proc/loadavg"
type LoadavgCollector struct {
@@ -36,6 +34,7 @@ type LoadavgCollector struct {
func (m *LoadavgCollector) Init(config json.RawMessage) error {
m.name = "LoadavgCollector"
m.parallel = true
m.setup()
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
@@ -67,17 +66,15 @@ func (m *LoadavgCollector) Init(config json.RawMessage) error {
return nil
}
func (m *LoadavgCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *LoadavgCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
buffer, err := ioutil.ReadFile(LOADAVGFILE)
buffer, err := os.ReadFile(LOADAVGFILE)
if err != nil {
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to read file '%s': %v", LOADAVGFILE, err))
}
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to read file '%s': %v", LOADAVGFILE, err))
return
}
now := time.Now()
@@ -95,7 +92,7 @@ func (m *LoadavgCollector) Read(interval time.Duration, output chan lp.CCMetric)
if m.load_skips[i] {
continue
}
y, err := lp.New(name, m.tags, m.meta, map[string]interface{}{"value": x}, now)
y, err := lp.NewMessage(name, m.tags, m.meta, map[string]interface{}{"value": x}, now)
if err == nil {
output <- y
}
@@ -114,7 +111,7 @@ func (m *LoadavgCollector) Read(interval time.Duration, output chan lp.CCMetric)
if m.proc_skips[i] {
continue
}
y, err := lp.New(name, m.tags, m.meta, map[string]interface{}{"value": x}, now)
y, err := lp.NewMessage(name, m.tags, m.meta, map[string]interface{}{"value": x}, now)
if err == nil {
output <- y
}


@@ -10,8 +10,8 @@ import (
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
const LUSTRE_SYSFS = `/sys/fs/lustre`
@@ -19,23 +19,41 @@ const LCTL_CMD = `lctl`
const LCTL_OPTION = `get_param`
type LustreCollectorConfig struct {
LCtlCommand string `json:"lctl_command"`
ExcludeMetrics []string `json:"exclude_metrics"`
SendAllMetrics bool `json:"send_all_metrics"`
LCtlCommand string `json:"lctl_command,omitempty"`
ExcludeMetrics []string `json:"exclude_metrics,omitempty"`
Sudo bool `json:"use_sudo,omitempty"`
SendAbsoluteValues bool `json:"send_abs_values,omitempty"`
SendDerivedValues bool `json:"send_derived_values,omitempty"`
SendDiffValues bool `json:"send_diff_values,omitempty"`
}
type LustreMetricDefinition struct {
name string
lineprefix string
lineoffset int
unit string
calc string
}
type LustreCollector struct {
metricCollector
tags map[string]string
matches map[string]map[string]int
stats map[string]map[string]int64
config LustreCollectorConfig
lctl string
tags map[string]string
config LustreCollectorConfig
lctl string
sudoCmd string
lastTimestamp time.Time // Store time stamp of last tick to derive bandwidths
definitions []LustreMetricDefinition // Combined list without excluded metrics
stats map[string]map[string]int64 // Data for last value per device and metric
}
func (m *LustreCollector) getDeviceDataCommand(device string) []string {
var command *exec.Cmd
statsfile := fmt.Sprintf("llite.%s.stats", device)
command := exec.Command(m.lctl, LCTL_OPTION, statsfile)
if m.config.Sudo {
command = exec.Command(m.sudoCmd, m.lctl, LCTL_OPTION, statsfile)
} else {
command = exec.Command(m.lctl, LCTL_OPTION, statsfile)
}
command.Wait()
stdout, _ := command.Output()
return strings.Split(string(stdout), "\n")
@@ -68,21 +86,209 @@ func (m *LustreCollector) getDevices() []string {
return devices
}
func getMetricData(lines []string, prefix string, offset int) (int64, error) {
for _, line := range lines {
if strings.HasPrefix(line, prefix) {
lf := strings.Fields(line)
return strconv.ParseInt(lf[offset], 0, 64)
}
}
return 0, errors.New("no such line in data")
}
// //Version reading the stats data of a device from sysfs
// func (m *LustreCollector) getDeviceDataSysfs(device string) []string {
// llitedir := filepath.Join(LUSTRE_SYSFS, "llite")
// devdir := filepath.Join(llitedir, device)
// statsfile := filepath.Join(devdir, "stats")
// buffer, err := ioutil.ReadFile(statsfile)
// buffer, err := os.ReadFile(statsfile)
// if err != nil {
// return make([]string, 0)
// }
// return strings.Split(string(buffer), "\n")
// }
var LustreAbsMetrics = []LustreMetricDefinition{
{
name: "lustre_read_requests",
lineprefix: "read_bytes",
lineoffset: 1,
unit: "requests",
calc: "none",
},
{
name: "lustre_write_requests",
lineprefix: "write_bytes",
lineoffset: 1,
unit: "requests",
calc: "none",
},
{
name: "lustre_read_bytes",
lineprefix: "read_bytes",
lineoffset: 6,
unit: "bytes",
calc: "none",
},
{
name: "lustre_write_bytes",
lineprefix: "write_bytes",
lineoffset: 6,
unit: "bytes",
calc: "none",
},
{
name: "lustre_open",
lineprefix: "open",
lineoffset: 1,
unit: "",
calc: "none",
},
{
name: "lustre_close",
lineprefix: "close",
lineoffset: 1,
unit: "",
calc: "none",
},
{
name: "lustre_setattr",
lineprefix: "setattr",
lineoffset: 1,
unit: "",
calc: "none",
},
{
name: "lustre_getattr",
lineprefix: "getattr",
lineoffset: 1,
unit: "",
calc: "none",
},
{
name: "lustre_statfs",
lineprefix: "statfs",
lineoffset: 1,
unit: "",
calc: "none",
},
{
name: "lustre_inode_permission",
lineprefix: "inode_permission",
lineoffset: 1,
unit: "",
calc: "none",
},
}
var LustreDiffMetrics = []LustreMetricDefinition{
{
name: "lustre_read_requests_diff",
lineprefix: "read_bytes",
lineoffset: 1,
unit: "requests",
calc: "difference",
},
{
name: "lustre_write_requests_diff",
lineprefix: "write_bytes",
lineoffset: 1,
unit: "requests",
calc: "difference",
},
{
name: "lustre_read_bytes_diff",
lineprefix: "read_bytes",
lineoffset: 6,
unit: "bytes",
calc: "difference",
},
{
name: "lustre_write_bytes_diff",
lineprefix: "write_bytes",
lineoffset: 6,
unit: "bytes",
calc: "difference",
},
{
name: "lustre_open_diff",
lineprefix: "open",
lineoffset: 1,
unit: "",
calc: "difference",
},
{
name: "lustre_close_diff",
lineprefix: "close",
lineoffset: 1,
unit: "",
calc: "difference",
},
{
name: "lustre_setattr_diff",
lineprefix: "setattr",
lineoffset: 1,
unit: "",
calc: "difference",
},
{
name: "lustre_getattr_diff",
lineprefix: "getattr",
lineoffset: 1,
unit: "",
calc: "difference",
},
{
name: "lustre_statfs_diff",
lineprefix: "statfs",
lineoffset: 1,
unit: "",
calc: "difference",
},
{
name: "lustre_inode_permission_diff",
lineprefix: "inode_permission",
lineoffset: 1,
unit: "",
calc: "difference",
},
}
var LustreDeriveMetrics = []LustreMetricDefinition{
{
name: "lustre_read_requests_rate",
lineprefix: "read_bytes",
lineoffset: 1,
unit: "requests/sec",
calc: "derivative",
},
{
name: "lustre_write_requests_rate",
lineprefix: "write_bytes",
lineoffset: 1,
unit: "requests/sec",
calc: "derivative",
},
{
name: "lustre_read_bw",
lineprefix: "read_bytes",
lineoffset: 6,
unit: "bytes/sec",
calc: "derivative",
},
{
name: "lustre_write_bw",
lineprefix: "write_bytes",
lineoffset: 6,
unit: "bytes/sec",
calc: "derivative",
},
}
func (m *LustreCollector) Init(config json.RawMessage) error {
var err error
m.name = "LustreCollector"
m.parallel = true
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
@@ -92,42 +298,28 @@ func (m *LustreCollector) Init(config json.RawMessage) error {
m.setup()
m.tags = map[string]string{"type": "node"}
m.meta = map[string]string{"source": m.name, "group": "Lustre"}
defmatches := map[string]map[string]int{
"read_bytes": {"lustre_read_bytes": 6, "lustre_read_requests": 1},
"write_bytes": {"lustre_write_bytes": 6, "lustre_write_requests": 1},
"open": {"lustre_open": 1},
"close": {"lustre_close": 1},
"setattr": {"lustre_setattr": 1},
"getattr": {"lustre_getattr": 1},
"statfs": {"lustre_statfs": 1},
"inode_permission": {"lustre_inode_permission": 1}}
// Lustre file system statistics can only be queried by user root
user, err := user.Current()
if err != nil {
cclog.ComponentError(m.name, "Failed to get current user:", err.Error())
return err
}
if user.Uid != "0" {
cclog.ComponentError(m.name, "Lustre file system statistics can only be queried by user root:", err.Error())
return err
// or with password-less sudo
if !m.config.Sudo {
user, err := user.Current()
if err != nil {
cclog.ComponentError(m.name, "Failed to get current user:", err.Error())
return err
}
if user.Uid != "0" {
cclog.ComponentError(m.name, "Lustre file system statistics can only be queried by user root")
return err
}
} else {
p, err := exec.LookPath("sudo")
if err != nil {
cclog.ComponentError(m.name, "Cannot find 'sudo'")
return err
}
m.sudoCmd = p
}
m.matches = make(map[string]map[string]int)
for lineprefix, names := range defmatches {
for metricname, offset := range names {
_, skip := stringArrayContains(m.config.ExcludeMetrics, metricname)
if skip {
continue
}
if _, prefixExist := m.matches[lineprefix]; !prefixExist {
m.matches[lineprefix] = make(map[string]int)
}
if _, metricExist := m.matches[lineprefix][metricname]; !metricExist {
m.matches[lineprefix][metricname] = offset
}
}
}
p, err := exec.LookPath(m.config.LCtlCommand)
if err != nil {
p, err = exec.LookPath(LCTL_CMD)
@@ -137,75 +329,101 @@ func (m *LustreCollector) Init(config json.RawMessage) error {
}
m.lctl = p
m.definitions = []LustreMetricDefinition{}
if m.config.SendAbsoluteValues {
for _, def := range LustreAbsMetrics {
if _, skip := stringArrayContains(m.config.ExcludeMetrics, def.name); !skip {
m.definitions = append(m.definitions, def)
}
}
}
if m.config.SendDiffValues {
for _, def := range LustreDiffMetrics {
if _, skip := stringArrayContains(m.config.ExcludeMetrics, def.name); !skip {
m.definitions = append(m.definitions, def)
}
}
}
if m.config.SendDerivedValues {
for _, def := range LustreDeriveMetrics {
if _, skip := stringArrayContains(m.config.ExcludeMetrics, def.name); !skip {
m.definitions = append(m.definitions, def)
}
}
}
if len(m.definitions) == 0 {
return errors.New("no metrics to collect")
}
devices := m.getDevices()
if len(devices) == 0 {
return errors.New("no metrics to collect")
return errors.New("no Lustre devices found")
}
m.stats = make(map[string]map[string]int64)
for _, d := range devices {
m.stats[d] = make(map[string]int64)
for _, names := range m.matches {
for metricname := range names {
m.stats[d][metricname] = 0
data := m.getDeviceDataCommand(d)
for _, def := range m.definitions {
x, err := getMetricData(data, def.lineprefix, def.lineoffset)
if err == nil {
m.stats[d][def.name] = x
} else {
m.stats[d][def.name] = 0
}
}
}
m.lastTimestamp = time.Now()
m.init = true
return nil
}
func (m *LustreCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *LustreCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
now := time.Now()
tdiff := now.Sub(m.lastTimestamp)
for device, devData := range m.stats {
stats := m.getDeviceDataCommand(device)
processed := []string{}
for _, line := range stats {
lf := strings.Fields(line)
if len(lf) > 1 {
if fields, ok := m.matches[lf[0]]; ok {
for name, idx := range fields {
x, err := strconv.ParseInt(lf[idx], 0, 64)
if err != nil {
continue
}
value := x - devData[name]
devData[name] = x
if value < 0 {
value = 0
}
y, err := lp.New(name, m.tags, m.meta, map[string]interface{}{"value": value}, time.Now())
if err == nil {
y.AddTag("device", device)
if strings.Contains(name, "byte") {
y.AddMeta("unit", "Byte")
}
output <- y
if m.config.SendAllMetrics {
processed = append(processed, name)
}
}
}
}
data := m.getDeviceDataCommand(device)
for _, def := range m.definitions {
var use_x int64
var err error
var y lp.CCMessage
x, err := getMetricData(data, def.lineprefix, def.lineoffset)
if err == nil {
use_x = x
} else {
use_x = devData[def.name]
}
}
if m.config.SendAllMetrics {
for name := range devData {
if _, done := stringArrayContains(processed, name); !done {
y, err := lp.New(name, m.tags, m.meta, map[string]interface{}{"value": 0}, time.Now())
if err == nil {
y.AddTag("device", device)
if strings.Contains(name, "byte") {
y.AddMeta("unit", "Byte")
}
output <- y
}
var value interface{}
switch def.calc {
case "none":
value = use_x
y, err = lp.NewMessage(def.name, m.tags, m.meta, map[string]interface{}{"value": value}, time.Now())
case "difference":
value = use_x - devData[def.name]
if value.(int64) < 0 {
value = 0
}
y, err = lp.NewMessage(def.name, m.tags, m.meta, map[string]interface{}{"value": value}, time.Now())
case "derivative":
value = float64(use_x-devData[def.name]) / tdiff.Seconds()
if value.(float64) < 0 {
value = 0
}
y, err = lp.NewMessage(def.name, m.tags, m.meta, map[string]interface{}{"value": value}, time.Now())
}
if err == nil {
y.AddTag("device", device)
if len(def.unit) > 0 {
y.AddMeta("unit", def.unit)
}
output <- y
}
devData[def.name] = use_x
}
}
m.lastTimestamp = now
}
func (m *LustreCollector) Close() {


@@ -3,27 +3,44 @@
```json
"lustrestat": {
"procfiles" : [
"/proc/fs/lustre/llite/lnec-XXXXXX/stats"
],
"lctl_command": "/path/to/lctl",
"exclude_metrics": [
"setattr",
"getattr"
]
],
"send_abs_values" : true,
"send_derived_values" : true,
"send_diff_values": true,
"use_sudo": false
}
```
The `lustrestat` collector reads from the procfs stat files for Lustre like `/proc/fs/lustre/llite/lnec-XXXXXX/stats`.
The `lustrestat` collector uses the `lctl` application with the `get_param` option to get all `llite` metrics (Lustre client). The `llite` metrics are only readable by the root user. If password-less sudo is configured, you can enable `use_sudo` in the configuration.
Metrics:
* `read_bytes`
* `read_requests`
* `write_bytes`
* `write_requests`
* `open`
* `close`
* `getattr`
* `setattr`
* `statfs`
* `inode_permission`
* `lustre_read_bytes` (unit `bytes`)
* `lustre_read_requests` (unit `requests`)
* `lustre_write_bytes` (unit `bytes`)
* `lustre_write_requests` (unit `requests`)
* `lustre_open`
* `lustre_close`
* `lustre_getattr`
* `lustre_setattr`
* `lustre_statfs`
* `lustre_inode_permission`
* `lustre_read_bw` (if `send_derived_values == true`, unit `bytes/sec`)
* `lustre_write_bw` (if `send_derived_values == true`, unit `bytes/sec`)
* `lustre_read_requests_rate` (if `send_derived_values == true`, unit `requests/sec`)
* `lustre_write_requests_rate` (if `send_derived_values == true`, unit `requests/sec`)
* `lustre_read_bytes_diff` (if `send_diff_values == true`, unit `bytes`)
* `lustre_read_requests_diff` (if `send_diff_values == true`, unit `requests`)
* `lustre_write_bytes_diff` (if `send_diff_values == true`, unit `bytes`)
* `lustre_write_requests_diff` (if `send_diff_values == true`, unit `requests`)
* `lustre_open_diff` (if `send_diff_values == true`)
* `lustre_close_diff` (if `send_diff_values == true`)
* `lustre_getattr_diff` (if `send_diff_values == true`)
* `lustre_setattr_diff` (if `send_diff_values == true`)
* `lustre_statfs_diff` (if `send_diff_values == true`)
* `lustre_inode_permission_diff` (if `send_diff_values == true`)
This collector adds a `device` tag.


@@ -1,46 +1,102 @@
package collectors
import (
"bufio"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"log"
"os"
"path/filepath"
"regexp"
"strconv"
"strings"
"time"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
const MEMSTATFILE = `/proc/meminfo`
const MEMSTATFILE = "/proc/meminfo"
const NUMA_MEMSTAT_BASE = "/sys/devices/system/node"
type MemstatCollectorConfig struct {
ExcludeMetrics []string `json:"exclude_metrics"`
NodeStats bool `json:"node_stats,omitempty"`
NumaStats bool `json:"numa_stats,omitempty"`
}
type MemstatCollectorNode struct {
file string
tags map[string]string
}
type MemstatCollector struct {
metricCollector
stats map[string]int64
tags map[string]string
matches map[string]string
config MemstatCollectorConfig
stats map[string]int64
tags map[string]string
matches map[string]string
config MemstatCollectorConfig
nodefiles map[int]MemstatCollectorNode
sendMemUsed bool
}
type MemstatStats struct {
value float64
unit string
}
func getStats(filename string) map[string]MemstatStats {
stats := make(map[string]MemstatStats)
file, err := os.Open(filename)
if err != nil {
cclog.Error(err.Error())
}
defer file.Close()
scanner := bufio.NewScanner(file)
for scanner.Scan() {
line := scanner.Text()
linefields := strings.Fields(line)
if len(linefields) == 3 {
v, err := strconv.ParseFloat(linefields[1], 64)
if err == nil {
stats[strings.Trim(linefields[0], ":")] = MemstatStats{
value: v,
unit: linefields[2],
}
}
} else if len(linefields) == 5 {
v, err := strconv.ParseFloat(linefields[3], 64)
if err == nil {
cclog.ComponentDebug("getStats", strings.Trim(linefields[2], ":"), v, linefields[4])
stats[strings.Trim(linefields[2], ":")] = MemstatStats{
value: v,
unit: linefields[4],
}
}
}
}
return stats
}
func (m *MemstatCollector) Init(config json.RawMessage) error {
var err error
m.name = "MemstatCollector"
m.parallel = true
m.config.NodeStats = true
m.config.NumaStats = false
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
return err
}
}
m.meta = map[string]string{"source": m.name, "group": "Memory", "unit": "kByte"}
m.meta = map[string]string{"source": m.name, "group": "Memory"}
m.stats = make(map[string]int64)
m.matches = make(map[string]string)
m.tags = map[string]string{"type": "node"}
matches := map[string]string{`MemTotal`: "mem_total",
matches := map[string]string{
"MemTotal": "mem_total",
"SwapTotal": "swap_total",
"SReclaimable": "mem_sreclaimable",
"Slab": "mem_slab",
@@ -48,79 +104,129 @@ func (m *MemstatCollector) Init(config json.RawMessage) error {
"Buffers": "mem_buffers",
"Cached": "mem_cached",
"MemAvailable": "mem_available",
"SwapFree": "swap_free"}
"SwapFree": "swap_free",
"MemShared": "mem_shared",
}
for k, v := range matches {
_, skip := stringArrayContains(m.config.ExcludeMetrics, k)
if !skip {
m.matches[k] = v
}
}
m.sendMemUsed = false
if _, skip := stringArrayContains(m.config.ExcludeMetrics, "mem_used"); !skip {
m.sendMemUsed = true
}
if len(m.matches) == 0 {
return errors.New("No metrics to collect")
return errors.New("no metrics to collect")
}
m.setup()
_, err = ioutil.ReadFile(string(MEMSTATFILE))
if err == nil {
m.init = true
}
return err
}
func (m *MemstatCollector) Read(interval time.Duration, output chan lp.CCMetric) {
if !m.init {
return
}
buffer, err := ioutil.ReadFile(string(MEMSTATFILE))
if err != nil {
log.Print(err)
return
}
ll := strings.Split(string(buffer), "\n")
for _, line := range ll {
ls := strings.Split(line, `:`)
if len(ls) > 1 {
lv := strings.Fields(ls[1])
m.stats[ls[0]], err = strconv.ParseInt(lv[0], 0, 64)
if m.config.NodeStats {
if stats := getStats(MEMSTATFILE); len(stats) == 0 {
return fmt.Errorf("cannot read data from file %s", MEMSTATFILE)
}
}
if _, exists := m.stats[`MemTotal`]; !exists {
err = errors.New("Parse error")
log.Print(err)
return
}
for match, name := range m.matches {
if _, exists := m.stats[match]; !exists {
err = fmt.Errorf("Parse error for %s : %s", match, name)
log.Print(err)
continue
}
y, err := lp.New(name, m.tags, m.meta, map[string]interface{}{"value": int(float64(m.stats[match]) * 1.0e-3)}, time.Now())
if m.config.NumaStats {
globPattern := filepath.Join(NUMA_MEMSTAT_BASE, "node[0-9]*", "meminfo")
regex := regexp.MustCompile(filepath.Join(NUMA_MEMSTAT_BASE, "node(\\d+)", "meminfo"))
files, err := filepath.Glob(globPattern)
if err == nil {
output <- y
}
}
if _, free := m.stats[`MemFree`]; free {
if _, buffers := m.stats[`Buffers`]; buffers {
if _, cached := m.stats[`Cached`]; cached {
memUsed := m.stats[`MemTotal`] - (m.stats[`MemFree`] + m.stats[`Buffers`] + m.stats[`Cached`])
_, skip := stringArrayContains(m.config.ExcludeMetrics, "mem_used")
y, err := lp.New("mem_used", m.tags, m.meta, map[string]interface{}{"value": int(float64(memUsed) * 1.0e-3)}, time.Now())
if err == nil && !skip {
output <- y
m.nodefiles = make(map[int]MemstatCollectorNode)
for _, f := range files {
if stats := getStats(f); len(stats) == 0 {
return fmt.Errorf("cannot read data from file %s", f)
}
rematch := regex.FindStringSubmatch(f)
if len(rematch) == 2 {
id, err := strconv.Atoi(rematch[1])
if err == nil {
f := MemstatCollectorNode{
file: f,
tags: map[string]string{
"type": "memoryDomain",
"type-id": fmt.Sprintf("%d", id),
},
}
m.nodefiles[id] = f
}
}
}
}
}
if _, found := m.stats[`MemShared`]; found {
_, skip := stringArrayContains(m.config.ExcludeMetrics, "mem_shared")
y, err := lp.New("mem_shared", m.tags, m.meta, map[string]interface{}{"value": int(float64(m.stats[`MemShared`]) * 1.0e-3)}, time.Now())
if err == nil && !skip {
output <- y
m.init = true
return err
}
func (m *MemstatCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
sendStats := func(stats map[string]MemstatStats, tags map[string]string) {
for match, name := range m.matches {
var value float64 = 0
var unit string = ""
if v, ok := stats[match]; ok {
value = v.value
if len(v.unit) > 0 {
unit = v.unit
}
}
y, err := lp.NewMessage(name, tags, m.meta, map[string]interface{}{"value": value}, time.Now())
if err == nil {
if len(unit) > 0 {
y.AddMeta("unit", unit)
}
output <- y
}
}
if m.sendMemUsed {
memUsed := 0.0
unit := ""
if totalVal, total := stats["MemTotal"]; total {
if freeVal, free := stats["MemFree"]; free {
memUsed = totalVal.value - freeVal.value
if len(totalVal.unit) > 0 {
unit = totalVal.unit
} else if len(freeVal.unit) > 0 {
unit = freeVal.unit
}
if bufVal, buffers := stats["Buffers"]; buffers {
memUsed -= bufVal.value
if len(bufVal.unit) > 0 && len(unit) == 0 {
unit = bufVal.unit
}
if cacheVal, cached := stats["Cached"]; cached {
memUsed -= cacheVal.value
if len(cacheVal.unit) > 0 && len(unit) == 0 {
unit = cacheVal.unit
}
}
}
}
}
y, err := lp.NewMessage("mem_used", tags, m.meta, map[string]interface{}{"value": memUsed}, time.Now())
if err == nil {
if len(unit) > 0 {
y.AddMeta("unit", unit)
}
output <- y
}
}
}
if m.config.NodeStats {
nodestats := getStats(MEMSTATFILE)
sendStats(nodestats, m.tags)
}
if m.config.NumaStats {
for _, nodeConf := range m.nodefiles {
stats := getStats(nodeConf.file)
sendStats(stats, nodeConf.tags)
}
}
}


@@ -3,39 +3,43 @@ package collectors
import (
"encoding/json"
"fmt"
"io/ioutil"
"log"
"strconv"
"strings"
"time"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
type MetricCollector interface {
Name() string
Init(config json.RawMessage) error
Initialized() bool
Read(duration time.Duration, output chan lp.CCMetric)
Close()
Name() string // Name of the metric collector
Init(config json.RawMessage) error // Initialize metric collector
Initialized() bool // Is metric collector initialized?
Parallel() bool
Read(duration time.Duration, output chan lp.CCMessage) // Read metrics from metric collector
Close() // Close / finish metric collector
}
type metricCollector struct {
name string
init bool
meta map[string]string
name string // name of the metric
init bool // is metric collector initialized?
parallel bool // can the metric collector be executed in parallel with others
meta map[string]string // static meta data tags
}
// Name() returns the name of the metric collector
// Name returns the name of the metric collector
func (c *metricCollector) Name() string {
return c.name
}
// Name returns the name of the metric collector
func (c *metricCollector) Parallel() bool {
return c.parallel
}
// Setup is for future use
func (c *metricCollector) setup() error {
return nil
}
// Initialized() indicates whether the metric collector has been initialized.
// Initialized indicates whether the metric collector has been initialized
func (c *metricCollector) Initialized() bool {
return c.init
}
@@ -64,63 +68,13 @@ func stringArrayContains(array []string, str string) (int, bool) {
return -1, false
}
func SocketList() []int {
buffer, err := ioutil.ReadFile("/proc/cpuinfo")
if err != nil {
log.Print(err)
return nil
}
ll := strings.Split(string(buffer), "\n")
var packs []int
for _, line := range ll {
if strings.HasPrefix(line, "physical id") {
lv := strings.Fields(line)
id, err := strconv.ParseInt(lv[3], 10, 32)
if err != nil {
log.Print(err)
return packs
}
_, found := intArrayContains(packs, int(id))
if !found {
packs = append(packs, int(id))
}
}
}
return packs
}
func CpuList() []int {
buffer, err := ioutil.ReadFile("/proc/cpuinfo")
if err != nil {
log.Print(err)
return nil
}
ll := strings.Split(string(buffer), "\n")
var cpulist []int
for _, line := range ll {
if strings.HasPrefix(line, "processor") {
lv := strings.Fields(line)
id, err := strconv.ParseInt(lv[2], 10, 32)
if err != nil {
log.Print(err)
return cpulist
}
_, found := intArrayContains(cpulist, int(id))
if !found {
cpulist = append(cpulist, int(id))
}
}
}
return cpulist
}
// RemoveFromStringList removes the string r from the array of strings s
// If r is not contained in the array an error is returned
func RemoveFromStringList(s []string, r string) ([]string, error) {
for i, item := range s {
if r == item {
for i := range s {
if r == s[i] {
return append(s[:i], s[i+1:]...), nil
}
}
return s, fmt.Errorf("No such string in list")
return s, fmt.Errorf("no such string in list")
}


@@ -9,42 +9,84 @@ import (
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
const NETSTATFILE = `/proc/net/dev`
const NETSTATFILE = "/proc/net/dev"
type NetstatCollectorConfig struct {
IncludeDevices []string `json:"include_devices"`
IncludeDevices []string `json:"include_devices"`
SendAbsoluteValues bool `json:"send_abs_values"`
SendDerivedValues bool `json:"send_derived_values"`
InterfaceAliases map[string][]string `json:"interface_aliases,omitempty"`
}
type NetstatCollectorMetric struct {
index int
lastValue float64
name string
index int
tags map[string]string
meta map[string]string
meta_rates map[string]string
lastValue int64
}
type NetstatCollector struct {
metricCollector
config NetstatCollectorConfig
matches map[string]map[string]NetstatCollectorMetric
devtags map[string]map[string]string
lastTimestamp time.Time
config NetstatCollectorConfig
aliasToCanonical map[string]string
matches map[string][]NetstatCollectorMetric
lastTimestamp time.Time
}
func (m *NetstatCollector) buildAliasMapping() {
m.aliasToCanonical = make(map[string]string)
for canon, aliases := range m.config.InterfaceAliases {
for _, alias := range aliases {
m.aliasToCanonical[alias] = canon
}
}
}
func getCanonicalName(raw string, aliasToCanonical map[string]string) string {
if canon, ok := aliasToCanonical[raw]; ok {
return canon
}
return raw
}
func (m *NetstatCollector) Init(config json.RawMessage) error {
m.name = "NetstatCollector"
m.parallel = true
m.setup()
m.lastTimestamp = time.Now()
m.meta = map[string]string{"source": m.name, "group": "Network"}
m.devtags = make(map[string]map[string]string)
nameIndexMap := map[string]int{
"net_bytes_in": 1,
"net_pkts_in": 2,
"net_bytes_out": 9,
"net_pkts_out": 10,
}
m.matches = make(map[string]map[string]NetstatCollectorMetric)
const (
fieldInterface = iota
fieldReceiveBytes
fieldReceivePackets
fieldReceiveErrs
fieldReceiveDrop
fieldReceiveFifo
fieldReceiveFrame
fieldReceiveCompressed
fieldReceiveMulticast
fieldTransmitBytes
fieldTransmitPackets
fieldTransmitErrs
fieldTransmitDrop
fieldTransmitFifo
fieldTransmitColls
fieldTransmitCarrier
fieldTransmitCompressed
)
m.matches = make(map[string][]NetstatCollectorMetric)
// Set default configuration,
m.config.SendAbsoluteValues = true
m.config.SendDerivedValues = false
// Read configuration file, allow overwriting default config
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
if err != nil {
@@ -52,7 +94,11 @@ func (m *NetstatCollector) Init(config json.RawMessage) error {
return err
}
}
file, err := os.Open(string(NETSTATFILE))
m.buildAliasMapping()
// Check access to net statistic file
file, err := os.Open(NETSTATFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
return err
@@ -62,77 +108,133 @@ func (m *NetstatCollector) Init(config json.RawMessage) error {
scanner := bufio.NewScanner(file)
for scanner.Scan() {
l := scanner.Text()
// Skip lines with no net device entry
if !strings.Contains(l, ":") {
continue
}
// Split line into fields
f := strings.Fields(l)
dev := strings.Trim(f[0], ": ")
if _, ok := stringArrayContains(m.config.IncludeDevices, dev); ok {
m.matches[dev] = make(map[string]NetstatCollectorMetric)
for name, idx := range nameIndexMap {
m.matches[dev][name] = NetstatCollectorMetric{
index: idx,
lastValue: 0,
}
// Get raw and canonical names
raw := strings.Trim(f[0], ": ")
canonical := getCanonicalName(raw, m.aliasToCanonical)
// Check if device is a included device
if _, ok := stringArrayContains(m.config.IncludeDevices, canonical); ok {
// Tag will contain original device name (raw).
tags := map[string]string{"stype": "network", "stype-id": raw, "type": "node"}
meta_unit_byte := map[string]string{"source": m.name, "group": "Network", "unit": "bytes"}
meta_unit_byte_per_sec := map[string]string{"source": m.name, "group": "Network", "unit": "bytes/sec"}
meta_unit_pkts := map[string]string{"source": m.name, "group": "Network", "unit": "packets"}
meta_unit_pkts_per_sec := map[string]string{"source": m.name, "group": "Network", "unit": "packets/sec"}
m.matches[canonical] = []NetstatCollectorMetric{
{
name: "net_bytes_in",
index: fieldReceiveBytes,
lastValue: -1,
tags: tags,
meta: meta_unit_byte,
meta_rates: meta_unit_byte_per_sec,
},
{
name: "net_pkts_in",
index: fieldReceivePackets,
lastValue: -1,
tags: tags,
meta: meta_unit_pkts,
meta_rates: meta_unit_pkts_per_sec,
},
{
name: "net_bytes_out",
index: fieldTransmitBytes,
lastValue: -1,
tags: tags,
meta: meta_unit_byte,
meta_rates: meta_unit_byte_per_sec,
},
{
name: "net_pkts_out",
index: fieldTransmitPackets,
lastValue: -1,
tags: tags,
meta: meta_unit_pkts,
meta_rates: meta_unit_pkts_per_sec,
},
}
m.devtags[dev] = map[string]string{"device": dev, "type": "node"}
}
}
if len(m.devtags) == 0 {
if len(m.matches) == 0 {
return errors.New("no devices to collector metrics found")
}
m.init = true
return nil
}
func (m *NetstatCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *NetstatCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
// Current time stamp
now := time.Now()
file, err := os.Open(string(NETSTATFILE))
// time difference to last time stamp
timeDiff := now.Sub(m.lastTimestamp).Seconds()
// Save current timestamp
m.lastTimestamp = now
file, err := os.Open(NETSTATFILE)
if err != nil {
cclog.ComponentError(m.name, err.Error())
return
}
defer file.Close()
tdiff := now.Sub(m.lastTimestamp)
scanner := bufio.NewScanner(file)
for scanner.Scan() {
l := scanner.Text()
// Skip lines with no net device entry
if !strings.Contains(l, ":") {
continue
}
f := strings.Fields(l)
dev := strings.Trim(f[0], ":")
if devmetrics, ok := m.matches[dev]; ok {
for name, data := range devmetrics {
v, err := strconv.ParseFloat(f[data.index], 64)
if err == nil {
vdiff := v - data.lastValue
value := vdiff / tdiff.Seconds()
if data.lastValue == 0 {
value = 0
}
data.lastValue = v
y, err := lp.New(name, m.devtags[dev], m.meta, map[string]interface{}{"value": value}, now)
if err == nil {
switch {
case strings.Contains(name, "byte"):
y.AddMeta("unit", "bytes/sec")
case strings.Contains(name, "pkt"):
y.AddMeta("unit", "packets/sec")
}
// Split line into fields
f := strings.Fields(l)
// Get raw and canonical names
raw := strings.Trim(f[0], ":")
canonical := getCanonicalName(raw, m.aliasToCanonical)
// Check if device is a included device
if devmetrics, ok := m.matches[canonical]; ok {
for i := range devmetrics {
metric := &devmetrics[i]
// Read value
v, err := strconv.ParseInt(f[metric.index], 10, 64)
if err != nil {
continue
}
if m.config.SendAbsoluteValues {
if y, err := lp.NewMessage(metric.name, metric.tags, metric.meta, map[string]interface{}{"value": v}, now); err == nil {
output <- y
}
devmetrics[name] = data
}
if m.config.SendDerivedValues {
if metric.lastValue >= 0 {
rate := float64(v-metric.lastValue) / timeDiff
if y, err := lp.NewMessage(metric.name+"_bw", metric.tags, metric.meta_rates, map[string]interface{}{"value": rate}, now); err == nil {
output <- y
}
}
metric.lastValue = v
}
}
}
}
m.lastTimestamp = time.Now()
}
func (m *NetstatCollector) Close() {


@@ -4,18 +4,28 @@
```json
"netstat": {
"include_devices": [
"eth0"
]
"eth0",
"eno1"
],
"send_abs_values": true,
"send_derived_values": true,
"interface_aliases": {
"eno1": ["eno1np0", "eno1_alt"],
"eth0": ["eth0_alias"]
}
}
```
The `netstat` collector reads data from `/proc/net/dev` and outputs a handful of **node** metrics. With the `include_devices` list you can specify which network devices should be measured. **Note**: Most other collectors use an _exclude_ list instead of an include list.
The `netstat` collector reads data from `/proc/net/dev` and outputs a handful of **node** metrics. With the `include_devices` list you can specify which network devices should be measured. **Note**: Most other collectors use an _exclude_ list instead of an include list. Optionally, you can define an `interface_aliases` mapping: for each canonical device (as listed in `include_devices`), you may provide an array of aliases that the system may report. When an alias is detected, it is preferred for matching, while the output tag `stype-id` always shows the actual system-reported name.
Metrics:
* `net_bytes_in` (`unit=bytes/sec`)
* `net_bytes_out` (`unit=bytes/sec`)
* `net_pkts_in` (`unit=packets/sec`)
* `net_pkts_out` (`unit=packets/sec`)
The device name is added as tag `device`.
* `net_bytes_in` (`unit=bytes`)
* `net_bytes_out` (`unit=bytes`)
* `net_pkts_in` (`unit=packets`)
* `net_pkts_out` (`unit=packets`)
* `net_bytes_in_bw` (`unit=bytes/sec` if `send_derived_values == true`)
* `net_bytes_out_bw` (`unit=bytes/sec` if `send_derived_values == true`)
* `net_pkts_in_bw` (`unit=packets/sec` if `send_derived_values == true`)
* `net_pkts_out_bw` (`unit=packets/sec` if `send_derived_values == true`)
The device name is added as tag `stype=network,stype-id=<device>`.


@@ -11,7 +11,7 @@ import (
"strings"
"time"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
// First part contains the code for the general NfsCollector.
@@ -36,7 +36,7 @@ type nfsCollector struct {
}
func (m *nfsCollector) initStats() error {
cmd := exec.Command(m.config.Nfsstats, `-l`)
cmd := exec.Command(m.config.Nfsstats, `-l`, `--all`)
cmd.Wait()
buffer, err := cmd.Output()
if err == nil {
@@ -52,7 +52,7 @@ func (m *nfsCollector) initStats() error {
if err == nil {
x := m.data[name]
x.current = value
x.last = 0
x.last = value
m.data[name] = x
}
}
@@ -63,7 +63,7 @@ func (m *nfsCollector) initStats() error {
}
func (m *nfsCollector) updateStats() error {
cmd := exec.Command(m.config.Nfsstats, `-l`)
cmd := exec.Command(m.config.Nfsstats, `-l`, `--all`)
cmd.Wait()
buffer, err := cmd.Output()
if err == nil {
@@ -114,10 +114,11 @@ func (m *nfsCollector) MainInit(config json.RawMessage) error {
m.data = make(map[string]NfsCollectorData)
m.initStats()
m.init = true
m.parallel = true
return nil
}
func (m *nfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *nfsCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
@@ -139,7 +140,7 @@ func (m *nfsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
continue
}
value := data.current - data.last
y, err := lp.New(fmt.Sprintf("%s_%s", prefix, name), m.tags, m.meta, map[string]interface{}{"value": value}, timestamp)
y, err := lp.NewMessage(fmt.Sprintf("%s_%s", prefix, name), m.tags, m.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
y.AddMeta("version", m.version)
output <- y


@@ -0,0 +1,182 @@
package collectors
import (
"encoding/json"
"fmt"
"os"
"regexp"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
// These are the fields we read from the JSON configuration
type NfsIOStatCollectorConfig struct {
ExcludeMetrics []string `json:"exclude_metrics,omitempty"`
ExcludeFilesystem []string `json:"exclude_filesystem,omitempty"`
UseServerAddressAsSType bool `json:"use_server_as_stype,omitempty"`
SendAbsoluteValues bool `json:"send_abs_values"`
SendDerivedValues bool `json:"send_derived_values"`
}
// This contains all variables we need during execution and the variables
// defined by metricCollector (name, init, ...)
type NfsIOStatCollector struct {
metricCollector
config NfsIOStatCollectorConfig // the configuration structure
meta map[string]string // default meta information
tags map[string]string // default tags
data map[string]map[string]int64 // data storage for difference calculation
key string // which device info should be used as subtype ID? 'server' or 'mntpoint'
lastTimestamp time.Time
}
var deviceRegex = regexp.MustCompile(`device (?P<server>[^ ]+) mounted on (?P<mntpoint>[^ ]+) with fstype nfs(?P<version>\d*) statvers=[\d\.]+`)
var bytesRegex = regexp.MustCompile(`\s+bytes:\s+(?P<nread>[^ ]+) (?P<nwrite>[^ ]+) (?P<dread>[^ ]+) (?P<dwrite>[^ ]+) (?P<nfsread>[^ ]+) (?P<nfswrite>[^ ]+) (?P<pageread>[^ ]+) (?P<pagewrite>[^ ]+)`)
func resolve_regex_fields(s string, regex *regexp.Regexp) map[string]string {
fields := make(map[string]string)
groups := regex.SubexpNames()
for _, match := range regex.FindAllStringSubmatch(s, -1) {
for groupIdx, group := range match {
if len(groups[groupIdx]) > 0 {
fields[groups[groupIdx]] = group
}
}
}
return fields
}
func (m *NfsIOStatCollector) readNfsiostats() map[string]map[string]int64 {
data := make(map[string]map[string]int64)
filename := "/proc/self/mountstats"
stats, err := os.ReadFile(filename)
if err != nil {
return data
}
lines := strings.Split(string(stats), "\n")
var current map[string]string = nil
for _, l := range lines {
// Is this a device line with mount point, remote target and NFS version?
dev := resolve_regex_fields(l, deviceRegex)
if len(dev) > 0 {
if _, ok := stringArrayContains(m.config.ExcludeFilesystem, dev[m.key]); !ok {
current = dev
if len(current["version"]) == 0 {
current["version"] = "3"
}
}
}
if len(current) > 0 {
// Byte line parsing (if found the device for it)
bytes := resolve_regex_fields(l, bytesRegex)
if len(bytes) > 0 {
data[current[m.key]] = make(map[string]int64)
for name, sval := range bytes {
if _, ok := stringArrayContains(m.config.ExcludeMetrics, name); !ok {
val, err := strconv.ParseInt(sval, 10, 64)
if err == nil {
data[current[m.key]][name] = val
}
}
}
current = nil
}
}
}
return data
}
func (m *NfsIOStatCollector) Init(config json.RawMessage) error {
var err error = nil
m.name = "NfsIOStatCollector"
m.setup()
m.parallel = true
m.meta = map[string]string{"source": m.name, "group": "NFS", "unit": "bytes"}
m.tags = map[string]string{"type": "node"}
m.config.UseServerAddressAsSType = false
// Set default configuration
m.config.SendAbsoluteValues = true
m.config.SendDerivedValues = false
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
cclog.ComponentError(m.name, "Error reading config:", err.Error())
return err
}
}
m.key = "mntpoint"
if m.config.UseServerAddressAsSType {
m.key = "server"
}
m.data = m.readNfsiostats()
m.lastTimestamp = time.Now()
m.init = true
return err
}
func (m *NfsIOStatCollector) Read(interval time.Duration, output chan lp.CCMessage) {
now := time.Now()
timeDiff := now.Sub(m.lastTimestamp).Seconds()
m.lastTimestamp = now
// Get the current values for all mountpoints
newdata := m.readNfsiostats()
for mntpoint, values := range newdata {
// Was the mount point already present in the last iteration
if old, ok := m.data[mntpoint]; ok {
for name, newVal := range values {
if m.config.SendAbsoluteValues {
msg, err := lp.NewMessage(fmt.Sprintf("nfsio_%s", name), m.tags, m.meta, map[string]interface{}{"value": newVal}, now)
if err == nil {
msg.AddTag("stype", "filesystem")
msg.AddTag("stype-id", mntpoint)
output <- msg
}
}
if m.config.SendDerivedValues {
rate := float64(newVal-old[name]) / timeDiff
msg, err := lp.NewMessage(fmt.Sprintf("nfsio_%s_bw", name), m.tags, m.meta, map[string]interface{}{"value": rate}, now)
if err == nil {
if strings.HasPrefix(name, "page") {
msg.AddMeta("unit", "4K_pages/s")
} else {
msg.AddMeta("unit", "bytes/sec")
}
msg.AddTag("stype", "filesystem")
msg.AddTag("stype-id", mntpoint)
output <- msg
}
}
old[name] = newVal
}
} else {
// First time we see this mount point, store all values
m.data[mntpoint] = values
}
}
// Reset entries that do not exist anymore
for mntpoint := range m.data {
found := false
for new := range newdata {
if new == mntpoint {
found = true
break
}
}
if !found {
delete(m.data, mntpoint)
}
}
}
func (m *NfsIOStatCollector) Close() {
// Unset flag
m.init = false
}


@@ -0,0 +1,34 @@
## `nfsiostat` collector
```json
"nfsiostat": {
"exclude_metrics": [
"oread", "pageread"
],
"exclude_filesystems": [
"/mnt"
],
"use_server_as_stype": false,
"send_abs_values": false,
"send_derived_values": true
}
```
The `nfsiostat` collector reads data from `/proc/self/mountstats` and outputs a handful of **node** metrics for each NFS filesystem. If a metric or filesystem is not required, it can be excluded from being forwarded to the sink. **Note:** When excluding metrics, you must provide the base metric name (e.g. `pageread`) without the `nfsio_` prefix. This exclusion applies to both absolute and derived values.
Metrics:
* `nfsio_nread`: Bytes transferred by normal `read()` calls
* `nfsio_nwrite`: Bytes transferred by normal `write()` calls
* `nfsio_oread`: Bytes transferred by `read()` calls with `O_DIRECT`
* `nfsio_owrite`: Bytes transferred by `write()` calls with `O_DIRECT`
* `nfsio_pageread`: Pages transferred by `read()` calls
* `nfsio_pagewrite`: Pages transferred by `write()` calls
* `nfsio_nfsread`: Bytes transferred for reading from the server
* `nfsio_nfswrite`: Bytes transferred for writing to the server
For each of these, if derived values are enabled, an additional metric is sent with the `_bw` suffix, which represents the rate:
* For normal byte metrics: `unit=bytes/sec`
* For page metrics: `unit=4K_pages/s`
The `nfsiostat` collector adds the mountpoint to the tags as `stype=filesystem,stype-id=<mountpoint>`. If the server address should be used instead of the mountpoint, use the `use_server_as_stype` config setting.


@@ -10,41 +10,58 @@ import (
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
//
// Numa policy hit/miss statistics
type NUMAStatsCollectorConfig struct {
SendAbsoluteValues bool `json:"send_abs_values"`
SendDerivedValues bool `json:"send_derived_values"`
}
// Non-Uniform Memory Access (NUMA) policy hit/miss statistics
//
// numa_hit:
// A process wanted to allocate memory from this node, and succeeded.
//
// A process wanted to allocate memory from this node, and succeeded.
//
// numa_miss:
// A process wanted to allocate memory from another node,
// but ended up with memory from this node.
//
// A process wanted to allocate memory from another node,
// but ended up with memory from this node.
//
// numa_foreign:
// A process wanted to allocate on this node,
// but ended up with memory from another node.
//
// A process wanted to allocate on this node,
// but ended up with memory from another node.
//
// local_node:
// A process ran on this node's CPU,
// and got memory from this node.
//
// A process ran on this node's CPU,
// and got memory from this node.
//
// other_node:
// A process ran on a different node's CPU
// and got memory from this node.
//
// A process ran on a different node's CPU
// and got memory from this node.
//
// interleave_hit:
// Interleaving wanted to allocate from this node
// and succeeded.
//
// Interleaving wanted to allocate from this node
// and succeeded.
//
// See: https://www.kernel.org/doc/html/latest/admin-guide/numastat.html
//
type NUMAStatsCollectorTopolgy struct {
file string
tagSet map[string]string
file string
tagSet map[string]string
previousValues map[string]int64
}
type NUMAStatsCollector struct {
metricCollector
topology []NUMAStatsCollectorTopolgy
topology []NUMAStatsCollectorTopolgy
config NUMAStatsCollectorConfig
lastTimestamp time.Time
}
func (m *NUMAStatsCollector) Init(config json.RawMessage) error {
@@ -54,6 +71,7 @@ func (m *NUMAStatsCollector) Init(config json.RawMessage) error {
}
m.name = "NUMAStatsCollector"
m.parallel = true
m.setup()
m.meta = map[string]string{
"source": m.name,
@@ -76,37 +94,44 @@ func (m *NUMAStatsCollector) Init(config json.RawMessage) error {
file := filepath.Join(dir, "numastat")
m.topology = append(m.topology,
NUMAStatsCollectorTopolgy{
file: file,
tagSet: map[string]string{"memoryDomain": node},
file: file,
tagSet: map[string]string{"memoryDomain": node},
previousValues: make(map[string]int64),
})
}
// Initialized
cclog.ComponentDebug(m.name, "initialized", len(m.topology), "NUMA domains")
m.init = true
return nil
}
func (m *NUMAStatsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *NUMAStatsCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
now := time.Now()
timeDiff := now.Sub(m.lastTimestamp).Seconds()
m.lastTimestamp = now
for i := range m.topology {
// Loop for all NUMA domains
t := &m.topology[i]
now := time.Now()
file, err := os.Open(t.file)
if err != nil {
cclog.ComponentError(
m.name,
fmt.Sprintf("Read(): Failed to open file '%s': %v", t.file, err))
return
continue
}
scanner := bufio.NewScanner(file)
// Read line by line
for scanner.Scan() {
split := strings.Fields(scanner.Text())
line := scanner.Text()
split := strings.Fields(line)
if len(split) != 2 {
continue
}
@@ -118,18 +143,38 @@ func (m *NUMAStatsCollector) Read(interval time.Duration, output chan lp.CCMetri
fmt.Sprintf("Read(): Failed to convert %s='%s' to int64: %v", key, split[1], err))
continue
}
y, err := lp.New(
"numastats_"+key,
t.tagSet,
m.meta,
map[string]interface{}{"value": value},
now,
)
if err == nil {
output <- y
if m.config.SendAbsoluteValues {
msg, err := lp.NewMessage(
"numastats_"+key,
t.tagSet,
m.meta,
map[string]interface{}{"value": value},
now,
)
if err == nil {
output <- msg
}
}
if m.config.SendDerivedValues {
prev, ok := t.previousValues[key]
if ok {
rate := float64(value-prev) / timeDiff
msg, err := lp.NewMessage(
"numastats_"+key+"_rate",
t.tagSet,
m.meta,
map[string]interface{}{"value": rate},
now,
)
if err == nil {
output <- msg
}
}
t.previousValues[key] = value
}
}
file.Close()
}
}


@@ -1,15 +1,26 @@
## `numastat` collector
```json
"numastat": {}
"numastats": {
"send_abs_values" : true,
"send_derived_values" : true
}
```
The `numastat` collector reads data from `/sys/devices/system/node/node*/numastat` and outputs a handful of **memoryDomain** metrics. See: https://www.kernel.org/doc/html/latest/admin-guide/numastat.html
The `numastat` collector reads data from `/sys/devices/system/node/node*/numastat` and outputs a handful of **memoryDomain** metrics. See: <https://www.kernel.org/doc/html/latest/admin-guide/numastat.html>
Metrics:
* `numastats_numa_hit`: A process wanted to allocate memory from this node, and succeeded.
* `numastats_numa_miss`: A process wanted to allocate memory from another node, but ended up with memory from this node.
* `numastats_numa_foreign`: A process wanted to allocate on this node, but ended up with memory from another node.
* `numastats_local_node`: A process ran on this node's CPU, and got memory from this node.
* `numastats_other_node`: A process ran on a different node's CPU, and got memory from this node.
* `numastats_interleave_hit`: Interleaving wanted to allocate from this node and succeeded.
* `numastats_numa_hit_rate` (if `send_derived_values == true`): Derived rate value per second.
* `numastats_numa_miss_rate` (if `send_derived_values == true`): Derived rate value per second.
* `numastats_numa_foreign_rate` (if `send_derived_values == true`): Derived rate value per second.
* `numastats_local_node_rate` (if `send_derived_values == true`): Derived rate value per second.
* `numastats_other_node_rate` (if `send_derived_values == true`): Derived rate value per second.
* `numastats_interleave_hit_rate` (if `send_derived_values == true`): Derived rate value per second.

File diff suppressed because it is too large


@@ -3,38 +3,74 @@
```json
"nvidia": {
"exclude_devices" : [
"0","1"
"exclude_devices": [
"0","1", "0000000:ff:01.0"
],
"exclude_metrics": [
"nv_fb_memory",
"nv_fb_mem_used",
"nv_fan"
]
],
"process_mig_devices": false,
"use_pci_info_as_type_id": true,
"add_pci_info_tag": false,
"add_uuid_meta": false,
"add_board_number_meta": false,
"add_serial_meta": false,
"use_uuid_for_mig_device": false,
"use_slice_for_mig_device": false
}
```
The `nvidia` collector can be configured to leave out specific devices with the `exclude_devices` option. It takes IDs as supplied to the NVML with `nvmlDeviceGetHandleByIndex()` or the PCI address in NVML format (`%08X:%02X:%02X.0`). Metrics (listed below) that should not be sent to the MetricRouter can be excluded with the `exclude_metrics` option. Commonly only the physical GPUs are monitored. If MIG devices should be analyzed as well, set `process_mig_devices` (adds `stype=mig,stype-id=<mig_index>`). With the options `use_uuid_for_mig_device` and `use_slice_for_mig_device`, the `<mig_index>` can be replaced with the UUID (e.g. `MIG-6a9f7cc8-6d5b-5ce0-92de-750edc4d8849`) or the MIG slice name (e.g. `1g.5gb`).
The metrics sent by the `nvidia` collector use `accelerator` as `type` tag. For the `type-id`, it uses the device handle index by default. With the `use_pci_info_as_type_id` option, the PCI ID is used instead. If both values should be added as tags, activate the `add_pci_info_tag` option. It uses the device handle index as `type-id` and adds the PCI ID as separate `pci_identifier` tag.
Optionally, it is possible to add the UUID, the board part number and the serial number to the meta information. Meta information is not sent to the sinks (unless configured otherwise).
Metrics:
* `nv_util`
* `nv_mem_util`
* `nv_mem_total`
* `nv_fb_memory`
* `nv_fb_mem_total`
* `nv_fb_mem_used`
* `nv_bar1_mem_total`
* `nv_bar1_mem_used`
* `nv_temp`
* `nv_fan`
* `nv_ecc_mode`
* `nv_perf_state`
* `nv_power_usage_report`
* `nv_graphics_clock_report`
* `nv_sm_clock_report`
* `nv_mem_clock_report`
* `nv_power_usage`
* `nv_graphics_clock`
* `nv_sm_clock`
* `nv_mem_clock`
* `nv_video_clock`
* `nv_max_graphics_clock`
* `nv_max_sm_clock`
* `nv_max_mem_clock`
* `nv_ecc_db_error`
* `nv_ecc_sb_error`
* `nv_power_man_limit`
* `nv_max_video_clock`
* `nv_ecc_uncorrected_error`
* `nv_ecc_corrected_error`
* `nv_power_max_limit`
* `nv_encoder_util`
* `nv_decoder_util`
* `nv_remapped_rows_corrected`
* `nv_remapped_rows_uncorrected`
* `nv_remapped_rows_pending`
* `nv_remapped_rows_failure`
* `nv_compute_processes`
* `nv_graphics_processes`
* `nv_violation_power`
* `nv_violation_thermal`
* `nv_violation_sync_boost`
* `nv_violation_board_limit`
* `nv_violation_low_util`
* `nv_violation_reliability`
* `nv_violation_below_app_clock`
* `nv_violation_below_base_clock`
* `nv_nvlink_crc_flit_errors`
* `nv_nvlink_crc_errors`
* `nv_nvlink_ecc_errors`
* `nv_nvlink_replay_errors`
* `nv_nvlink_recovery_errors`
It uses a separate `type` in the metrics. The output metric looks like this:
`<name>,type=accelerator,type-id=<nvidia-gpu-id> value=<metric value> <timestamp>`
Some metrics add an additional sub type tag (`stype`); for example, the `nv_nvlink_*` metrics set `stype=nvlink,stype-id=<link_number>`.
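As an illustration only, the following sketch builds such a message with the `lp.NewMessage`/`AddTag` pattern used by the collectors in this repository; the tag, meta and counter values are hypothetical and not taken from a real device:
```go
package main

import (
	"fmt"
	"time"

	lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)

func main() {
	// Hypothetical NVLink CRC error counter for GPU 0, link 2.
	tags := map[string]string{"type": "accelerator", "type-id": "0"}
	meta := map[string]string{"source": "NvidiaCollector", "group": "Nvidia"}
	y, err := lp.NewMessage(
		"nv_nvlink_crc_errors",
		tags,
		meta,
		map[string]interface{}{"value": 0},
		time.Now(),
	)
	if err == nil {
		// Add the sub type tags described above.
		y.AddTag("stype", "nvlink")
		y.AddTag("stype-id", "2")
		// In a collector, the message would now be sent to the output channel.
		fmt.Println(y)
	}
}
```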

262
collectors/raplMetric.go Normal file
View File

@@ -0,0 +1,262 @@
package collectors
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
// running average power limit (RAPL) monitoring attributes for a zone
type RAPLZoneInfo struct {
// tags describing the RAPL zone:
// * zone_name, subzone_name: e.g. psys, dram, core, uncore, package-0
// * zone_id: e.g. 0:1 (zone 0 sub zone 1)
tags map[string]string
energyFilepath string // path to a file containing the zones current energy counter in micro joules
energy int64 // current reading of the energy counter in micro joules
energyTimestamp time.Time // timestamp when energy counter was read
maxEnergyRange int64 // Range of the above energy counter in micro-joules
}
type RAPLCollector struct {
metricCollector
config struct {
// Exclude IDs for RAPL zones, e.g.
// * 0 for zone 0
// * 0:1 for zone 0 subzone 1
ExcludeByID []string `json:"exclude_device_by_id,omitempty"`
// Exclude names for RAPL zones, e.g. psys, dram, core, uncore, package-0
ExcludeByName []string `json:"exclude_device_by_name,omitempty"`
}
RAPLZoneInfo []RAPLZoneInfo
meta map[string]string // default meta information
}
// Init initializes the running average power limit (RAPL) collector
func (m *RAPLCollector) Init(config json.RawMessage) error {
// Check if already initialized
if m.init {
return nil
}
var err error = nil
m.name = "RAPLCollector"
m.setup()
m.parallel = true
m.meta = map[string]string{
"source": m.name,
"group": "energy",
"unit": "Watt",
}
// Read in the JSON configuration
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
cclog.ComponentError(m.name, "Error reading config:", err.Error())
return err
}
}
// Configure excluded RAPL zones
isIDExcluded := make(map[string]bool)
if m.config.ExcludeByID != nil {
for _, ID := range m.config.ExcludeByID {
isIDExcluded[ID] = true
}
}
isNameExcluded := make(map[string]bool)
if m.config.ExcludeByName != nil {
for _, name := range m.config.ExcludeByName {
isNameExcluded[name] = true
}
}
// readZoneInfo reads RAPL monitoring attributes for a zone given by zonePath
// See: https://www.kernel.org/doc/html/latest/power/powercap/powercap.html#monitoring-attributes
readZoneInfo := func(zonePath string) (z struct {
name string // zones name e.g. psys, dram, core, uncore, package-0
energyFilepath string // path to a file containing the zones current energy counter in micro joules
energy int64 // current reading of the energy counter in micro joules
energyTimestamp time.Time // timestamp when energy counter was read
maxEnergyRange int64 // Range of the above energy counter in micro-joules
ok bool // Are all information available?
}) {
// zones name e.g. psys, dram, core, uncore, package-0
foundName := false
if v, err :=
os.ReadFile(
filepath.Join(zonePath, "name")); err == nil {
foundName = true
z.name = strings.TrimSpace(string(v))
}
// path to a file containing the zones current energy counter in micro joules
z.energyFilepath = filepath.Join(zonePath, "energy_uj")
// current reading of the energy counter in micro joules
foundEnergy := false
if v, err := os.ReadFile(z.energyFilepath); err == nil {
// timestamp when energy counter was read
z.energyTimestamp = time.Now()
if i, err := strconv.ParseInt(strings.TrimSpace(string(v)), 10, 64); err == nil {
foundEnergy = true
z.energy = i
}
}
// Range of the above energy counter in micro-joules
foundMaxEnergyRange := false
if v, err :=
os.ReadFile(
filepath.Join(zonePath, "max_energy_range_uj")); err == nil {
if i, err := strconv.ParseInt(strings.TrimSpace(string(v)), 10, 64); err == nil {
foundMaxEnergyRange = true
z.maxEnergyRange = i
}
}
// Are all information available?
z.ok = foundName && foundEnergy && foundMaxEnergyRange
return
}
powerCapPrefix := "/sys/devices/virtual/powercap"
controlType := "intel-rapl"
controlTypePath := filepath.Join(powerCapPrefix, controlType)
// Find all RAPL zones
zonePrefix := filepath.Join(controlTypePath, controlType+":")
zonesPath, err := filepath.Glob(zonePrefix + "*")
if err != nil || zonesPath == nil {
return fmt.Errorf("unable to find any zones under %s", controlTypePath)
}
for _, zonePath := range zonesPath {
zoneID := strings.TrimPrefix(zonePath, zonePrefix)
z := readZoneInfo(zonePath)
if z.ok &&
!isIDExcluded[zoneID] &&
!isNameExcluded[z.name] {
// Add RAPL monitoring attributes for a zone
m.RAPLZoneInfo =
append(
m.RAPLZoneInfo,
RAPLZoneInfo{
tags: map[string]string{
"id": zoneID,
"zone_name": z.name,
},
energyFilepath: z.energyFilepath,
energy: z.energy,
energyTimestamp: z.energyTimestamp,
maxEnergyRange: z.maxEnergyRange,
})
}
// find all sub zones for the given zone
subZonePrefix := filepath.Join(zonePath, controlType+":"+zoneID+":")
subZonesPath, err := filepath.Glob(subZonePrefix + "*")
if err != nil || subZonesPath == nil {
continue
}
for _, subZonePath := range subZonesPath {
subZoneID := strings.TrimPrefix(subZonePath, subZonePrefix)
sz := readZoneInfo(subZonePath)
if len(zoneID) > 0 && len(z.name) > 0 &&
sz.ok &&
!isIDExcluded[zoneID+":"+subZoneID] &&
!isNameExcluded[sz.name] {
m.RAPLZoneInfo =
append(
m.RAPLZoneInfo,
RAPLZoneInfo{
tags: map[string]string{
"id": zoneID + ":" + subZoneID,
"zone_name": z.name,
"sub_zone_name": sz.name,
},
energyFilepath: sz.energyFilepath,
energy: sz.energy,
energyTimestamp: sz.energyTimestamp,
maxEnergyRange: sz.maxEnergyRange,
})
}
}
}
if m.RAPLZoneInfo == nil {
return fmt.Errorf("no running average power limit (RAPL) device found in %s", controlTypePath)
}
// Initialized
cclog.ComponentDebug(
m.name,
"initialized",
len(m.RAPLZoneInfo),
"zones with running average power limit (RAPL) monitoring attributes")
m.init = true
return err
}
// Read reads running average power limit (RAPL) monitoring attributes for all initialized zones
// See: https://www.kernel.org/doc/html/latest/power/powercap/powercap.html#monitoring-attributes
func (m *RAPLCollector) Read(interval time.Duration, output chan lp.CCMessage) {
for i := range m.RAPLZoneInfo {
p := &m.RAPLZoneInfo[i]
// Read current value of the energy counter in micro joules
if v, err := os.ReadFile(p.energyFilepath); err == nil {
energyTimestamp := time.Now()
if i, err := strconv.ParseInt(strings.TrimSpace(string(v)), 10, 64); err == nil {
energy := i
// Compute average power (Δ energy / Δ time)
energyDiff := energy - p.energy
if energyDiff < 0 {
// Handle overflow:
// ( p.maxEnergyRange - p.energy ) + energy
// = p.maxEnergyRange + ( energy - p.energy )
// = p.maxEnergyRange + diffEnergy
energyDiff += p.maxEnergyRange
}
timeDiff := energyTimestamp.Sub(p.energyTimestamp)
averagePower := float64(energyDiff) / float64(timeDiff.Microseconds())
y, err := lp.NewMessage(
"rapl_average_power",
p.tags,
m.meta,
map[string]interface{}{"value": averagePower},
energyTimestamp)
if err == nil {
output <- y
}
// Save current energy counter state
p.energy = energy
p.energyTimestamp = energyTimestamp
}
}
}
}
// Close closes running average power limit (RAPL) metric collector
func (m *RAPLCollector) Close() {
// Unset flag
m.init = false
}

15
collectors/raplMetric.md Normal file
View File

@@ -0,0 +1,15 @@
## `rapl` collector
This collector reads running average power limit (RAPL) monitoring attributes to compute average power consumption metrics. See <https://www.kernel.org/doc/html/latest/power/powercap/powercap.html#monitoring-attributes>.
The Likwid metric collector provides similar functionality.
```json
"rapl": {
"exclude_device_by_id": ["0:1", "0:2"],
"exclude_device_by_name": ["psys"]
}
```
Metrics:
* `rapl_average_power`: average power consumption in Watt. The average is computed over the time span between the previous and the current measurement.
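A minimal, standalone sketch of that computation (made-up numbers, not the collector's actual code): the difference of two energy counter readings in micro-joules, corrected for a possible counter wrap-around with `max_energy_range_uj`, divided by the elapsed time in micro-seconds gives Watt:
```go
package main

import "fmt"

// averagePower returns the average power in Watt between two readings of a
// RAPL energy counter given in micro-joules. maxEnergyRange corresponds to
// max_energy_range_uj and corrects a counter wrap-around; elapsedMicroseconds
// is the time between the two readings.
func averagePower(previousEnergy, currentEnergy, maxEnergyRange, elapsedMicroseconds int64) float64 {
	energyDiff := currentEnergy - previousEnergy
	if energyDiff < 0 {
		// The counter wrapped around since the last reading.
		energyDiff += maxEnergyRange
	}
	// micro-joules / micro-seconds = Joule/second = Watt
	return float64(energyDiff) / float64(elapsedMicroseconds)
}

func main() {
	// Hypothetical example: 50 J consumed within 10 s gives 5 W.
	fmt.Println(averagePower(1_000_000, 51_000_000, 262_143_328_850, 10_000_000))
}
```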

319
collectors/rocmsmiMetric.go Normal file
View File

@@ -0,0 +1,319 @@
package collectors
import (
"encoding/json"
"errors"
"fmt"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
"github.com/ClusterCockpit/go-rocm-smi/pkg/rocm_smi"
)
type RocmSmiCollectorConfig struct {
ExcludeMetrics []string `json:"exclude_metrics,omitempty"`
ExcludeDevices []string `json:"exclude_devices,omitempty"`
AddPciInfoTag bool `json:"add_pci_info_tag,omitempty"`
UsePciInfoAsTypeId bool `json:"use_pci_info_as_type_id,omitempty"`
AddSerialMeta bool `json:"add_serial_meta,omitempty"`
}
type RocmSmiCollectorDevice struct {
device rocm_smi.DeviceHandle
index int
tags map[string]string // default tags
meta map[string]string // default meta information
excludeMetrics map[string]bool // copy of exclude metrics from config
}
type RocmSmiCollector struct {
metricCollector
config RocmSmiCollectorConfig // the configuration structure
devices []RocmSmiCollectorDevice
}
// Functions to implement MetricCollector interface
// Init(...), Read(...), Close()
// See: metricCollector.go
// Init initializes the sample collector
// Called once by the collector manager
// All tags, meta data tags and metrics that do not change over the runtime should be set here
func (m *RocmSmiCollector) Init(config json.RawMessage) error {
var err error = nil
// Always set the name early in Init() to use it in cclog.Component* functions
m.name = "RocmSmiCollector"
// This is for later use, also call it early
m.setup()
// Define meta information sent with each metric
// (Can also be dynamic or this is the basic set with extension through AddMeta())
//m.meta = map[string]string{"source": m.name, "group": "AMD"}
// Define tags sent with each metric
// The 'type' tag is always needed, it defines the granularity of the metric
// node -> whole system
// socket -> CPU socket (requires socket ID as 'type-id' tag)
// cpu -> single CPU hardware thread (requires cpu ID as 'type-id' tag)
//m.tags = map[string]string{"type": "node"}
// Read in the JSON configuration
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
cclog.ComponentError(m.name, "Error reading config:", err.Error())
return err
}
}
ret := rocm_smi.Init()
if ret != rocm_smi.STATUS_SUCCESS {
err = errors.New("failed to initialize ROCm SMI library")
cclog.ComponentError(m.name, err.Error())
return err
}
numDevs, ret := rocm_smi.NumMonitorDevices()
if ret != rocm_smi.STATUS_SUCCESS {
err = errors.New("failed to get number of GPUs from ROCm SMI library")
cclog.ComponentError(m.name, err.Error())
return err
}
exclDev := func(s string) bool {
skip_device := false
for _, excl := range m.config.ExcludeDevices {
if excl == s {
skip_device = true
break
}
}
return skip_device
}
m.devices = make([]RocmSmiCollectorDevice, 0)
for i := 0; i < numDevs; i++ {
str_i := fmt.Sprintf("%d", i)
if exclDev(str_i) {
continue
}
device, ret := rocm_smi.DeviceGetHandleByIndex(i)
if ret != rocm_smi.STATUS_SUCCESS {
err = fmt.Errorf("failed to get handle for GPU %d", i)
cclog.ComponentError(m.name, err.Error())
return err
}
pciInfo, ret := rocm_smi.DeviceGetPciInfo(device)
if ret != rocm_smi.STATUS_SUCCESS {
err = fmt.Errorf("failed to get PCI information for GPU %d", i)
cclog.ComponentError(m.name, err.Error())
return err
}
pciId := fmt.Sprintf(
"%08X:%02X:%02X.%X",
pciInfo.Domain,
pciInfo.Bus,
pciInfo.Device,
pciInfo.Function)
if exclDev(pciId) {
continue
}
dev := RocmSmiCollectorDevice{
device: device,
tags: map[string]string{
"type": "accelerator",
"type-id": str_i,
},
meta: map[string]string{
"source": m.name,
"group": "AMD",
},
}
if m.config.UsePciInfoAsTypeId {
dev.tags["type-id"] = pciId
} else if m.config.AddPciInfoTag {
dev.tags["pci_identifier"] = pciId
}
if m.config.AddSerialMeta {
serial, ret := rocm_smi.DeviceGetSerialNumber(device)
if ret != rocm_smi.STATUS_SUCCESS {
cclog.ComponentError(m.name, "Unable to get serial number for device at index", i, ":", rocm_smi.StatusStringNoError(ret))
} else {
dev.meta["serial"] = serial
}
}
// Add excluded metrics
dev.excludeMetrics = map[string]bool{}
for _, e := range m.config.ExcludeMetrics {
dev.excludeMetrics[e] = true
}
dev.index = i
m.devices = append(m.devices, dev)
}
// Set this flag only if everything is initialized properly, all required files exist, ...
m.init = true
return err
}
// Read collects all metrics belonging to the sample collector
// and sends them through the output channel to the collector manager
func (m *RocmSmiCollector) Read(interval time.Duration, output chan lp.CCMessage) {
// Create a sample metric
timestamp := time.Now()
for _, dev := range m.devices {
metrics, ret := rocm_smi.DeviceGetMetrics(dev.device)
if ret != rocm_smi.STATUS_SUCCESS {
cclog.ComponentError(m.name, "Unable to get metrics for device at index", dev.index, ":", rocm_smi.StatusStringNoError(ret))
continue
}
if !dev.excludeMetrics["rocm_gfx_util"] {
value := metrics.Average_gfx_activity
y, err := lp.NewMessage("rocm_gfx_util", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_umc_util"] {
value := metrics.Average_umc_activity
y, err := lp.NewMessage("rocm_umc_util", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_mm_util"] {
value := metrics.Average_mm_activity
y, err := lp.NewMessage("rocm_mm_util", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_avg_power"] {
value := metrics.Average_socket_power
y, err := lp.NewMessage("rocm_avg_power", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_temp_mem"] {
value := metrics.Temperature_mem
y, err := lp.NewMessage("rocm_temp_mem", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_temp_hotspot"] {
value := metrics.Temperature_hotspot
y, err := lp.NewMessage("rocm_temp_hotspot", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_temp_edge"] {
value := metrics.Temperature_edge
y, err := lp.NewMessage("rocm_temp_edge", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_temp_vrgfx"] {
value := metrics.Temperature_vrgfx
y, err := lp.NewMessage("rocm_temp_vrgfx", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_temp_vrsoc"] {
value := metrics.Temperature_vrsoc
y, err := lp.NewMessage("rocm_temp_vrsoc", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_temp_vrmem"] {
value := metrics.Temperature_vrmem
y, err := lp.NewMessage("rocm_temp_vrmem", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_gfx_clock"] {
value := metrics.Average_gfxclk_frequency
y, err := lp.NewMessage("rocm_gfx_clock", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_soc_clock"] {
value := metrics.Average_socclk_frequency
y, err := lp.NewMessage("rocm_soc_clock", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_u_clock"] {
value := metrics.Average_uclk_frequency
y, err := lp.NewMessage("rocm_u_clock", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_v0_clock"] {
value := metrics.Average_vclk0_frequency
y, err := lp.NewMessage("rocm_v0_clock", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_v1_clock"] {
value := metrics.Average_vclk1_frequency
y, err := lp.NewMessage("rocm_v1_clock", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_d0_clock"] {
value := metrics.Average_dclk0_frequency
y, err := lp.NewMessage("rocm_d0_clock", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_d1_clock"] {
value := metrics.Average_dclk1_frequency
y, err := lp.NewMessage("rocm_d1_clock", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
output <- y
}
}
if !dev.excludeMetrics["rocm_temp_hbm"] {
for i := 0; i < rocm_smi.NUM_HBM_INSTANCES; i++ {
value := metrics.Temperature_hbm[i]
y, err := lp.NewMessage("rocm_temp_hbm", dev.tags, dev.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
y.AddTag("stype", "device")
y.AddTag("stype-id", fmt.Sprintf("%d", i))
output <- y
}
}
}
}
}
// Close metric collector: close network connection, close files, close libraries, ...
// Called once by the collector manager
func (m *RocmSmiCollector) Close() {
// Unset flag
ret := rocm_smi.Shutdown()
if ret != rocm_smi.STATUS_SUCCESS {
cclog.ComponentError(m.name, "Failed to shutdown ROCm SMI library")
}
m.init = false
}

View File

@@ -0,0 +1,47 @@
## `rocm_smi` collector
```json
"rocm_smi": {
"exclude_devices": [
"0","1", "0000000:ff:01.0"
],
"exclude_metrics": [
"rocm_mm_util",
"rocm_temp_vrsoc"
],
"use_pci_info_as_type_id": true,
"add_pci_info_tag": false,
"add_serial_meta": false,
}
```
The `rocm_smi` collector can be configured to leave out specific devices with the `exclude_devices` option. It takes either the logical ID of a device in the list of available devices or its PCI address in a format similar to NVML (`%08X:%02X:%02X.0`). Metrics (listed below) that should not be sent to the MetricRouter can be excluded with the `exclude_metrics` option.
The metrics sent by the `rocm_smi` collector use `accelerator` as `type` tag. For the `type-id`, it uses the device handle index by default. With the `use_pci_info_as_type_id` option, the PCI ID is used instead. If both values should be added as tags, activate the `add_pci_info_tag` option. It uses the device handle index as `type-id` and adds the PCI ID as separate `pci_identifier` tag.
Optionally, it is possible to add the serial number to the meta information. Meta information is not sent to the sinks (unless configured otherwise).
Metrics:
* `rocm_gfx_util`
* `rocm_umc_util`
* `rocm_mm_util`
* `rocm_avg_power`
* `rocm_temp_mem`
* `rocm_temp_hotspot`
* `rocm_temp_edge`
* `rocm_temp_vrgfx`
* `rocm_temp_vrsoc`
* `rocm_temp_vrmem`
* `rocm_gfx_clock`
* `rocm_soc_clock`
* `rocm_u_clock`
* `rocm_v0_clock`
* `rocm_v1_clock`
* `rocm_d0_clock`
* `rocm_d1_clock`
* `rocm_temp_hbm`
Some metrics add an additional sub type tag (`stype`); for example, the `rocm_temp_hbm` metric sets `stype=device,stype-id=<HBM_slice_number>`.
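To make the device exclusion concrete, here is a small standalone sketch (a hypothetical helper, not the collector's actual code) of how a device can be matched against the `exclude_devices` list either by its logical index or by its formatted PCI address, mirroring the `exclDev` helper in the collector above:
```go
package main

import "fmt"

// excluded reports whether a device should be skipped given the exclude_devices
// list. A device matches either by its logical index ("0", "1", ...) or by its
// PCI address formatted like "%08X:%02X:%02X.X". This is only an illustration.
func excluded(excludeDevices []string, index int, domain, bus, device, function uint64) bool {
	idStr := fmt.Sprintf("%d", index)
	pciID := fmt.Sprintf("%08X:%02X:%02X.%X", domain, bus, device, function)
	for _, excl := range excludeDevices {
		if excl == idStr || excl == pciID {
			return true
		}
	}
	return false
}

func main() {
	cfg := []string{"0", "00000000:FF:01.0"}         // hypothetical exclude_devices entries
	fmt.Println(excluded(cfg, 0, 0, 0xFF, 0x01, 0)) // true, matched by logical index "0"
	fmt.Println(excluded(cfg, 2, 0, 0x3B, 0x00, 0)) // false, not excluded
}
```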

101
collectors/sampleMetric.go Normal file
View File

@@ -0,0 +1,101 @@
package collectors
import (
"encoding/json"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
// These are the fields we read from the JSON configuration
type SampleCollectorConfig struct {
Interval string `json:"interval"`
}
// This contains all variables we need during execution and the variables
// defined by metricCollector (name, init, ...)
type SampleCollector struct {
metricCollector
config SampleCollectorConfig // the configuration structure
meta map[string]string // default meta information
tags map[string]string // default tags
}
// Functions to implement MetricCollector interface
// Init(...), Read(...), Close()
// See: metricCollector.go
// Init initializes the sample collector
// Called once by the collector manager
// All tags, meta data tags and metrics that do not change over the runtime should be set here
func (m *SampleCollector) Init(config json.RawMessage) error {
var err error = nil
// Always set the name early in Init() to use it in cclog.Component* functions
m.name = "SampleCollector"
// This is for later use, also call it early
m.setup()
// Tell whether the collector should be run in parallel with others (reading files, ...)
// or it should be run serially, mostly for collectors actually doing measurements
// because they should not measure the execution of the other collectors
m.parallel = true
// Define meta information sent with each metric
// (Can also be dynamic or this is the basic set with extension through AddMeta())
m.meta = map[string]string{"source": m.name, "group": "SAMPLE"}
// Define tags sent with each metric
// The 'type' tag is always needed, it defines the granularity of the metric
// node -> whole system
// socket -> CPU socket (requires socket ID as 'type-id' tag)
// die -> CPU die (requires CPU die ID as 'type-id' tag)
// memoryDomain -> NUMA domain (requires NUMA domain ID as 'type-id' tag)
// llc -> Last level cache (requires last level cache ID as 'type-id' tag)
// core -> single CPU core that may consist of multiple hardware threads (SMT) (requires core ID as 'type-id' tag)
// hwthread -> single CPU hardware thread (requires hardware thread ID as 'type-id' tag)
// accelerator -> an accelerator device like GPU or FPGA (requires an accelerator ID as 'type-id' tag)
m.tags = map[string]string{"type": "node"}
// Read in the JSON configuration
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
cclog.ComponentError(m.name, "Error reading config:", err.Error())
return err
}
}
// Set up everything that the collector requires during the Read() execution
// Check files required, test execution of some commands, create data structure
// for all topological entities (sockets, NUMA domains, ...)
// Return some useful error message in case of any failures
// Set this flag only if everything is initialized properly, all required files exist, ...
m.init = true
return err
}
// Read collects all metrics belonging to the sample collector
// and sends them through the output channel to the collector manager
func (m *SampleCollector) Read(interval time.Duration, output chan lp.CCMessage) {
// Create a sample metric
timestamp := time.Now()
value := 1.0
// If you want to measure something for a specific amount of time, use interval
// start := readState()
// time.Sleep(interval)
// stop := readState()
// value = (stop - start) / interval.Seconds()
y, err := lp.NewMessage("sample_metric", m.tags, m.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil {
// Send it to output channel
output <- y
}
}
// Close metric collector: close network connection, close files, close libraries, ...
// Called once by the collector manager
func (m *SampleCollector) Close() {
// Unset flag
m.init = false
}

View File

@@ -0,0 +1,122 @@
package collectors
import (
"encoding/json"
"sync"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
// These are the fields we read from the JSON configuration
type SampleTimerCollectorConfig struct {
Interval string `json:"interval"`
}
// This contains all variables we need during execution and the variables
// defined by metricCollector (name, init, ...)
type SampleTimerCollector struct {
metricCollector
wg sync.WaitGroup // sync group for management
done chan bool // channel for management
meta map[string]string // default meta information
tags map[string]string // default tags
config SampleTimerCollectorConfig // the configuration structure
interval time.Duration // the interval parsed from configuration
ticker *time.Ticker // own timer
output chan lp.CCMessage // own internal output channel
}
func (m *SampleTimerCollector) Init(name string, config json.RawMessage) error {
var err error = nil
// Always set the name early in Init() to use it in cclog.Component* functions
m.name = "SampleTimerCollector"
// This is for later use, also call it early
m.setup()
// Define meta information sent with each metric
// (Can also be dynamic or this is the basic set with extension through AddMeta())
m.meta = map[string]string{"source": m.name, "group": "SAMPLE"}
// Define tags sent with each metric
// The 'type' tag is always needed, it defines the granularity of the metric
// node -> whole system
// socket -> CPU socket (requires socket ID as 'type-id' tag)
// cpu -> single CPU hardware thread (requires cpu ID as 'type-id' tag)
m.tags = map[string]string{"type": "node"}
// Read in the JSON configuration
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
cclog.ComponentError(m.name, "Error reading config:", err.Error())
return err
}
}
// Parse the read interval duration
m.interval, err = time.ParseDuration(m.config.Interval)
if err != nil {
cclog.ComponentError(m.name, "Error parsing interval:", err.Error())
return err
}
// Storage for output channel
m.output = nil
// Management channel for the timer function.
m.done = make(chan bool)
// Create the own ticker
m.ticker = time.NewTicker(m.interval)
// Start the timer loop with return functionality by sending 'true' to the done channel
m.wg.Add(1)
go func() {
for {
select {
case <-m.done:
// Exit the timer loop
cclog.ComponentDebug(m.name, "Closing...")
m.wg.Done()
return
case timestamp := <-m.ticker.C:
// This is executed every timer tick but we have to wait until the first
// Read() to get the output channel
if m.output != nil {
m.ReadMetrics(timestamp)
}
}
}
}()
// Set this flag only if everything is initialized properly, all required files exist, ...
m.init = true
return err
}
// This function is called at each interval timer tick
func (m *SampleTimerCollector) ReadMetrics(timestamp time.Time) {
// Create a sample metric
value := 1.0
// If you want to measure something for a specific amount of time, use interval
// start := readState()
// time.Sleep(interval)
// stop := readState()
// value = (stop - start) / interval.Seconds()
y, err := lp.NewMessage("sample_metric", m.tags, m.meta, map[string]interface{}{"value": value}, timestamp)
if err == nil && m.output != nil {
// Send it to output channel if we have a valid channel
m.output <- y
}
}
func (m *SampleTimerCollector) Read(interval time.Duration, output chan lp.CCMessage) {
// Capture output channel
m.output = output
}
func (m *SampleTimerCollector) Close() {
// Send signal to the timer loop to stop it
m.done <- true
// Wait until the timer loop is done
m.wg.Wait()
// Unset flag
m.init = false
}

View File

@@ -0,0 +1,154 @@
package collectors
import (
"bufio"
"encoding/json"
"fmt"
"math"
"os"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
const SCHEDSTATFILE = `/proc/schedstat`
// These are the fields we read from the JSON configuration
type SchedstatCollectorConfig struct {
ExcludeMetrics []string `json:"exclude_metrics,omitempty"`
}
// This contains all variables we need during execution and the variables
// defined by metricCollector (name, init, ...)
type SchedstatCollector struct {
metricCollector
config SchedstatCollectorConfig // the configuration structure
lastTimestamp time.Time // Store time stamp of last tick to derive values
meta map[string]string // default meta information
cputags map[string]map[string]string // default tags
olddata map[string]map[string]int64 // default tags
}
// Functions to implement MetricCollector interface
// Init(...), Read(...), Close()
// See: metricCollector.go
// Init initializes the sample collector
// Called once by the collector manager
// All tags, meta data tags and metrics that do not change over the runtime should be set here
func (m *SchedstatCollector) Init(config json.RawMessage) error {
var err error = nil
// Always set the name early in Init() to use it in cclog.Component* functions
m.name = "SchedstatCollector"
// This is for later use, also call it early
m.setup()
// Tell whether the collector should be run in parallel with others (reading files, ...)
// or it should be run serially, mostly for collectors actually doing measurements
// because they should not measure the execution of the other collectors
m.parallel = true
// Define meta information sent with each metric
// (Can also be dynamic or this is the basic set with extension through AddMeta())
m.meta = map[string]string{"source": m.name, "group": "SCHEDSTAT"}
// Read in the JSON configuration
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
cclog.ComponentError(m.name, "Error reading config:", err.Error())
return err
}
}
// Check input file
file, err := os.Open(string(SCHEDSTATFILE))
if err != nil {
cclog.ComponentError(m.name, err.Error())
}
defer file.Close()
// Pre-generate tags for all CPUs
num_cpus := 0
m.cputags = make(map[string]map[string]string)
m.olddata = make(map[string]map[string]int64)
scanner := bufio.NewScanner(file)
for scanner.Scan() {
line := scanner.Text()
linefields := strings.Fields(line)
if strings.HasPrefix(linefields[0], "cpu") && strings.Compare(linefields[0], "cpu") != 0 {
cpustr := strings.TrimLeft(linefields[0], "cpu")
cpu, _ := strconv.Atoi(cpustr)
running, _ := strconv.ParseInt(linefields[7], 10, 64)
waiting, _ := strconv.ParseInt(linefields[8], 10, 64)
m.cputags[linefields[0]] = map[string]string{"type": "hwthread", "type-id": fmt.Sprintf("%d", cpu)}
m.olddata[linefields[0]] = map[string]int64{"running": running, "waiting": waiting}
num_cpus++
}
}
// Save current timestamp
m.lastTimestamp = time.Now()
// Set this flag only if everything is initialized properly, all required files exist, ...
m.init = true
return err
}
func (m *SchedstatCollector) ParseProcLine(linefields []string, tags map[string]string, output chan lp.CCMessage, now time.Time, tsdelta time.Duration) {
running, _ := strconv.ParseInt(linefields[7], 10, 64)
waiting, _ := strconv.ParseInt(linefields[8], 10, 64)
diff_running := running - m.olddata[linefields[0]]["running"]
diff_waiting := waiting - m.olddata[linefields[0]]["waiting"]
var l_running float64 = float64(diff_running) / tsdelta.Seconds() / (math.Pow(1000, 3))
var l_waiting float64 = float64(diff_waiting) / tsdelta.Seconds() / (math.Pow(1000, 3))
m.olddata[linefields[0]]["running"] = running
m.olddata[linefields[0]]["waiting"] = waiting
value := l_running + l_waiting
y, err := lp.NewMessage("cpu_load_core", tags, m.meta, map[string]interface{}{"value": value}, now)
if err == nil {
// Send it to output channel
output <- y
}
}
// Read collects all metrics belonging to the sample collector
// and sends them through the output channel to the collector manager
func (m *SchedstatCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
//timestamps
now := time.Now()
tsdelta := now.Sub(m.lastTimestamp)
file, err := os.Open(string(SCHEDSTATFILE))
if err != nil {
cclog.ComponentError(m.name, err.Error())
}
defer file.Close()
scanner := bufio.NewScanner(file)
for scanner.Scan() {
line := scanner.Text()
linefields := strings.Fields(line)
if strings.HasPrefix(linefields[0], "cpu") {
m.ParseProcLine(linefields, m.cputags[linefields[0]], output, now, tsdelta)
}
}
m.lastTimestamp = now
}
// Close metric collector: close network connection, close files, close libraries, ...
// Called once by the collector manager
func (m *SchedstatCollector) Close() {
// Unset flag
m.init = false
}

View File

@@ -0,0 +1,11 @@
## `schedstat` collector
```json
"schedstat": {
}
```
The `schedstat` collector reads data from `/proc/schedstat` and calculates a load value per hwthread. This might be useful to detect bad CPU pinning on shared nodes.
Metric:
* `cpu_load_core`
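A minimal, standalone sketch (made-up numbers) of the load derivation used by the collector: the increase of the cumulative running and waiting counters of a `cpuN` line (fields 8 and 9, interpreted as nanoseconds) is divided by the elapsed wall-clock time and scaled from nanoseconds to seconds:
```go
package main

import "fmt"

// coreLoad derives the cpu_load_core value the same way as the collector:
// the increase of the cumulative "running" and "waiting" counters of a cpuN
// line in /proc/schedstat over the elapsed wall-clock time, scaled from
// nanoseconds to seconds.
func coreLoad(runningNow, runningPrev, waitingNow, waitingPrev int64, elapsedSeconds float64) float64 {
	lRunning := float64(runningNow-runningPrev) / elapsedSeconds / 1e9
	lWaiting := float64(waitingNow-waitingPrev) / elapsedSeconds / 1e9
	return lRunning + lWaiting
}

func main() {
	// Hypothetical example: 8e9 ns running and 2e9 ns waiting accumulated
	// within a 10 s interval result in a load of 1.0 for this hwthread.
	fmt.Println(coreLoad(8_000_000_000, 0, 2_000_000_000, 0, 10.0))
}
```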

144
collectors/selfMetric.go Normal file
View File

@@ -0,0 +1,144 @@
package collectors
import (
"encoding/json"
"runtime"
"syscall"
"time"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
type SelfCollectorConfig struct {
MemStats bool `json:"read_mem_stats"`
GoRoutines bool `json:"read_goroutines"`
CgoCalls bool `json:"read_cgo_calls"`
Rusage bool `json:"read_rusage"`
}
type SelfCollector struct {
metricCollector
config SelfCollectorConfig // the configuration structure
meta map[string]string // default meta information
tags map[string]string // default tags
}
func (m *SelfCollector) Init(config json.RawMessage) error {
var err error = nil
m.name = "SelfCollector"
m.setup()
m.parallel = true
m.meta = map[string]string{"source": m.name, "group": "Self"}
m.tags = map[string]string{"type": "node"}
if len(config) > 0 {
err = json.Unmarshal(config, &m.config)
if err != nil {
cclog.ComponentError(m.name, "Error reading config:", err.Error())
return err
}
}
m.init = true
return err
}
func (m *SelfCollector) Read(interval time.Duration, output chan lp.CCMessage) {
timestamp := time.Now()
if m.config.MemStats {
var memstats runtime.MemStats
runtime.ReadMemStats(&memstats)
y, err := lp.NewMessage("total_alloc", m.tags, m.meta, map[string]interface{}{"value": memstats.TotalAlloc}, timestamp)
if err == nil {
y.AddMeta("unit", "Bytes")
output <- y
}
y, err = lp.NewMessage("heap_alloc", m.tags, m.meta, map[string]interface{}{"value": memstats.HeapAlloc}, timestamp)
if err == nil {
y.AddMeta("unit", "Bytes")
output <- y
}
y, err = lp.NewMessage("heap_sys", m.tags, m.meta, map[string]interface{}{"value": memstats.HeapSys}, timestamp)
if err == nil {
y.AddMeta("unit", "Bytes")
output <- y
}
y, err = lp.NewMessage("heap_idle", m.tags, m.meta, map[string]interface{}{"value": memstats.HeapIdle}, timestamp)
if err == nil {
y.AddMeta("unit", "Bytes")
output <- y
}
y, err = lp.NewMessage("heap_inuse", m.tags, m.meta, map[string]interface{}{"value": memstats.HeapInuse}, timestamp)
if err == nil {
y.AddMeta("unit", "Bytes")
output <- y
}
y, err = lp.NewMessage("heap_released", m.tags, m.meta, map[string]interface{}{"value": memstats.HeapReleased}, timestamp)
if err == nil {
y.AddMeta("unit", "Bytes")
output <- y
}
y, err = lp.NewMessage("heap_objects", m.tags, m.meta, map[string]interface{}{"value": memstats.HeapObjects}, timestamp)
if err == nil {
output <- y
}
}
if m.config.GoRoutines {
y, err := lp.NewMessage("num_goroutines", m.tags, m.meta, map[string]interface{}{"value": runtime.NumGoroutine()}, timestamp)
if err == nil {
output <- y
}
}
if m.config.CgoCalls {
y, err := lp.NewMessage("num_cgo_calls", m.tags, m.meta, map[string]interface{}{"value": runtime.NumCgoCall()}, timestamp)
if err == nil {
output <- y
}
}
if m.config.Rusage {
var rusage syscall.Rusage
err := syscall.Getrusage(syscall.RUSAGE_SELF, &rusage)
if err == nil {
sec, nsec := rusage.Utime.Unix()
t := float64(sec) + (float64(nsec) * 1e-9)
y, err := lp.NewMessage("rusage_user_time", m.tags, m.meta, map[string]interface{}{"value": t}, timestamp)
if err == nil {
y.AddMeta("unit", "seconds")
output <- y
}
sec, nsec = rusage.Stime.Unix()
t = float64(sec) + (float64(nsec) * 1e-9)
y, err = lp.NewMessage("rusage_system_time", m.tags, m.meta, map[string]interface{}{"value": t}, timestamp)
if err == nil {
y.AddMeta("unit", "seconds")
output <- y
}
y, err = lp.NewMessage("rusage_vol_ctx_switch", m.tags, m.meta, map[string]interface{}{"value": rusage.Nvcsw}, timestamp)
if err == nil {
output <- y
}
y, err = lp.NewMessage("rusage_invol_ctx_switch", m.tags, m.meta, map[string]interface{}{"value": rusage.Nivcsw}, timestamp)
if err == nil {
output <- y
}
y, err = lp.NewMessage("rusage_signals", m.tags, m.meta, map[string]interface{}{"value": rusage.Nsignals}, timestamp)
if err == nil {
output <- y
}
y, err = lp.NewMessage("rusage_major_pgfaults", m.tags, m.meta, map[string]interface{}{"value": rusage.Majflt}, timestamp)
if err == nil {
output <- y
}
y, err = lp.NewMessage("rusage_minor_pgfaults", m.tags, m.meta, map[string]interface{}{"value": rusage.Minflt}, timestamp)
if err == nil {
output <- y
}
}
}
}
func (m *SelfCollector) Close() {
m.init = false
}

34
collectors/selfMetric.md Normal file
View File

@@ -0,0 +1,34 @@
## `self` collector
```json
"self": {
"read_mem_stats" : true,
"read_goroutines" : true,
"read_cgo_calls" : true,
"read_rusage" : true
}
```
The `self` collector reads data from the `runtime` and `syscall` packages and thus monitors the execution of the cc-metric-collector itself.
Metrics:
* If `read_mem_stats == true`:
* `total_alloc`: The metric reports cumulative bytes allocated for heap objects.
* `heap_alloc`: The metric reports bytes of allocated heap objects.
* `heap_sys`: The metric reports bytes of heap memory obtained from the OS.
* `heap_idle`: The metric reports bytes in idle (unused) spans.
* `heap_inuse`: The metric reports bytes in in-use spans.
* `heap_released`: The metric reports bytes of physical memory returned to the OS.
* `heap_objects`: The metric reports the number of allocated heap objects.
* If `read_goroutines == true`:
* `num_goroutines`: The metric reports the number of goroutines that currently exist.
* If `read_cgo_calls == true`:
* `num_cgo_calls`: The metric reports the number of cgo calls made by the current process.
* If `read_rusage == true`:
* `rusage_user_time`: The metric reports the amount of time that this process has been scheduled in user mode.
* `rusage_system_time`: The metric reports the amount of time that this process has been scheduled in kernel mode.
* `rusage_vol_ctx_switch`: The metric reports the number of voluntary context switches.
* `rusage_invol_ctx_switch`: The metric reports the number of involuntary context switches.
* `rusage_signals`: The metric reports the number of signals received.
* `rusage_major_pgfaults`: The metric reports the number of major faults the process has made which have required loading a memory page from disk.
* `rusage_minor_pgfaults`: The metric reports the number of minor faults the process has made which have not required loading a memory page from disk.
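For reference, here is a small standalone Linux program (not the collector itself) that obtains the same kind of values directly from the `runtime` and `syscall` packages, including the conversion of the rusage user time to seconds as done in the collector:
```go
package main

import (
	"fmt"
	"runtime"
	"syscall"
)

func main() {
	// Go runtime memory statistics (basis of the heap_* metrics).
	var memstats runtime.MemStats
	runtime.ReadMemStats(&memstats)
	fmt.Println("heap_alloc bytes:", memstats.HeapAlloc)

	// Goroutine and cgo call counters.
	fmt.Println("num_goroutines:", runtime.NumGoroutine())
	fmt.Println("num_cgo_calls:", runtime.NumCgoCall())

	// Resource usage of the current process; the user time is converted to
	// seconds the same way as in the collector (sec + nsec * 1e-9).
	var rusage syscall.Rusage
	if err := syscall.Getrusage(syscall.RUSAGE_SELF, &rusage); err == nil {
		sec, nsec := rusage.Utime.Unix()
		fmt.Println("rusage_user_time seconds:", float64(sec)+float64(nsec)*1e-9)
	}
}
```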

View File

@@ -3,14 +3,14 @@ package collectors
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strconv"
"strings"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
// See: https://www.kernel.org/doc/html/latest/hwmon/sysfs-interface.html
@@ -50,6 +50,7 @@ func (m *TempCollector) Init(config json.RawMessage) error {
}
m.name = "TempCollector"
m.parallel = true
m.setup()
if len(config) > 0 {
err := json.Unmarshal(config, &m.config)
@@ -70,10 +71,10 @@ func (m *TempCollector) Init(config json.RawMessage) error {
globPattern := filepath.Join("/sys/class/hwmon", "*", "temp*_input")
inputFiles, err := filepath.Glob(globPattern)
if err != nil {
return fmt.Errorf("Unable to glob files with pattern '%s': %v", globPattern, err)
return fmt.Errorf("unable to glob files with pattern '%s': %v", globPattern, err)
}
if inputFiles == nil {
return fmt.Errorf("Unable to find any files with pattern '%s'", globPattern)
return fmt.Errorf("unable to find any files with pattern '%s'", globPattern)
}
// Get sensor name for each temperature sensor file
@@ -82,14 +83,14 @@ func (m *TempCollector) Init(config json.RawMessage) error {
// sensor name
nameFile := filepath.Join(filepath.Dir(file), "name")
name, err := ioutil.ReadFile(nameFile)
name, err := os.ReadFile(nameFile)
if err == nil {
sensor.name = strings.TrimSpace(string(name))
}
// sensor label
labelFile := strings.TrimSuffix(file, "_input") + "_label"
label, err := ioutil.ReadFile(labelFile)
label, err := os.ReadFile(labelFile)
if err == nil {
sensor.label = strings.TrimSpace(string(label))
}
@@ -116,6 +117,10 @@ func (m *TempCollector) Init(config json.RawMessage) error {
}
// Sensor file
_, err = os.ReadFile(file)
if err != nil {
continue
}
sensor.file = file
// Sensor tags
@@ -134,7 +139,7 @@ func (m *TempCollector) Init(config json.RawMessage) error {
// max temperature
if m.config.ReportMaxTemp {
maxTempFile := strings.TrimSuffix(file, "_input") + "_max"
if buffer, err := ioutil.ReadFile(maxTempFile); err == nil {
if buffer, err := os.ReadFile(maxTempFile); err == nil {
if x, err := strconv.ParseInt(strings.TrimSpace(string(buffer)), 10, 64); err == nil {
sensor.maxTempName = strings.Replace(sensor.metricName, "temp", "max_temp", 1)
sensor.maxTemp = x / 1000
@@ -145,7 +150,7 @@ func (m *TempCollector) Init(config json.RawMessage) error {
// critical temperature
if m.config.ReportCriticalTemp {
criticalTempFile := strings.TrimSuffix(file, "_input") + "_crit"
if buffer, err := ioutil.ReadFile(criticalTempFile); err == nil {
if buffer, err := os.ReadFile(criticalTempFile); err == nil {
if x, err := strconv.ParseInt(strings.TrimSpace(string(buffer)), 10, 64); err == nil {
sensor.critTempName = strings.Replace(sensor.metricName, "temp", "crit_temp", 1)
sensor.critTemp = x / 1000
@@ -158,7 +163,7 @@ func (m *TempCollector) Init(config json.RawMessage) error {
// Empty sensors map
if len(m.sensors) == 0 {
return fmt.Errorf("No temperature sensors found")
return fmt.Errorf("no temperature sensors found")
}
// Finished initialization
@@ -166,11 +171,11 @@ func (m *TempCollector) Init(config json.RawMessage) error {
return nil
}
func (m *TempCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *TempCollector) Read(interval time.Duration, output chan lp.CCMessage) {
for _, sensor := range m.sensors {
// Read sensor file
buffer, err := ioutil.ReadFile(sensor.file)
buffer, err := os.ReadFile(sensor.file)
if err != nil {
cclog.ComponentError(
m.name,
@@ -185,7 +190,7 @@ func (m *TempCollector) Read(interval time.Duration, output chan lp.CCMetric) {
continue
}
x /= 1000
y, err := lp.New(
y, err := lp.NewMessage(
sensor.metricName,
sensor.tags,
m.meta,
@@ -198,7 +203,7 @@ func (m *TempCollector) Read(interval time.Duration, output chan lp.CCMetric) {
// max temperature
if m.config.ReportMaxTemp && sensor.maxTemp != 0 {
y, err := lp.New(
y, err := lp.NewMessage(
sensor.maxTempName,
sensor.tags,
m.meta,
@@ -212,7 +217,7 @@ func (m *TempCollector) Read(interval time.Duration, output chan lp.CCMetric) {
// critical temperature
if m.config.ReportCriticalTemp && sensor.critTemp != 0 {
y, err := lp.New(
y, err := lp.NewMessage(
sensor.critTempName,
sensor.tags,
m.meta,

View File

@@ -9,7 +9,7 @@ import (
"strings"
"time"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
)
const MAX_NUM_PROCS = 10
@@ -28,6 +28,7 @@ type TopProcsCollector struct {
func (m *TopProcsCollector) Init(config json.RawMessage) error {
var err error
m.name = "TopProcsCollector"
m.parallel = true
m.tags = map[string]string{"type": "node"}
m.meta = map[string]string{"source": m.name, "group": "TopProcs"}
if len(config) > 0 {
@@ -39,20 +40,20 @@ func (m *TopProcsCollector) Init(config json.RawMessage) error {
m.config.Num_procs = int(DEFAULT_NUM_PROCS)
}
if m.config.Num_procs <= 0 || m.config.Num_procs > MAX_NUM_PROCS {
return errors.New(fmt.Sprintf("num_procs option must be set in 'topprocs' config (range: 1-%d)", MAX_NUM_PROCS))
return fmt.Errorf("num_procs option must be set in 'topprocs' config (range: 1-%d)", MAX_NUM_PROCS)
}
m.setup()
command := exec.Command("ps", "-Ao", "comm", "--sort=-pcpu")
command.Wait()
_, err = command.Output()
if err != nil {
return errors.New("Failed to execute command")
return errors.New("failed to execute command")
}
m.init = true
return nil
}
func (m *TopProcsCollector) Read(interval time.Duration, output chan lp.CCMetric) {
func (m *TopProcsCollector) Read(interval time.Duration, output chan lp.CCMessage) {
if !m.init {
return
}
@@ -67,7 +68,7 @@ func (m *TopProcsCollector) Read(interval time.Duration, output chan lp.CCMetric
lines := strings.Split(string(stdout), "\n")
for i := 1; i < m.config.Num_procs+1; i++ {
name := fmt.Sprintf("topproc%d", i)
y, err := lp.New(name, m.tags, m.meta, map[string]interface{}{"value": string(lines[i])}, time.Now())
y, err := lp.NewMessage(name, m.tags, m.meta, map[string]interface{}{"value": string(lines[i])}, time.Now())
if err == nil {
output <- y
}

View File

@@ -1,8 +1,10 @@
{
"sinks": "sinks.json",
"collectors" : "collectors.json",
"receivers" : "receivers.json",
"router" : "router.json",
"interval": 10,
"duration": 1
"sinks-file": "./sinks.json",
"collectors-file" : "./collectors.json",
"receivers-file" : "./receivers.json",
"router-file" : "./router.json",
"main" : {
"interval": "10s",
"duration": "1s"
}
}

74
docs/building.md Normal file
View File

@@ -0,0 +1,74 @@
# Building the cc-metric-collector
In most cases, a simple `make` in the main folder is enough to get a `cc-metric-collector` binary. It is basically a `go build`, but some collectors require additional build steps. There is currently no Golang interface to LIKWID, so `cgo` is used to create bindings, and `cgo` requires the LIKWID header files. Therefore, the build checks whether LIKWID is installed and, if it is not, downloads LIKWID and copies the headers.
## System integration
The main configuration settings for system integration are pre-defined in `scripts/cc-metric-collector.config`. The file contains the UNIX user and group used for execution, the PID file location and other settings. Adjust it accordingly and copy it to `/etc/default/cc-metric-collector`
```bash
$ install --mode 644 \
--owner $CC_USER \
--group $CC_GROUP \
scripts/cc-metric-collector.config /etc/default/cc-metric-collector
$ edit /etc/default/cc-metric-collector
```
### SysVinit and similar
If you are using an init system based on `/etc/init.d` scripts, you can use the sample `scripts/cc-metric-collector.init`. It reads the basic configuration from `/etc/default/cc-metric-collector`
```bash
$ install --mode 755 \
--owner $CC_USER \
--group $CC_GROUP \
scripts/cc-metric-collector.init /etc/init.d/cc-metric-collector
```
### Systemd
If you are using `systemd` as init system, you can use the sample systemd service file `scripts/cc-metric-collector.service` together with the configuration file `scripts/cc-metric-collector.config`.
```bash
$ install --mode 644 \
--owner $CC_USER \
--group $CC_GROUP \
scripts/cc-metric-collector.service /etc/systemd/system/cc-metric-collector.service
$ systemctl enable cc-metric-collector
```
## Packaging
### RPM
In order to get an RPM package for cc-metric-collector, just use:
```bash
$ make RPM
```
It uses the RPM SPEC file `scripts/cc-metric-collector.spec` and requires the RPM tools (`rpm` and `rpmspec`) and `git`.
### DEB
In order to get a very simple Debian package for cc-metric-collector, just use:
```bash
$ make DEB
```
It uses the DEB control file `scripts/cc-metric-collector.control` and requires `dpkg-deb`, `awk`, `sed` and `git`. It creates only a binary deb package.
_This option is not well tested and therefore experimental_
### Customizing RPMs or DEB packages
If you want to customize the RPMs or DEB packages for your local system, use the following workflow.
- (if there is already a fork in the private account, delete it and wait until GitHub has processed the deletion)
- Fork the cc-metric-collector repository (if GitHub has not processed the deletion yet, it creates a fork named cc-metric-collector2)
- Go to the private cc-metric-collector repository and enable GitHub Actions
- Do changes to the scripts, code, ... Commit and push your changes.
- Tag the new commit with `v0.x.y-<myversion>` (`git tag v0.x.y-<myversion>`)
- Push tags to repository (`git push --tags`)
- Wait until the Release action finishes. It creates fresh RPMs and DEBs in your private repository on the Releases page.

189
docs/configuration.md Normal file
View File

@@ -0,0 +1,189 @@
# Configuring the CC metric collector
The configuration of the CC metric collector consists of five configuration files: one global file and four component related files.
## Global configuration
The global file contains the paths to the other four files and some global options.
```json
{
"sinks": "sinks.json",
"collectors" : "collectors.json",
"receivers" : "receivers.json",
"router" : "router.json",
"interval": "10s",
"duration": "1s"
}
```
Be aware that the paths are relative to the execution folder of the cc-metric-collector binary, so it is recommended to use absolute paths.
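As an illustration (a hypothetical, simplified struct; the real collector uses its own types), the global file can be read by unmarshalling it into a struct that holds the component file paths and the global options:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// globalConfig is a simplified, hypothetical representation of the global
// configuration file shown above.
type globalConfig struct {
	Sinks      string `json:"sinks"`
	Collectors string `json:"collectors"`
	Receivers  string `json:"receivers"`
	Router     string `json:"router"`
	Interval   string `json:"interval"`
	Duration   string `json:"duration"`
}

func main() {
	raw := []byte(`{
		"sinks": "sinks.json",
		"collectors": "collectors.json",
		"receivers": "receivers.json",
		"router": "router.json",
		"interval": "10s",
		"duration": "1s"
	}`)
	var cfg globalConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		fmt.Println("invalid global configuration:", err)
		return
	}
	fmt.Println("collector configuration file:", cfg.Collectors)
}
```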
## Component configuration
The other files are mainly lists of subcomponents: the collectors, the receivers, the router and the sinks. Their roles are best shown in a picture:
```mermaid
flowchart LR
subgraph col ["Collectors"]
direction TB
cpustat["cpustat"]
memstat["memstat"]
tempstat["tempstat"]
misc["..."]
end
subgraph Receivers ["Receivers"]
direction TB
nats["NATS"]
httprecv["HTTP"]
miscrecv[...]
end
subgraph calc["Aggregator"]
direction LR
cache["Cache"]
agg["Calculator"]
end
subgraph sinks ["Sinks"]
direction RL
influx["InfluxDB"]
ganglia["Ganglia"]
logger["Logfile"]
miscsink["..."]
end
cpustat --> CollectorManager["CollectorManager"]
memstat --> CollectorManager
tempstat --> CollectorManager
misc --> CollectorManager
nats --> ReceiverManager["ReceiverManager"]
httprecv --> ReceiverManager
miscrecv --> ReceiverManager
CollectorManager --> newrouter["Router"]
ReceiverManager -.-> newrouter
calc -.-> newrouter
newrouter --> SinkManager["SinkManager"]
newrouter -.-> calc
SinkManager --> influx
SinkManager --> ganglia
SinkManager --> logger
SinkManager --> miscsink
```
There are four parts:
- The collectors read data from files, execute commands and call functions of dynamically loaded libraries, and send the resulting metrics to the router
- The router can process metrics by caching them and evaluating functions and conditions on them
- The sinks send the metrics to storage backends
- The receivers can be used to receive metrics from other collectors and forward them to the router. They can be used to create a tree-like structure of collectors.
(A better way to differentiate collectors and receivers is that collectors are called periodically, while receivers have their own logic and can submit metrics at any time.)
### Collectors configuration file
The collectors configuration file tells which metrics should be queried from the system. The metric gathering is logically grouped in so-called 'Collectors'. So there are Collectors to read CPU, memory or filesystem statistics. The collectors configuration file is a list of these collectors with collector-specific configurations:
```json
{
"cpustat" : {},
"diskstat": {
"exclude_metrics": [
"disk_total"
]
}
}
```
The first one is the CPU statistics collector without any collector-specific setting. The second one enables disk mount statistics but excludes the metric `disk_total`.
All names and possible collector-specific configuration options can be found [here](../collectors/README.md).
Some collectors might dynamically load shared libraries. In order to enable these collectors, make sure that the shared library path is part of the `LD_LIBRARY_PATH` environment variable.
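To make this concrete, here is a minimal sketch (a hypothetical loader, not the actual collector manager) that decodes such a file into a map from collector name to its raw JSON section, which would then be handed to the matching collector's `Init(json.RawMessage)`:
```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Content of a collectors.json like the example above.
	raw := []byte(`{
		"cpustat": {},
		"diskstat": { "exclude_metrics": ["disk_total"] }
	}`)

	// Each top-level key selects a collector; its value is kept as raw JSON,
	// so every collector can define its own configuration options.
	var collectors map[string]json.RawMessage
	if err := json.Unmarshal(raw, &collectors); err != nil {
		fmt.Println("invalid collectors configuration:", err)
		return
	}
	for name, cfg := range collectors {
		// In the real collector manager, cfg would be handed to the matching
		// collector's Init(config json.RawMessage).
		fmt.Printf("collector %q with config %s\n", name, string(cfg))
	}
}
```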
### Sinks configuration file
The sinks define the output/sending of metrics. The metrics can be forwarded to multiple sinks, even to sinks of the same type. The sinks configuration file is a list of these sinks, each with an individual name.
```json
{
"myinflux" : {
"type" : "influxasync",
"host": "localhost",
"port": "8086",
"organization" : "testorga",
"database" : "testbucket",
"password" : "<my secret JWT>"
},
"companyinflux" : {
"type" : "influxasync",
"host": "companyhost",
"port": "8086",
"organization" : "company",
"database" : "main",
"password" : "<company's secret JWT>"
}
}
```
The above example configuration file defines two sinks, both of type `influxasync`. They are differentiated internally by the names: `myinflux` and `companyinflux`.
All types and possible sink-specific configuration options can be found [here](../sinks/README.md).
Some sinks might dynamically load shared libraries. In order to enable these sinks, make sure that the shared library path is part of the `LD_LIBRARY_PATH` environment variable.
### Router configuration file
The collectors and the sinks are connected through the router. The router forwards the metrics to the sinks and additionally enables some data processing. A common example is to tag all passing metrics, e.g. by adding `cluster=mycluster`. Aggregations like "take the average of all 'ipc' metrics" (ipc -> Instructions Per Cycle) are also possible. Since the configuration of these aggregations can be quite complicated, we refer to the router's [README](../internal/metricRouter/README.md).
A simple router configuration file to start with looks like this:
```json
{
"add_tags" : [
{
"key" : "cluster",
"value" : "mycluster",
"if" : "*"
}
],
"interval_timestamp" : false,
"num_cache_intervals" : 0
}
```
With the `add_tags` section, we tell the router to attach the `cluster=mycluster` tag to every metric (`*`). The `interval_timestamp` option tells the router not to touch the timestamps of the metrics; it is possible to send all metrics within an interval with a common timestamp to avoid later alignment issues. Setting `num_cache_intervals` to `0` disables the cache completely. The cache is only required if you want to do complex metric aggregations.
All configuration options can be found [here](../internal/metricRouter/README.md).
### Receivers configuration file
The receivers are a special feature of the CC Metric Collector to enable simpler integration into existing setups. While collectors query data from the local system, the receivers commonly get data from other systems through some network technology like HTTP or NATS. The idea is to keep the current setup but send the data to a CC Metric Collector, which forwards it to the destination system (if a sink exists for it). For most setups, the receivers are not required and the receiver config file should contain only an empty JSON map (`{}`).
```json
{
"nats_rack0": {
"type": "nats",
"address" : "nats-server.example.org",
"port" : "4222",
"subject" : "rack0",
},
"nats_rack1": {
"type": "nats",
"address" : "nats-server.example.org",
"port" : "4222",
"subject" : "rack1",
}
}
```
This example configuration creates two receivers with the names `nats_rack0` and `nats_rack1`. While one subscribes to metrics published with the `rack0` subject, the other one subscribes to the `rack1` subject. The NATS server is the same, as it manages all subjects in a subnet. (As an example, the router could add the tags `rack=0` and `rack=1` respectively to the received metrics.)
All types and possible receiver-specific configuration options can be found [here](../receivers/README.md).

docs/introduction.md Normal file

@@ -0,0 +1,23 @@
# The ClusterCockpit Project
The ClusterCockpit project is a joint project of computing centers in Europe to set up a cluster monitoring stack for small to mid-sized computing centers under the lead of NHR@FAU.
# The ClusterCockpit Stack
In a cluster environment, there are commonly many systems dedicated to computation, backend servers for file systems, and frontend servers for user interaction and cluster control. The ClusterCockpit Stack is mainly used for monitoring the compute systems, with some interaction with the frontend servers. It consists of multiple components:
- cc-metric-collector: Monitor resource usage on the compute systems
- cc-metric-store: In-memory database
- cc-backend & cc-frontend: The web-based visualizer
# CC Metric Collector
The CC Metric Collector project was started to provide a useful set of metrics for HPC and data-science-related compute systems. It runs as a system daemon and periodically gathers system data, forwarding the metrics to one or more databases. One of the provided backends can be used for the cc-metric-store, but many others exist, like InfluxDB time-series databases, the Ganglia Monitoring System or the Prometheus Monitoring System.
The data is gathered by so-called "Collectors" and forwarded to an internal router for on-the-fly manipulation (tagging, aggregation, ...), which pushes the metrics to the different metric writers called "Sinks". There is a fourth component, the "Receivers", which can receive data at any time through some networking system like an HTTP server.
# CC Metric Store
The CC Metric Store is a data management system with short-term in-memory and long-term file-based metric storage.
# CC Backend and CC Frontend
The CC Backend and the CC Frontend together form the web interface for ClusterCockpit.

go.mod

@@ -1,19 +1,48 @@
module github.com/ClusterCockpit/cc-metric-collector
go 1.16
go 1.23.4
toolchain go1.23.7
require (
github.com/NVIDIA/go-nvml v0.11.1-0
github.com/influxdata/influxdb-client-go/v2 v2.7.0
github.com/ClusterCockpit/cc-lib v0.1.1
github.com/ClusterCockpit/go-rocm-smi v0.3.0
github.com/NVIDIA/go-nvml v0.12.0-2
github.com/PaesslerAG/gval v1.2.2
github.com/fsnotify/fsnotify v1.7.0
github.com/gorilla/mux v1.8.1
github.com/influxdata/influxdb-client-go/v2 v2.14.0
github.com/influxdata/line-protocol v0.0.0-20210922203350-b1ad95c89adf
github.com/nats-io/nats.go v1.13.1-0.20211122170419-d7c1d78a50fc
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9
gopkg.in/Knetic/govaluate.v2 v2.3.0
github.com/influxdata/line-protocol/v2 v2.2.1
github.com/nats-io/nats.go v1.39.0
github.com/prometheus/client_golang v1.20.5
github.com/stmcginnis/gofish v0.15.0
github.com/tklauser/go-sysconf v0.3.13
golang.design/x/thread v0.0.0-20210122121316-335e9adffdf1
golang.org/x/exp v0.0.0-20250215185904-eff6e970281f
golang.org/x/sys v0.30.0
)
require (
github.com/PaesslerAG/gval v1.1.2
github.com/golang/protobuf v1.5.2 // indirect
github.com/nats-io/nats-server/v2 v2.7.0 // indirect
google.golang.org/protobuf v1.27.1 // indirect
github.com/ClusterCockpit/cc-backend v1.4.2 // indirect
github.com/ClusterCockpit/cc-units v0.4.0 // indirect
github.com/apapsch/go-jsonmerge/v2 v2.0.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/expr-lang/expr v1.17.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nats-io/nkeys v0.4.9 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
github.com/oapi-codegen/runtime v1.1.1 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.55.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/santhosh-tekuri/jsonschema/v5 v5.3.1 // indirect
github.com/shopspring/decimal v1.3.1 // indirect
github.com/tklauser/numcpus v0.7.0 // indirect
golang.org/x/crypto v0.35.0 // indirect
golang.org/x/net v0.36.0 // indirect
google.golang.org/protobuf v1.35.2 // indirect
)

go.sum

@@ -1,145 +1,139 @@
github.com/NVIDIA/go-nvml v0.11.1-0 h1:XHSz3zZKC4NCP2ja1rI7++DXFhA+uDhdYa3MykCTGHY=
github.com/NVIDIA/go-nvml v0.11.1-0/go.mod h1:hy7HYeQy335x6nEss0Ne3PYqleRa6Ct+VKD9RQ4nyFs=
github.com/PaesslerAG/gval v1.1.2 h1:EROKxV4/fAKWb0Qoj7NOxmHZA7gcpjOV9XgiRZMRCUU=
github.com/PaesslerAG/gval v1.1.2/go.mod h1:Fa8gfkCmUsELXgayr8sfL/sw+VzCVoa03dcOcR/if2w=
github.com/ClusterCockpit/cc-backend v1.4.2 h1:kTOzqkh9N0564N9nqQThnSs7TAfg8RLgvSm00e5HtIc=
github.com/ClusterCockpit/cc-backend v1.4.2/go.mod h1:g8TNHXe4AXej26snu2//jO3mUF980elT93iV/k11O/c=
github.com/ClusterCockpit/cc-lib v0.1.0-beta.1 h1:dz9j0g2cod8+SMDjuoIY6ISpiHHeekhX6yQaeiwiwJw=
github.com/ClusterCockpit/cc-lib v0.1.0-beta.1/go.mod h1:kXMskla1i5ZSfXW0vVRIHgGeXMU5zu2PzYOYnUaOr80=
github.com/ClusterCockpit/cc-lib v0.1.1 h1:AXZWYUzgTaE/WdxLNSWPR7FJoA5WlzvYZxw4gIw3gNw=
github.com/ClusterCockpit/cc-lib v0.1.1/go.mod h1:SHKcWW/+kN+pcofAtHJFxvmx1FV0VIJuQv5PuT0HDcc=
github.com/ClusterCockpit/cc-units v0.4.0 h1:zP5DOu99GmErW0tCDf0gcLrlWt42RQ9dpoONEOh4cI0=
github.com/ClusterCockpit/cc-units v0.4.0/go.mod h1:3S3PAhAayS3pbgcT4q9Vn9VJw22Op51X0YimtG77zBw=
github.com/ClusterCockpit/go-rocm-smi v0.3.0 h1:1qZnSpG7/NyLtc7AjqnUL9Jb8xtqG1nMVgp69rJfaR8=
github.com/ClusterCockpit/go-rocm-smi v0.3.0/go.mod h1:+I3UMeX3OlizXDf1WpGD43W4KGZZGVSGmny6rTeOnWA=
github.com/NVIDIA/go-nvml v0.11.6-0/go.mod h1:hy7HYeQy335x6nEss0Ne3PYqleRa6Ct+VKD9RQ4nyFs=
github.com/NVIDIA/go-nvml v0.12.0-2 h1:Sg239yy7jmopu/cuvYauoMj9fOpcGMngxVxxS1EBXeY=
github.com/NVIDIA/go-nvml v0.12.0-2/go.mod h1:7ruy85eOM73muOc/I37euONSwEyFqZsv5ED9AogD4G0=
github.com/PaesslerAG/gval v1.2.2 h1:Y7iBzhgE09IGTt5QgGQ2IdaYYYOU134YGHBThD+wm9E=
github.com/PaesslerAG/gval v1.2.2/go.mod h1:XRFLwvmkTEdYziLdaCeCa5ImcGVrfQbeNUbVR+C6xac=
github.com/PaesslerAG/jsonpath v0.1.0 h1:gADYeifvlqK3R3i2cR5B4DGgxLXIPb3TRTH1mGi0jPI=
github.com/PaesslerAG/jsonpath v0.1.0/go.mod h1:4BzmtoM/PI8fPO4aQGIusjGxGir2BzcV0grWtFzq1Y8=
github.com/cyberdelia/templates v0.0.0-20141128023046-ca7fffd4298c/go.mod h1:GyV+0YP4qX0UQ7r2MoYZ+AvYDp12OF5yg4q8rGnyNh4=
github.com/RaveNoX/go-jsoncommentstrip v1.0.0/go.mod h1:78ihd09MekBnJnxpICcwzCMzGrKSKYe4AqU6PDYYpjk=
github.com/apapsch/go-jsonmerge/v2 v2.0.0 h1:axGnT1gRIfimI7gJifB699GoE/oq+F2MU7Dml6nw9rQ=
github.com/apapsch/go-jsonmerge/v2 v2.0.0/go.mod h1:lvDnEdqiQrp0O42VQGgmlKpxL1AP2+08jFMw88y4klk=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bmatcuk/doublestar v1.1.1/go.mod h1:UD6OnuiIn0yFxxA2le/rnRU1G4RaI4UvFv1sNto9p6w=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/deepmap/oapi-codegen v1.8.2 h1:SegyeYGcdi0jLLrpbCMoJxnUUn8GBXHsvr4rbzjuhfU=
github.com/deepmap/oapi-codegen v1.8.2/go.mod h1:YLgSKSDv/bZQB7N4ws6luhozi3cEdRktEqrX88CvjIw=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/getkin/kin-openapi v0.61.0/go.mod h1:7Yn5whZr5kJi6t+kShccXS8ae1APpYTW6yheSwk8Yi4=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-chi/chi/v5 v5.0.0/go.mod h1:BBug9lr0cqtdAhsu6R4AAdvufI0/XBzAQSsUqJpoZOs=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golangci/lint-1 v0.0.0-20181222135242-d2cdd8c08219/go.mod h1:/X8TswGSh1pIozq4ZwCfxS0WA5JGXguxk94ar/4c87Y=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/expr-lang/expr v1.16.9 h1:WUAzmR0JNI9JCiF0/ewwHB1gmcGw5wW7nWt8gc6PpCI=
github.com/expr-lang/expr v1.16.9/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
github.com/expr-lang/expr v1.17.0 h1:+vpszOyzKLQXC9VF+wA8cVA0tlA984/Wabc/1hF9Whg=
github.com/expr-lang/expr v1.17.0/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
github.com/frankban/quicktest v1.11.0/go.mod h1:K+q6oSqb0W0Ininfk863uOk1lMy69l/P6txr3mVT54s=
github.com/frankban/quicktest v1.11.2/go.mod h1:K+q6oSqb0W0Ininfk863uOk1lMy69l/P6txr3mVT54s=
github.com/frankban/quicktest v1.13.0 h1:yNZif1OkDfNoDfb9zZa9aXIpejNR4F23Wely0c+Qdqk=
github.com/frankban/quicktest v1.13.0/go.mod h1:qLE0fzW0VuyUAJgPU19zByoIr0HtCHN/r/VLSOOIySU=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/influxdata/influxdb-client-go/v2 v2.7.0 h1:QgP5mlBE9sGnzplpnf96pr+p7uqlIlL4W2GAP3n+XZg=
github.com/influxdata/influxdb-client-go/v2 v2.7.0/go.mod h1:Y/0W1+TZir7ypoQZYd2IrnVOKB3Tq6oegAQeSVN/+EU=
github.com/influxdata/line-protocol v0.0.0-20200327222509-2487e7298839/go.mod h1:xaLFMmpvUxqXtVkUJfg9QmT88cDaCJ3ZKgdZ78oO8Qo=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/influxdata/influxdb-client-go/v2 v2.14.0 h1:AjbBfJuq+QoaXNcrova8smSjwJdUHnwvfjMF71M1iI4=
github.com/influxdata/influxdb-client-go/v2 v2.14.0/go.mod h1:Ahpm3QXKMJslpXl3IftVLVezreAUtBOTZssDrjZEFHI=
github.com/influxdata/line-protocol v0.0.0-20210922203350-b1ad95c89adf h1:7JTmneyiNEwVBOHSjoMxiWAqB992atOeepeFYegn5RU=
github.com/influxdata/line-protocol v0.0.0-20210922203350-b1ad95c89adf/go.mod h1:xaLFMmpvUxqXtVkUJfg9QmT88cDaCJ3ZKgdZ78oO8Qo=
github.com/klauspost/compress v1.13.4 h1:0zhec2I8zGnjWcKyLl6i3gPqKANCCn5e9xmviEEeX6s=
github.com/klauspost/compress v1.13.4/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/influxdata/line-protocol-corpus v0.0.0-20210519164801-ca6fa5da0184/go.mod h1:03nmhxzZ7Xk2pdG+lmMd7mHDfeVOYFyhOgwO61qWU98=
github.com/influxdata/line-protocol-corpus v0.0.0-20210922080147-aa28ccfb8937 h1:MHJNQ+p99hFATQm6ORoLmpUCF7ovjwEFshs/NHzAbig=
github.com/influxdata/line-protocol-corpus v0.0.0-20210922080147-aa28ccfb8937/go.mod h1:BKR9c0uHSmRgM/se9JhFHtTT7JTO67X23MtKMHtZcpo=
github.com/influxdata/line-protocol/v2 v2.0.0-20210312151457-c52fdecb625a/go.mod h1:6+9Xt5Sq1rWx+glMgxhcg2c0DUaehK+5TDcPZ76GypY=
github.com/influxdata/line-protocol/v2 v2.1.0/go.mod h1:QKw43hdUBg3GTk2iC3iyCxksNj7PX9aUSeYOYE/ceHY=
github.com/influxdata/line-protocol/v2 v2.2.1 h1:EAPkqJ9Km4uAxtMRgUubJyqAr6zgWM0dznKMLRauQRE=
github.com/influxdata/line-protocol/v2 v2.2.1/go.mod h1:DmB3Cnh+3oxmG6LOBIxce4oaL4CPj3OmMPgvauXh+tM=
github.com/juju/gnuflag v0.0.0-20171113085948-2ce1bb71843d/go.mod h1:2PavIy+JPciBPrBUjwbNvtwB6RQlve+hkpll6QSNmOE=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/labstack/echo/v4 v4.2.1/go.mod h1:AA49e0DZ8kk5jTOOCKNuPR6oTnBS0dYiM4FW1e6jwpg=
github.com/labstack/gommon v0.3.0/go.mod h1:MULnywXg0yavhxWKc+lOruYdAhDwPK9wf0OL7NoOu+k=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/matryer/moq v0.0.0-20190312154309-6cfb0558e1bd/go.mod h1:9ELz6aaclSIGnZBoaSLZ3NAl1VTufbOrXBPvtcy6WiQ=
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.7/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/minio/highwayhash v1.0.1 h1:dZ6IIu8Z14VlC0VpfKofAhCy74wu/Qb5gcn52yWoz/0=
github.com/minio/highwayhash v1.0.1/go.mod h1:BQskDq+xkJ12lmlUUi7U0M5Swg3EWR+dLTk+kldvVxY=
github.com/nats-io/jwt/v2 v2.2.1-0.20220113022732-58e87895b296 h1:vU9tpM3apjYlLLeY23zRWJ9Zktr5jp+mloR942LEOpY=
github.com/nats-io/jwt/v2 v2.2.1-0.20220113022732-58e87895b296/go.mod h1:0tqz9Hlu6bCBFLWAASKhE5vUA4c24L9KPUUgvwumE/k=
github.com/nats-io/nats-server/v2 v2.7.0 h1:UpqcAM93FI7AHlCyI2FD5QcV3QuHNCauQF2LBVU0238=
github.com/nats-io/nats-server/v2 v2.7.0/go.mod h1:cjxtMhZsZovK1XS2iiapCduR8HuqB/YpFamL0qntIcw=
github.com/nats-io/nats.go v1.13.1-0.20211122170419-d7c1d78a50fc h1:SHr4MUUZJ/fAC0uSm2OzWOJYsHpapmR86mpw7q1qPXU=
github.com/nats-io/nats.go v1.13.1-0.20211122170419-d7c1d78a50fc/go.mod h1:BPko4oXsySz4aSWeFgOHLZs3G4Jq4ZAyE6/zMCxRT6w=
github.com/nats-io/nkeys v0.3.0 h1:cgM5tL53EvYRU+2YLXIK0G2mJtK12Ft9oeooSZMA2G8=
github.com/nats-io/nkeys v0.3.0/go.mod h1:gvUNGjVcM2IPr5rCsRsC6Wb3Hr2CQAm08dsxtV6A5y4=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/nats-io/nats.go v1.39.0 h1:2/yg2JQjiYYKLwDuBzV0FbB2sIV+eFNkEevlRi4n9lI=
github.com/nats-io/nats.go v1.39.0/go.mod h1:MgRb8oOdigA6cYpEPhXJuRVH6UE/V4jblJ2jQ27IXYM=
github.com/nats-io/nkeys v0.4.9 h1:qe9Faq2Gxwi6RZnZMXfmGMZkg3afLLOtrU+gDZJ35b0=
github.com/nats-io/nkeys v0.4.9/go.mod h1:jcMqs+FLG+W5YO36OX6wFIFcmpdAns+w1Wm6D3I/evE=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/oapi-codegen/runtime v1.1.1 h1:EXLHh0DXIJnWhdRPN2w4MXAzFyE4CskzhNLUmtpMYro=
github.com/oapi-codegen/runtime v1.1.1/go.mod h1:SK9X900oXmPWilYR5/WKPzt3Kqxn/uS/+lbpREv+eCg=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc=
github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/santhosh-tekuri/jsonschema/v5 v5.3.1 h1:lZUw3E0/J3roVtGQ+SCrUrg3ON6NgVqpn3+iol9aGu4=
github.com/santhosh-tekuri/jsonschema/v5 v5.3.1/go.mod h1:uToXkOrWAZ6/Oc07xWQrPOhJotwFIyu2bBVN41fcDUY=
github.com/shopspring/decimal v1.3.1 h1:2Usl1nmF/WZucqkFZhnfFYxxxu8LG21F6nPQBE5gKV8=
github.com/shopspring/decimal v1.3.1/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
github.com/spkg/bom v0.0.0-20160624110644-59b7046e48ad/go.mod h1:qLr4V1qq6nMqFKkMo8ZTx3f+BZEkzsRUY10Xsm2mwU0=
github.com/stmcginnis/gofish v0.15.0 h1:8TG41+lvJk/0Nf8CIIYErxbMlQUy80W0JFRZP3Ld82A=
github.com/stmcginnis/gofish v0.15.0/go.mod h1:BLDSFTp8pDlf/xDbLZa+F7f7eW0E/CHCboggsu8CznI=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasttemplate v1.0.1/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPUpymEIMZ47gx8=
github.com/valyala/fasttemplate v1.2.1/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210314154223-e6e6c4f2bb5b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce h1:Roh6XWxHFKrPgC/EQhVubSAGQ6Ozk6IdxHSzt1mR0EI=
golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 h1:CIJ76btIcR3eFI5EgSo6k1qKw9KJexJuRLI9G7Hp5wE=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190130150945-aca44879d564/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200826173525-f9321e4c35a6/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220111092808-5a964db01320/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 h1:XfKQ4OlFl8okEOr5UvAqFRVj8pY/4yfcXrddB8qAbU0=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20211116232009-f0f3c7e86c11 h1:GZokNIeuVkl3aZHJchRrr13WCsols02MLUcz1U9is6M=
golang.org/x/time v0.0.0-20211116232009-f0f3c7e86c11/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tklauser/go-sysconf v0.3.13 h1:GBUpcahXSpR2xN01jhkNAbTLRk2Yzgggk8IM08lq3r4=
github.com/tklauser/go-sysconf v0.3.13/go.mod h1:zwleP4Q4OehZHGn4CYZDipCgg9usW5IJePewFCGVEa0=
github.com/tklauser/numcpus v0.7.0 h1:yjuerZP127QG9m5Zh/mSO4wqurYil27tHrqwRoRjpr4=
github.com/tklauser/numcpus v0.7.0/go.mod h1:bb6dMVcj8A42tSE7i32fsIUCbQNllK5iDguyOZRUzAY=
golang.design/x/thread v0.0.0-20210122121316-335e9adffdf1 h1:P7S/GeHBAFEZIYp0ePPs2kHXoazz8q2KsyxHyQVGCJg=
golang.design/x/thread v0.0.0-20210122121316-335e9adffdf1/go.mod h1:9CWpnTUmlQkfdpdutA1nNf4iE5lAVt3QZOu0Z6hahBE=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.35.0 h1:b15kiHdrGCHrP6LvwaQ3c03kgNhhiMgvlhxHQhmg2Xs=
golang.org/x/crypto v0.35.0/go.mod h1:dy7dXNW32cAb/6/PRuTNsix8T+vJAqvuIy5Bli/x0YQ=
golang.org/x/exp v0.0.0-20250215185904-eff6e970281f h1:oFMYAjX0867ZD2jcNiLBrI9BdpmEkvPyi5YrBGXbamg=
golang.org/x/exp v0.0.0-20250215185904-eff6e970281f/go.mod h1:BHOTPb3L19zxehTsLoJXVaTktb06DFgmdW6Wb9s8jqk=
golang.org/x/net v0.31.0 h1:68CPQngjLL0r2AlUKiSxtQFKvzRVbnzLwMUn5SzcLHo=
golang.org/x/net v0.31.0/go.mod h1:P4fl1q7dY2hnZFxEk4pPSkDHF+QqjitcnDjUQyMM+pM=
golang.org/x/net v0.36.0 h1:vWF2fRbw4qslQsQzgFqZff+BItCvGFQqKzKIzx1rmoA=
golang.org/x/net v0.36.0/go.mod h1:bFmbeoIPfrw4sMHNhb4J9f6+tPziuGjq7Jk/38fxi1I=
golang.org/x/sys v0.0.0-20210122093101-04d7465088b8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/Knetic/govaluate.v2 v2.3.0 h1:naJVc9CZlWA8rC8f5mvECJD7jreTrn7FvGXjBthkHJQ=
gopkg.in/Knetic/govaluate.v2 v2.3.0/go.mod h1:NW0gr10J8s7aNghEg6uhdxiEaBvc0+8VgJjVViHUKp4=
google.golang.org/protobuf v1.35.2 h1:8Ar7bF+apOIoThw1EdZl0p1oWvMqTHmpA2fRTyZO8io=
google.golang.org/protobuf v1.35.2/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


@@ -1,113 +0,0 @@
package cclogger
import (
"fmt"
"log"
"os"
"runtime"
)
var (
globalDebug = false
stdout = os.Stdout
stderr = os.Stderr
debugLog *log.Logger = nil
infoLog *log.Logger = nil
errorLog *log.Logger = nil
warnLog *log.Logger = nil
defaultLog *log.Logger = nil
)
func initLogger() {
if debugLog == nil {
debugLog = log.New(stderr, "DEBUG ", log.LstdFlags)
}
if infoLog == nil {
infoLog = log.New(stdout, "INFO ", log.LstdFlags)
}
if errorLog == nil {
errorLog = log.New(stderr, "ERROR ", log.LstdFlags)
}
if warnLog == nil {
warnLog = log.New(stderr, "WARN ", log.LstdFlags)
}
if defaultLog == nil {
defaultLog = log.New(stdout, "", log.LstdFlags)
}
}
func Print(e ...interface{}) {
initLogger()
defaultLog.Print(e...)
}
func ComponentPrint(component string, e ...interface{}) {
initLogger()
defaultLog.Print(fmt.Sprintf("[%s] ", component), e)
}
func Info(e ...interface{}) {
initLogger()
infoLog.Print(e...)
}
func ComponentInfo(component string, e ...interface{}) {
initLogger()
infoLog.Print(fmt.Sprintf("[%s] ", component), e)
}
func Debug(e ...interface{}) {
initLogger()
if globalDebug {
debugLog.Print(e...)
}
}
func ComponentDebug(component string, e ...interface{}) {
initLogger()
if globalDebug && debugLog != nil {
//CCComponentPrint(debugLog, component, e)
debugLog.Print(fmt.Sprintf("[%s] ", component), e)
}
}
func Error(e ...interface{}) {
initLogger()
_, fn, line, _ := runtime.Caller(1)
errorLog.Print(fmt.Sprintf("[%s:%d] ", fn, line), e)
}
func ComponentError(component string, e ...interface{}) {
initLogger()
_, fn, line, _ := runtime.Caller(1)
errorLog.Print(fmt.Sprintf("[%s|%s:%d] ", component, fn, line), e)
}
func SetDebug() {
globalDebug = true
initLogger()
}
func SetOutput(filename string) {
if filename == "stderr" {
if stderr != os.Stderr && stderr != os.Stdout {
stderr.Close()
}
stderr = os.Stderr
} else if filename == "stdout" {
if stderr != os.Stderr && stderr != os.Stdout {
stderr.Close()
}
stderr = os.Stdout
} else {
file, err := os.OpenFile(filename, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0600)
if err == nil {
defer file.Close()
stderr = file
}
}
debugLog = nil
errorLog = nil
warnLog = nil
initLogger()
}


@@ -1,32 +0,0 @@
# ClusterCockpit metrics
As described in the [ClusterCockpit specifications](https://github.com/ClusterCockpit/cc-specifications), the whole ClusterCockpit stack uses metrics in the InfluxDB line protocol format. This is also the input and output format for the ClusterCockpit Metric Collector but internally it uses an extended format while processing, named CCMetric.
It is basically a copy of the [InfluxDB line protocol](https://github.com/influxdata/line-protocol) `MutableMetric` interface with one extension. Besides the tags and fields, it contains a list of meta information (re-using the `Tag` structure of the original protocol):
```golang
type ccMetric struct {
name string // same as
tags []*influx.Tag // original
fields []*influx.Field // Influx
tm time.Time // line-protocol
meta []*influx.Tag
}
type CCMetric interface {
influx.MutableMetric // the same functions as defined by influx.MutableMetric
RemoveTag(key string) // this is not published by the original influx.MutableMetric
Meta() map[string]string
MetaList() []*influx.Tag
AddMeta(key, value string)
HasMeta(key string) bool
GetMeta(key string) (string, bool)
RemoveMeta(key string)
}
```
The `CCMetric` interface provides the same functions as the `MutableMetric` like `{Add, Remove, Has}{Tag, Field}` and additionally provides `{Add, Remove, Has}Meta`.
The InfluxDB protocol creates a new metric with `influx.New(name, tags, fields, time)` while CCMetric uses `ccMetric.New(name, tags, meta, fields, time)` where `tags` and `meta` are both of type `map[string]string`.
You can copy a CCMetric with `FromMetric(other CCMetric) CCMetric`. If you get an `influx.Metric` from a function, like the line protocol parser, you can use `FromInfluxMetric(other influx.Metric) CCMetric` to get a CCMetric out of it (see `NatsReceiver` for an example).
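For illustration, a minimal sketch of using this interface, based on the constructor and the meta accessors listed above. Note that the `ccMetric` package lives under `internal/`, so such code only compiles inside the collector module itself, and this changeset replaces the package with cc-lib's `ccMessage`:
```golang
package main

import (
	"fmt"
	"time"

	lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
)

func main() {
	tags := map[string]string{"hostname": "node001", "type": "node"}
	meta := map[string]string{"source": "ExampleCollector"}
	fields := map[string]interface{}{"value": 42.0}

	// New copies tags, meta data tags and fields into the new CCMetric.
	m, err := lp.New("example_metric", tags, meta, fields, time.Now())
	if err != nil {
		fmt.Println(err)
		return
	}

	// Meta information is handled separately from the tags.
	m.AddMeta("unit", "degC")
	if unit, ok := m.GetMeta("unit"); ok {
		fmt.Println(m.Name(), "is reported in", unit)
	}
}
```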


@@ -1,371 +0,0 @@
package ccmetric
import (
"fmt"
"time"
influxdb2 "github.com/influxdata/influxdb-client-go/v2"
write "github.com/influxdata/influxdb-client-go/v2/api/write"
lp "github.com/influxdata/line-protocol" // MIT license
)
// Most functions are derived from github.com/influxdata/line-protocol/metric.go
// The metric type is extended with an extra meta information list re-using the Tag
// type.
//
// See: https://docs.influxdata.com/influxdb/latest/reference/syntax/line-protocol/
type ccMetric struct {
name string // Measurement name
meta map[string]string // map of meta data tags
tags map[string]string // map of tags
fields map[string]interface{} // map of fields
tm time.Time // timestamp
}
// ccMetric access functions
type CCMetric interface {
ToPoint(metaAsTags bool) *write.Point // Generate influxDB point for data type ccMetric
ToLineProtocol(metaAsTags bool) string // Generate influxDB line protocol for data type ccMetric
Name() string // Get metric name
SetName(name string) // Set metric name
Time() time.Time // Get timestamp
SetTime(t time.Time) // Set timestamp
Tags() map[string]string // Map of tags
AddTag(key, value string) // Add a tag
GetTag(key string) (value string, ok bool) // Get a tag by its key
HasTag(key string) (ok bool) // Check if a tag key is present
RemoveTag(key string) // Remove a tag by its key
Meta() map[string]string // Map of meta data tags
AddMeta(key, value string) // Add a meta data tag
GetMeta(key string) (value string, ok bool) // Get a meta data tag addressed by its key
HasMeta(key string) (ok bool) // Check if a meta data key is present
RemoveMeta(key string) // Remove a meta data tag by its key
Fields() map[string]interface{} // Map of fields
AddField(key string, value interface{}) // Add a field
GetField(key string) (value interface{}, ok bool) // Get a field addressed by its key
HasField(key string) (ok bool) // Check if a field key is present
RemoveField(key string) // Remove a field addressed by its key
}
// String implements the stringer interface for data type ccMetric
func (m *ccMetric) String() string {
return fmt.Sprintf(
"Name: %s, Tags: %+v, Meta: %+v, fields: %+v, Timestamp: %d",
m.name, m.tags, m.meta, m.fields, m.tm.UnixNano(),
)
}
// ToLineProtocol generates influxDB line protocol for data type ccMetric
func (m *ccMetric) ToPoint(metaAsTags bool) (p *write.Point) {
if !metaAsTags {
p = influxdb2.NewPoint(m.name, m.tags, m.fields, m.tm)
} else {
tags := make(map[string]string, len(m.tags)+len(m.meta))
for key, value := range m.tags {
tags[key] = value
}
for key, value := range m.meta {
tags[key] = value
}
p = influxdb2.NewPoint(m.name, tags, m.fields, m.tm)
}
return
}
// ToLineProtocol generates influxDB line protocol for data type ccMetric
func (m *ccMetric) ToLineProtocol(metaAsTags bool) string {
return write.PointToLineProtocol(
m.ToPoint(metaAsTags),
time.Nanosecond,
)
}
// Name returns the measurement name
func (m *ccMetric) Name() string {
return m.name
}
// SetName sets the measurement name
func (m *ccMetric) SetName(name string) {
m.name = name
}
// Time returns timestamp
func (m *ccMetric) Time() time.Time {
return m.tm
}
// SetTime sets the timestamp
func (m *ccMetric) SetTime(t time.Time) {
m.tm = t
}
// Tags returns the list of tags as a key-value mapping
func (m *ccMetric) Tags() map[string]string {
return m.tags
}
// AddTag adds a tag (consisting of key and value) to the map of tags
func (m *ccMetric) AddTag(key, value string) {
m.tags[key] = value
}
// GetTag returns the tag with tag's key equal to <key>
func (m *ccMetric) GetTag(key string) (string, bool) {
value, ok := m.tags[key]
return value, ok
}
// HasTag checks if a tag with key equal to <key> is present in the list of tags
func (m *ccMetric) HasTag(key string) bool {
_, ok := m.tags[key]
return ok
}
// RemoveTag removes the tag with tag's key equal to <key>
func (m *ccMetric) RemoveTag(key string) {
delete(m.tags, key)
}
// Meta returns the meta data tags as key-value mapping
func (m *ccMetric) Meta() map[string]string {
return m.meta
}
// AddMeta adds a meta data tag (consisting of key and value) to the map of meta data tags
func (m *ccMetric) AddMeta(key, value string) {
m.meta[key] = value
}
// GetMeta returns the meta data tag with meta data's key equal to <key>
func (m *ccMetric) GetMeta(key string) (string, bool) {
value, ok := m.meta[key]
return value, ok
}
// HasMeta checks if a meta data tag with meta data's key equal to <key> is present in the map of meta data tags
func (m *ccMetric) HasMeta(key string) bool {
_, ok := m.meta[key]
return ok
}
// RemoveMeta removes the meta data tag with tag's key equal to <key>
func (m *ccMetric) RemoveMeta(key string) {
delete(m.meta, key)
}
// Fields returns the list of fields as key-value-mapping
func (m *ccMetric) Fields() map[string]interface{} {
return m.fields
}
// AddField adds a field (consisting of key and value) to the map of fields
func (m *ccMetric) AddField(key string, value interface{}) {
m.fields[key] = value
}
// GetField returns the field with field's key equal to <key>
func (m *ccMetric) GetField(key string) (interface{}, bool) {
v, ok := m.fields[key]
return v, ok
}
// HasField checks if a field with field's key equal to <key> is present in the map of fields
func (m *ccMetric) HasField(key string) bool {
_, ok := m.fields[key]
return ok
}
// RemoveField removes the field with field's key equal to <key>
// from the map of fields
func (m *ccMetric) RemoveField(key string) {
delete(m.fields, key)
}
// New creates a new measurement point
func New(
name string,
tags map[string]string,
meta map[string]string,
fields map[string]interface{},
tm time.Time,
) (CCMetric, error) {
m := &ccMetric{
name: name,
tags: make(map[string]string, len(tags)),
meta: make(map[string]string, len(meta)),
fields: make(map[string]interface{}, len(fields)),
tm: tm,
}
// deep copy tags, meta data tags and fields
for k, v := range tags {
m.tags[k] = v
}
for k, v := range meta {
m.meta[k] = v
}
for k, v := range fields {
v := convertField(v)
if v == nil {
continue
}
m.fields[k] = v
}
return m, nil
}
// FromMetric copies the metric <other>
func FromMetric(other ccMetric) CCMetric {
m := &ccMetric{
name: other.Name(),
tags: make(map[string]string, len(other.tags)),
meta: make(map[string]string, len(other.meta)),
fields: make(map[string]interface{}, len(other.fields)),
tm: other.Time(),
}
// deep copy tags, meta data tags and fields
for key, value := range other.tags {
m.tags[key] = value
}
for key, value := range other.meta {
m.meta[key] = value
}
for key, value := range other.fields {
m.fields[key] = value
}
return m
}
// FromInfluxMetric copies the influxDB line protocol metric <other>
func FromInfluxMetric(other lp.Metric) CCMetric {
m := &ccMetric{
name: other.Name(),
tags: make(map[string]string),
meta: make(map[string]string),
fields: make(map[string]interface{}),
tm: other.Time(),
}
// deep copy tags and fields
for _, otherTag := range other.TagList() {
m.tags[otherTag.Key] = otherTag.Value
}
for _, otherField := range other.FieldList() {
m.fields[otherField.Key] = otherField.Value
}
return m
}
// convertField converts data types of fields by the following schemata:
// *float32, *float64, float32, float64 -> float64
// *int, *int8, *int16, *int32, *int64, int, int8, int16, int32, int64 -> int64
// *uint, *uint8, *uint16, *uint32, *uint64, uint, uint8, uint16, uint32, uint64 -> uint64
// *[]byte, *string, []byte, string -> string
// *bool, bool -> bool
func convertField(v interface{}) interface{} {
switch v := v.(type) {
case float64:
return v
case int64:
return v
case string:
return v
case bool:
return v
case int:
return int64(v)
case uint:
return uint64(v)
case uint64:
return uint64(v)
case []byte:
return string(v)
case int32:
return int64(v)
case int16:
return int64(v)
case int8:
return int64(v)
case uint32:
return uint64(v)
case uint16:
return uint64(v)
case uint8:
return uint64(v)
case float32:
return float64(v)
case *float64:
if v != nil {
return *v
}
case *int64:
if v != nil {
return *v
}
case *string:
if v != nil {
return *v
}
case *bool:
if v != nil {
return *v
}
case *int:
if v != nil {
return int64(*v)
}
case *uint:
if v != nil {
return uint64(*v)
}
case *uint64:
if v != nil {
return uint64(*v)
}
case *[]byte:
if v != nil {
return string(*v)
}
case *int32:
if v != nil {
return int64(*v)
}
case *int16:
if v != nil {
return int64(*v)
}
case *int8:
if v != nil {
return int64(*v)
}
case *uint32:
if v != nil {
return uint64(*v)
}
case *uint16:
if v != nil {
return uint64(*v)
}
case *uint8:
if v != nil {
return uint64(*v)
}
case *float32:
if v != nil {
return float64(*v)
}
default:
return nil
}
return nil
}


@@ -1,422 +0,0 @@
package ccTopology
import (
"fmt"
"io/ioutil"
"log"
"os"
"path/filepath"
"regexp"
"strconv"
"strings"
cclogger "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
)
const SYSFS_NUMABASE = `/sys/devices/system/node`
const SYSFS_CPUBASE = `/sys/devices/system/cpu`
const PROCFS_CPUINFO = `/proc/cpuinfo`
// intArrayContains scans an array of ints if the value str is present in the array
// If the specified value is found, the corresponding array index is returned.
// The bool value is used to signal success or failure
func intArrayContains(array []int, str int) (int, bool) {
for i, a := range array {
if a == str {
return i, true
}
}
return -1, false
}
func fileToInt(path string) int {
buffer, err := ioutil.ReadFile(path)
if err != nil {
log.Print(err)
cclogger.ComponentError("ccTopology", "Reading", path, ":", err.Error())
return -1
}
sbuffer := strings.Replace(string(buffer), "\n", "", -1)
var id int64
//_, err = fmt.Scanf("%d", sbuffer, &id)
id, err = strconv.ParseInt(sbuffer, 10, 32)
if err != nil {
cclogger.ComponentError("ccTopology", "Parsing", path, ":", sbuffer, err.Error())
return -1
}
return int(id)
}
func SocketList() []int {
buffer, err := ioutil.ReadFile(string(PROCFS_CPUINFO))
if err != nil {
log.Print(err)
return nil
}
ll := strings.Split(string(buffer), "\n")
var packs []int
for _, line := range ll {
if strings.HasPrefix(line, "physical id") {
lv := strings.Fields(line)
id, err := strconv.ParseInt(lv[3], 10, 32)
if err != nil {
log.Print(err)
return packs
}
_, found := intArrayContains(packs, int(id))
if !found {
packs = append(packs, int(id))
}
}
}
return packs
}
func CpuList() []int {
buffer, err := ioutil.ReadFile(string(PROCFS_CPUINFO))
if err != nil {
log.Print(err)
return nil
}
ll := strings.Split(string(buffer), "\n")
cpulist := make([]int, 0)
for _, line := range ll {
if strings.HasPrefix(line, "processor") {
lv := strings.Fields(line)
id, err := strconv.ParseInt(lv[2], 10, 32)
if err != nil {
log.Print(err)
return cpulist
}
_, found := intArrayContains(cpulist, int(id))
if !found {
cpulist = append(cpulist, int(id))
}
}
}
return cpulist
}
func CoreList() []int {
buffer, err := ioutil.ReadFile(string(PROCFS_CPUINFO))
if err != nil {
log.Print(err)
return nil
}
ll := strings.Split(string(buffer), "\n")
corelist := make([]int, 0)
for _, line := range ll {
if strings.HasPrefix(line, "core id") {
lv := strings.Fields(line)
id, err := strconv.ParseInt(lv[3], 10, 32)
if err != nil {
log.Print(err)
return corelist
}
_, found := intArrayContains(corelist, int(id))
if !found {
corelist = append(corelist, int(id))
}
}
}
return corelist
}
func NumaNodeList() []int {
numaList := make([]int, 0)
globPath := filepath.Join(string(SYSFS_NUMABASE), "node*")
regexPath := filepath.Join(string(SYSFS_NUMABASE), "node(\\d+)")
regex := regexp.MustCompile(regexPath)
files, err := filepath.Glob(globPath)
if err != nil {
cclogger.ComponentError("CCTopology", "NumaNodeList", err.Error())
}
for _, f := range files {
if !regex.MatchString(f) {
continue
}
finfo, err := os.Lstat(f)
if err != nil {
continue
}
if !finfo.IsDir() {
continue
}
matches := regex.FindStringSubmatch(f)
if len(matches) == 2 {
id, err := strconv.Atoi(matches[1])
if err == nil {
if _, found := intArrayContains(numaList, id); !found {
numaList = append(numaList, id)
}
}
}
}
return numaList
}
func DieList() []int {
cpulist := CpuList()
dielist := make([]int, 0)
for _, c := range cpulist {
diepath := filepath.Join(string(SYSFS_CPUBASE), fmt.Sprintf("cpu%d", c), "topology/die_id")
dieid := fileToInt(diepath)
if dieid > 0 {
_, found := intArrayContains(dielist, int(dieid))
if !found {
dielist = append(dielist, int(dieid))
}
}
}
return dielist
}
type CpuEntry struct {
Cpuid int
SMT int
Core int
Socket int
Numadomain int
Die int
}
func CpuData() []CpuEntry {
fileToInt := func(path string) int {
buffer, err := ioutil.ReadFile(path)
if err != nil {
log.Print(err)
//cclogger.ComponentError("ccTopology", "Reading", path, ":", err.Error())
return -1
}
sbuffer := strings.Replace(string(buffer), "\n", "", -1)
var id int64
//_, err = fmt.Scanf("%d", sbuffer, &id)
id, err = strconv.ParseInt(sbuffer, 10, 32)
if err != nil {
cclogger.ComponentError("ccTopology", "Parsing", path, ":", sbuffer, err.Error())
return -1
}
return int(id)
}
getCore := func(basepath string) int {
return fileToInt(fmt.Sprintf("%s/core_id", basepath))
}
getSocket := func(basepath string) int {
return fileToInt(fmt.Sprintf("%s/physical_package_id", basepath))
}
getDie := func(basepath string) int {
return fileToInt(fmt.Sprintf("%s/die_id", basepath))
}
getSMT := func(cpuid int, basepath string) int {
buffer, err := ioutil.ReadFile(fmt.Sprintf("%s/thread_siblings_list", basepath))
if err != nil {
cclogger.ComponentError("CCTopology", "CpuData:getSMT", err.Error())
}
threadlist := make([]int, 0)
sbuffer := strings.Replace(string(buffer), "\n", "", -1)
for _, x := range strings.Split(sbuffer, ",") {
id, err := strconv.ParseInt(x, 10, 32)
if err != nil {
cclogger.ComponentError("CCTopology", "CpuData:getSMT", err.Error())
}
threadlist = append(threadlist, int(id))
}
for i, x := range threadlist {
if x == cpuid {
return i
}
}
return 1
}
getNumaDomain := func(basepath string) int {
globPath := filepath.Join(basepath, "node*")
regexPath := filepath.Join(basepath, "node(\\d+)")
regex := regexp.MustCompile(regexPath)
files, err := filepath.Glob(globPath)
if err != nil {
cclogger.ComponentError("CCTopology", "CpuData:getNumaDomain", err.Error())
}
for _, f := range files {
finfo, err := os.Lstat(f)
if err == nil && finfo.IsDir() {
matches := regex.FindStringSubmatch(f)
if len(matches) == 2 {
id, err := strconv.Atoi(matches[1])
if err == nil {
return id
}
}
}
}
return 0
}
clist := make([]CpuEntry, 0)
for _, c := range CpuList() {
clist = append(clist, CpuEntry{Cpuid: c})
}
for _, centry := range clist {
centry.Socket = -1
centry.Numadomain = -1
centry.Die = -1
centry.Core = -1
// Set base directory for topology lookup
cpustr := fmt.Sprintf("cpu%d", centry.Cpuid)
base := filepath.Join("/sys/devices/system/cpu", cpustr)
topoBase := filepath.Join(base, "topology")
// Lookup CPU core id
centry.Core = getCore(topoBase)
// Lookup CPU socket id
centry.Socket = getSocket(topoBase)
// Lookup CPU die id
centry.Die = getDie(topoBase)
if centry.Die < 0 {
centry.Die = centry.Socket
}
// Lookup SMT thread id
centry.SMT = getSMT(centry.Cpuid, topoBase)
// Lookup NUMA domain id
centry.Numadomain = getNumaDomain(base)
}
return clist
}
type CpuInformation struct {
NumHWthreads int
SMTWidth int
NumSockets int
NumDies int
NumCores int
NumNumaDomains int
}
func CpuInfo() CpuInformation {
var c CpuInformation
smtList := make([]int, 0)
numaList := make([]int, 0)
dieList := make([]int, 0)
socketList := make([]int, 0)
coreList := make([]int, 0)
cdata := CpuData()
for _, d := range cdata {
if _, ok := intArrayContains(smtList, d.SMT); !ok {
smtList = append(smtList, d.SMT)
}
if _, ok := intArrayContains(numaList, d.Numadomain); !ok {
numaList = append(numaList, d.Numadomain)
}
if _, ok := intArrayContains(dieList, d.Die); !ok {
dieList = append(dieList, d.Die)
}
if _, ok := intArrayContains(socketList, d.Socket); !ok {
socketList = append(socketList, d.Socket)
}
if _, ok := intArrayContains(coreList, d.Core); !ok {
coreList = append(coreList, d.Core)
}
}
c.NumNumaDomains = len(numaList)
c.SMTWidth = len(smtList)
c.NumDies = len(dieList)
c.NumCores = len(coreList)
c.NumSockets = len(socketList)
c.NumHWthreads = len(cdata)
return c
}
func GetCpuSocket(cpuid int) int {
cdata := CpuData()
for _, d := range cdata {
if d.Cpuid == cpuid {
return d.Socket
}
}
return -1
}
func GetCpuNumaDomain(cpuid int) int {
cdata := CpuData()
for _, d := range cdata {
if d.Cpuid == cpuid {
return d.Numadomain
}
}
return -1
}
func GetCpuDie(cpuid int) int {
cdata := CpuData()
for _, d := range cdata {
if d.Cpuid == cpuid {
return d.Die
}
}
return -1
}
func GetCpuCore(cpuid int) int {
cdata := CpuData()
for _, d := range cdata {
if d.Cpuid == cpuid {
return d.Core
}
}
return -1
}
func GetSocketCpus(socket int) []int {
all := CpuData()
cpulist := make([]int, 0)
for _, d := range all {
if d.Socket == socket {
cpulist = append(cpulist, d.Cpuid)
}
}
return cpulist
}
func GetNumaDomainCpus(domain int) []int {
all := CpuData()
cpulist := make([]int, 0)
for _, d := range all {
if d.Numadomain == domain {
cpulist = append(cpulist, d.Cpuid)
}
}
return cpulist
}
func GetDieCpus(die int) []int {
all := CpuData()
cpulist := make([]int, 0)
for _, d := range all {
if d.Die == die {
cpulist = append(cpulist, d.Cpuid)
}
}
return cpulist
}
func GetCoreCpus(core int) []int {
all := CpuData()
cpulist := make([]int, 0)
for _, d := range all {
if d.Core == core {
cpulist = append(cpulist, d.Cpuid)
}
}
return cpulist
}


@@ -9,10 +9,10 @@ import (
"sync"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
topo "github.com/ClusterCockpit/cc-metric-collector/internal/ccTopology"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
topo "github.com/ClusterCockpit/cc-metric-collector/pkg/ccTopology"
"github.com/PaesslerAG/gval"
)
@@ -31,14 +31,14 @@ type metricAggregator struct {
functions []*MetricAggregatorIntervalConfig
constants map[string]interface{}
language gval.Language
output chan lp.CCMetric
output chan lp.CCMessage
}
type MetricAggregator interface {
AddAggregation(name, function, condition string, tags, meta map[string]string) error
DeleteAggregation(name string) error
Init(output chan lp.CCMetric) error
Eval(starttime time.Time, endtime time.Time, metrics []lp.CCMetric)
Init(output chan lp.CCMessage) error
Eval(starttime time.Time, endtime time.Time, metrics []lp.CCMessage)
}
var metricCacheLanguage = gval.NewLanguage(
@@ -74,7 +74,7 @@ var evaluables = struct {
mapping: make(map[string]gval.Evaluable),
}
func (c *metricAggregator) Init(output chan lp.CCMetric) error {
func (c *metricAggregator) Init(output chan lp.CCMessage) error {
c.output = output
c.functions = make([]*MetricAggregatorIntervalConfig, 0)
c.constants = make(map[string]interface{})
@@ -112,7 +112,7 @@ func (c *metricAggregator) Init(output chan lp.CCMetric) error {
return nil
}
func (c *metricAggregator) Eval(starttime time.Time, endtime time.Time, metrics []lp.CCMetric) {
func (c *metricAggregator) Eval(starttime time.Time, endtime time.Time, metrics []lp.CCMessage) {
vars := make(map[string]interface{})
for k, v := range c.constants {
vars[k] = v
@@ -121,8 +121,13 @@ func (c *metricAggregator) Eval(starttime time.Time, endtime time.Time, metrics
vars["endtime"] = endtime
for _, f := range c.functions {
cclog.ComponentDebug("MetricCache", "COLLECT", f.Name, "COND", f.Condition)
values := make([]float64, 0)
matches := make([]lp.CCMetric, 0)
var valuesFloat64 []float64
var valuesFloat32 []float32
var valuesInt []int
var valuesInt32 []int32
var valuesInt64 []int64
var valuesBool []bool
matches := make([]lp.CCMessage, 0)
for _, m := range metrics {
vars["metric"] = m
//value, err := gval.Evaluate(f.Condition, vars, c.language)
@@ -136,17 +141,17 @@ func (c *metricAggregator) Eval(starttime time.Time, endtime time.Time, metrics
if valid {
switch x := v.(type) {
case float64:
values = append(values, x)
valuesFloat64 = append(valuesFloat64, x)
case float32:
valuesFloat32 = append(valuesFloat32, x)
case int:
valuesInt = append(valuesInt, x)
case int32:
valuesInt32 = append(valuesInt32, x)
case int64:
values = append(values, float64(x))
valuesInt64 = append(valuesInt64, x)
case bool:
if x {
values = append(values, float64(1.0))
} else {
values = append(values, float64(0.0))
}
valuesBool = append(valuesBool, x)
default:
cclog.ComponentError("MetricCache", "COLLECT ADD VALUE", v, "FAILED")
}
@@ -155,17 +160,63 @@ func (c *metricAggregator) Eval(starttime time.Time, endtime time.Time, metrics
}
}
delete(vars, "metric")
cclog.ComponentDebug("MetricCache", "EVALUATE", f.Name, "METRICS", len(values), "CALC", f.Function)
vars["values"] = values
// Check, that only values of one type were collected
countValueTypes := 0
if len(valuesFloat64) > 0 {
countValueTypes += 1
}
if len(valuesFloat32) > 0 {
countValueTypes += 1
}
if len(valuesInt) > 0 {
countValueTypes += 1
}
if len(valuesInt32) > 0 {
countValueTypes += 1
}
if len(valuesInt64) > 0 {
countValueTypes += 1
}
if len(valuesBool) > 0 {
countValueTypes += 1
}
if countValueTypes > 1 {
cclog.ComponentError("MetricCache", "Collected values of different types")
}
var len_values int
switch {
case len(valuesFloat64) > 0:
vars["values"] = valuesFloat64
len_values = len(valuesFloat64)
case len(valuesFloat32) > 0:
vars["values"] = valuesFloat32
len_values = len(valuesFloat32)
case len(valuesInt) > 0:
vars["values"] = valuesInt
len_values = len(valuesInt)
case len(valuesInt32) > 0:
vars["values"] = valuesInt32
len_values = len(valuesInt32)
case len(valuesInt64) > 0:
vars["values"] = valuesInt64
len_values = len(valuesInt64)
case len(valuesBool) > 0:
vars["values"] = valuesBool
len_values = len(valuesBool)
}
cclog.ComponentDebug("MetricCache", "EVALUATE", f.Name, "METRICS", len_values, "CALC", f.Function)
vars["metrics"] = matches
if len(values) > 0 {
if len_values > 0 {
value, err := gval.Evaluate(f.Function, vars, c.language)
if err != nil {
cclog.ComponentError("MetricCache", "EVALUATE", f.Name, "METRICS", len(values), "CALC", f.Function, ":", err.Error())
cclog.ComponentError("MetricCache", "EVALUATE", f.Name, "METRICS", len_values, "CALC", f.Function, ":", err.Error())
break
}
copy_tags := func(tags map[string]string, metrics []lp.CCMetric) map[string]string {
copy_tags := func(tags map[string]string, metrics []lp.CCMessage) map[string]string {
out := make(map[string]string)
for key, value := range tags {
switch value {
@@ -182,7 +233,7 @@ func (c *metricAggregator) Eval(starttime time.Time, endtime time.Time, metrics
}
return out
}
copy_meta := func(meta map[string]string, metrics []lp.CCMetric) map[string]string {
copy_meta := func(meta map[string]string, metrics []lp.CCMessage) map[string]string {
out := make(map[string]string)
for key, value := range meta {
switch value {
@@ -202,18 +253,18 @@ func (c *metricAggregator) Eval(starttime time.Time, endtime time.Time, metrics
tags := copy_tags(f.Tags, matches)
meta := copy_meta(f.Meta, matches)
var m lp.CCMetric
var m lp.CCMessage
switch t := value.(type) {
case float64:
m, err = lp.New(f.Name, tags, meta, map[string]interface{}{"value": t}, starttime)
m, err = lp.NewMessage(f.Name, tags, meta, map[string]interface{}{"value": t}, starttime)
case float32:
m, err = lp.New(f.Name, tags, meta, map[string]interface{}{"value": t}, starttime)
m, err = lp.NewMessage(f.Name, tags, meta, map[string]interface{}{"value": t}, starttime)
case int:
m, err = lp.New(f.Name, tags, meta, map[string]interface{}{"value": t}, starttime)
m, err = lp.NewMessage(f.Name, tags, meta, map[string]interface{}{"value": t}, starttime)
case int64:
m, err = lp.New(f.Name, tags, meta, map[string]interface{}{"value": t}, starttime)
m, err = lp.NewMessage(f.Name, tags, meta, map[string]interface{}{"value": t}, starttime)
case string:
m, err = lp.New(f.Name, tags, meta, map[string]interface{}{"value": t}, starttime)
m, err = lp.NewMessage(f.Name, tags, meta, map[string]interface{}{"value": t}, starttime)
default:
cclog.ComponentError("MetricCache", "Gval returned invalid type", t, "skipping metric", f.Name)
}
@@ -316,7 +367,7 @@ func EvalBoolCondition(condition string, params map[string]interface{}) (bool, e
return value, err
}
func EvalFloat64Condition(condition string, params map[string]interface{}) (float64, error) {
func EvalFloat64Condition(condition string, params map[string]float64) (float64, error) {
evaluables.mutex.Lock()
evaluable, ok := evaluables.mapping[condition]
evaluables.mutex.Unlock()
@@ -338,7 +389,7 @@ func EvalFloat64Condition(condition string, params map[string]interface{}) (floa
return value, err
}
func NewAggregator(output chan lp.CCMetric) (MetricAggregator, error) {
func NewAggregator(output chan lp.CCMessage) (MetricAggregator, error) {
a := new(metricAggregator)
err := a.Init(output)
if err != nil {

View File

@@ -3,162 +3,167 @@ package metricAggregator
import (
"errors"
"fmt"
"math"
"regexp"
"sort"
"strings"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
topo "github.com/ClusterCockpit/cc-metric-collector/internal/ccTopology"
"golang.org/x/exp/slices"
topo "github.com/ClusterCockpit/cc-metric-collector/pkg/ccTopology"
)
/*
* Arithmetic functions on value arrays
*/
// Sum up values
func sumfunc(args ...interface{}) (interface{}, error) {
s := 0.0
values, ok := args[0].([]float64)
if ok {
cclog.ComponentDebug("MetricCache", "SUM FUNC START")
for _, x := range values {
s += x
}
cclog.ComponentDebug("MetricCache", "SUM FUNC END", s)
} else {
cclog.ComponentDebug("MetricCache", "SUM FUNC CAST FAILED")
func sumAnyType[T float64 | float32 | int | int32 | int64](values []T) (T, error) {
if len(values) == 0 {
return 0.0, errors.New("sum function requires at least one argument")
}
return s, nil
var sum T
for _, value := range values {
sum += value
}
return sum, nil
}
// Get the minimum value
func minfunc(args ...interface{}) (interface{}, error) {
var err error = nil
switch values := args[0].(type) {
// Sum up values
func sumfunc(args interface{}) (interface{}, error) {
var err error
switch values := args.(type) {
case []float64:
var s float64 = math.MaxFloat64
for _, x := range values {
if x < s {
s = x
}
}
return s, nil
return sumAnyType(values)
case []float32:
var s float32 = math.MaxFloat32
for _, x := range values {
if x < s {
s = x
}
}
return s, nil
return sumAnyType(values)
case []int:
var s int = int(math.MaxInt32)
for _, x := range values {
if x < s {
s = x
}
}
return s, nil
return sumAnyType(values)
case []int64:
var s int64 = math.MaxInt64
for _, x := range values {
if x < s {
s = x
}
}
return s, nil
return sumAnyType(values)
case []int32:
var s int32 = math.MaxInt32
for _, x := range values {
if x < s {
s = x
}
}
return s, nil
return sumAnyType(values)
default:
err = errors.New("function 'min' only on list of values (float64, float32, int, int32, int64)")
err = errors.New("function 'sum' only on list of values (float64, float32, int, int32, int64)")
}
return 0.0, err
}
// Get the average or mean value
func avgfunc(args ...interface{}) (interface{}, error) {
switch values := args[0].(type) {
case []float64:
var s float64 = 0
for _, x := range values {
s += x
}
return s / float64(len(values)), nil
case []float32:
var s float32 = 0
for _, x := range values {
s += x
}
return s / float32(len(values)), nil
case []int:
var s int = 0
for _, x := range values {
s += x
}
return s / len(values), nil
case []int64:
var s int64 = 0
for _, x := range values {
s += x
}
return s / int64(len(values)), nil
func minAnyType[T float64 | float32 | int | int32 | int64](values []T) (T, error) {
if len(values) == 0 {
return 0.0, errors.New("min function requires at least one argument")
}
return 0.0, nil
return slices.Min(values), nil
}
// Get the minimum value
func minfunc(args interface{}) (interface{}, error) {
switch values := args.(type) {
case []float64:
return minAnyType(values)
case []float32:
return minAnyType(values)
case []int:
return minAnyType(values)
case []int64:
return minAnyType(values)
case []int32:
return minAnyType(values)
default:
return 0.0, errors.New("function 'min' only on list of values (float64, float32, int, int32, int64)")
}
}
func avgAnyType[T float64 | float32 | int | int32 | int64](values []T) (float64, error) {
if len(values) == 0 {
return 0.0, errors.New("average function requires at least one argument")
}
sum, err := sumAnyType[T](values)
return float64(sum) / float64(len(values)), err
}
// Get the average or mean value
func avgfunc(args interface{}) (interface{}, error) {
switch values := args.(type) {
case []float64:
return avgAnyType(values)
case []float32:
return avgAnyType(values)
case []int:
return avgAnyType(values)
case []int64:
return avgAnyType(values)
case []int32:
return avgAnyType(values)
default:
return 0.0, errors.New("function 'average' only on list of values (float64, float32, int, int32, int64)")
}
}
func maxAnyType[T float64 | float32 | int | int32 | int64](values []T) (T, error) {
if len(values) == 0 {
return 0.0, errors.New("max function requires at least one argument")
}
return slices.Max(values), nil
}
// Get the maximum value
func maxfunc(args ...interface{}) (interface{}, error) {
s := 0.0
values, ok := args[0].([]float64)
if ok {
for _, x := range values {
if x > s {
s = x
}
}
func maxfunc(args interface{}) (interface{}, error) {
switch values := args.(type) {
case []float64:
return maxAnyType(values)
case []float32:
return maxAnyType(values)
case []int:
return maxAnyType(values)
case []int64:
return maxAnyType(values)
case []int32:
return maxAnyType(values)
default:
return 0.0, errors.New("function 'max' only on list of values (float64, float32, int, int32, int64)")
}
return s, nil
}
func medianAnyType[T float64 | float32 | int | int32 | int64](values []T) (T, error) {
if len(values) == 0 {
return 0.0, errors.New("median function requires at least one argument")
}
slices.Sort(values)
var median T
if midPoint := len(values) / 2; len(values)%2 == 0 {
median = (values[midPoint-1] + values[midPoint]) / 2
} else {
median = values[midPoint]
}
return median, nil
}
// Get the median value
func medianfunc(args ...interface{}) (interface{}, error) {
switch values := args[0].(type) {
func medianfunc(args interface{}) (interface{}, error) {
switch values := args.(type) {
case []float64:
sort.Float64s(values)
return values[len(values)/2], nil
// case []float32:
// sort.Float64s(values)
// return values[len(values)/2], nil
return medianAnyType(values)
case []float32:
return medianAnyType(values)
case []int:
sort.Ints(values)
return values[len(values)/2], nil
// case []int64:
// sort.Ints(values)
// return values[len(values)/2], nil
// case []int32:
// sort.Ints(values)
// return values[len(values)/2], nil
return medianAnyType(values)
case []int64:
return medianAnyType(values)
case []int32:
return medianAnyType(values)
default:
return 0.0, errors.New("function 'median' only on list of values (float64, float32, int, int32, int64)")
}
return 0.0, errors.New("function 'median()' only on lists of type float64 and int")
}
/*
* Get the number of values in a list. Always returns an int
*/
func lenfunc(args ...interface{}) (interface{}, error) {
func lenfunc(args interface{}) (interface{}, error) {
var err error = nil
var length int = 0
switch values := args[0].(type) {
switch values := args.(type) {
case []float64:
length = len(values)
case []float32:
@@ -243,49 +248,49 @@ func matchfunc(args ...interface{}) (interface{}, error) {
*/
// for a given cpuid, it returns the core id
func getCpuCoreFunc(args ...interface{}) (interface{}, error) {
switch cpuid := args[0].(type) {
func getCpuCoreFunc(args interface{}) (interface{}, error) {
switch cpuid := args.(type) {
case int:
return topo.GetCpuCore(cpuid), nil
return topo.GetHwthreadCore(cpuid), nil
}
return -1, errors.New("function 'getCpuCore' accepts only an 'int' cpuid")
}
// for a given cpuid, it returns the socket id
func getCpuSocketFunc(args ...interface{}) (interface{}, error) {
switch cpuid := args[0].(type) {
func getCpuSocketFunc(args interface{}) (interface{}, error) {
switch cpuid := args.(type) {
case int:
return topo.GetCpuSocket(cpuid), nil
return topo.GetHwthreadSocket(cpuid), nil
}
return -1, errors.New("function 'getCpuCore' accepts only an 'int' cpuid")
}
// for a given cpuid, it returns the id of the NUMA node
func getCpuNumaDomainFunc(args ...interface{}) (interface{}, error) {
switch cpuid := args[0].(type) {
func getCpuNumaDomainFunc(args interface{}) (interface{}, error) {
switch cpuid := args.(type) {
case int:
return topo.GetCpuNumaDomain(cpuid), nil
return topo.GetHwthreadNumaDomain(cpuid), nil
}
return -1, errors.New("function 'getCpuNuma' accepts only an 'int' cpuid")
}
// for a given cpuid, it returns the id of the CPU die
func getCpuDieFunc(args ...interface{}) (interface{}, error) {
switch cpuid := args[0].(type) {
func getCpuDieFunc(args interface{}) (interface{}, error) {
switch cpuid := args.(type) {
case int:
return topo.GetCpuDie(cpuid), nil
return topo.GetHwthreadDie(cpuid), nil
}
return -1, errors.New("function 'getCpuDie' accepts only an 'int' cpuid")
}
// for a given core id, it returns the list of cpuids
func getCpuListOfCoreFunc(args ...interface{}) (interface{}, error) {
func getCpuListOfCoreFunc(args interface{}) (interface{}, error) {
cpulist := make([]int, 0)
switch in := args[0].(type) {
switch in := args.(type) {
case int:
for _, c := range topo.CpuData() {
if c.Core == in {
cpulist = append(cpulist, c.Cpuid)
cpulist = append(cpulist, c.CpuID)
}
}
}
@@ -293,13 +298,13 @@ func getCpuListOfCoreFunc(args ...interface{}) (interface{}, error) {
}
// for a given socket id, it returns the list of cpuids
func getCpuListOfSocketFunc(args ...interface{}) (interface{}, error) {
func getCpuListOfSocketFunc(args interface{}) (interface{}, error) {
cpulist := make([]int, 0)
switch in := args[0].(type) {
switch in := args.(type) {
case int:
for _, c := range topo.CpuData() {
if c.Socket == in {
cpulist = append(cpulist, c.Cpuid)
cpulist = append(cpulist, c.CpuID)
}
}
}
@@ -307,13 +312,13 @@ func getCpuListOfSocketFunc(args ...interface{}) (interface{}, error) {
}
// for a given id of a NUMA domain, it returns the list of cpuids
func getCpuListOfNumaDomainFunc(args ...interface{}) (interface{}, error) {
func getCpuListOfNumaDomainFunc(args interface{}) (interface{}, error) {
cpulist := make([]int, 0)
switch in := args[0].(type) {
switch in := args.(type) {
case int:
for _, c := range topo.CpuData() {
if c.Numadomain == in {
cpulist = append(cpulist, c.Cpuid)
if c.NumaDomain == in {
cpulist = append(cpulist, c.CpuID)
}
}
}
@@ -321,13 +326,13 @@ func getCpuListOfNumaDomainFunc(args ...interface{}) (interface{}, error) {
}
// for a given CPU die id, it returns the list of cpuids
func getCpuListOfDieFunc(args ...interface{}) (interface{}, error) {
func getCpuListOfDieFunc(args interface{}) (interface{}, error) {
cpulist := make([]int, 0)
switch in := args[0].(type) {
switch in := args.(type) {
case int:
for _, c := range topo.CpuData() {
if c.Die == in {
cpulist = append(cpulist, c.Cpuid)
cpulist = append(cpulist, c.CpuID)
}
}
}
@@ -335,8 +340,8 @@ func getCpuListOfDieFunc(args ...interface{}) (interface{}, error) {
}
// wrapper function to get a list of all cpuids of the node
func getCpuListOfNode(args ...interface{}) (interface{}, error) {
return topo.CpuList(), nil
func getCpuListOfNode() (interface{}, error) {
return topo.HwthreadList(), nil
}
// helper function to get the cpuid list for a CCMetric type tag set (type and type-id)
@@ -348,14 +353,14 @@ func getCpuListOfType(args ...interface{}) (interface{}, error) {
case string:
switch typ {
case "node":
return topo.CpuList(), nil
return topo.HwthreadList(), nil
case "socket":
return getCpuListOfSocketFunc(args[1])
case "numadomain":
return getCpuListOfNumaDomainFunc(args[1])
case "core":
return getCpuListOfCoreFunc(args[1])
case "cpu":
case "hwthread":
var cpu int
switch id := args[1].(type) {

View File

@@ -1,13 +1,21 @@
# CC Metric Router
The CCMetric router sits in between the collectors and the sinks and can be used to add and remove tags to/from traversing [CCMetrics](../ccMetric/README.md).
The CCMetric router sits in between the collectors and the sinks and can be used to add and remove tags to/from traversing [CCMessages](https://pkg.go.dev/github.com/ClusterCockpit/cc-energy-manager@v0.0.0-20240919152819-92a17f2da4f7/pkg/cc-message).
# Configuration
**Note**: Use the [message processor configuration](../../pkg/messageProcessor/README.md) with option `process_messages`.
```json
{
"num_cache_intervals" : 1,
"interval_timestamp" : true,
"hostname_tag" : "hostname",
"max_forward" : 50,
"process_messages": {
"see": "pkg/messageProcessor/README.md"
},
"add_tags" : [
{
"key" : "cluster",
@@ -50,11 +58,32 @@ The CCMetric router sits in between the collectors and the sinks and can be used
],
"rename_metrics" : {
"metric_12345" : "mymetric"
},
"normalize_units" : true,
"change_unit_prefix" : {
"mem_used" : "G",
"mem_total" : "G"
}
}
```
There are three main options `add_tags`, `delete_tags` and `interval_timestamp`. `add_tags` and `delete_tags` are lists consisting of dicts with `key`, `value` and `if`. The `value` can be omitted in the `delete_tags` part as it only uses the `key` for removal. The `interval_timestamp` setting means that a single, common timestamp is applied to all metrics traversing the router during an interval.
**Note**: Use the [message processor configuration](../../pkg/messageProcessor/README.md) (option `process_messages`) instead of `add_tags`, `delete_tags`, `drop_metrics`, `drop_metrics_if`, `rename_metrics`, `normalize_units` and `change_unit_prefix`. These options are deprecated and will be removed in future versions. Until then, they are added to the message processor.
# Processing order in the router
- Add the `hostname_tag` tag (for metrics coming from collectors or the cache)
- If `interval_timestamp == true`, change time of metrics
- Check if metric should be dropped (`drop_metrics` and `drop_metrics_if`)
- Add tags from `add_tags`
- Delete tags from `del_tags`
- Rename metric based on `rename_metrics` and store old name as `oldname` in meta information
- Add tags from `add_tags` (if you used the new name in the `if` condition)
- Delete tags from `del_tags` (if you used the new name in the `if` condition)
- Send to sinks
- Move to cache (if `num_cache_intervals > 0`)
# The `interval_timestamp` option
The collectors' `Read()` functions are not called simultaneously and therefore the metrics gathered in an interval can have different timestamps. If you want to avoid that and have a common timestamp (the beginning of the interval), set this option to `true` and the MetricRouter sets the time.
@@ -65,8 +94,18 @@ If the MetricRouter should buffer metrics of intervals in a MetricCache, this op
A `num_cache_intervals > 0` is required to use the `interval_aggregates` option.
# The `hostname_tag` option
By default, the router tags metrics with the hostname for all locally created metrics. The default tag name is `hostname`, but it can be changed if your organization wants something else.
# The `max_forward` option
Every time the router receives a metric through any of the channels, it tries to directly read up to `max_forward` metrics from the same channel. This was done because otherwise the router thread would go to sleep and wake up with every arriving metric. The default is `50` metrics at once, and `max_forward` needs to be greater than `1`.
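A minimal, self-contained sketch of this drain pattern (names and types here are illustrative, not the router's actual code):

```go
package main

import "fmt"

// drain forwards one value from ch and then opportunistically forwards up to
// maxForward-1 further values that are already buffered, so the caller's
// goroutine does not have to wake up once per arriving value.
func drain(ch chan int, maxForward int, forward func(int)) {
	forward(<-ch)
	for i := 0; len(ch) > 0 && i < maxForward-1; i++ {
		forward(<-ch)
	}
}

func main() {
	ch := make(chan int, 8)
	for i := 0; i < 5; i++ {
		ch <- i
	}
	drain(ch, 50, func(v int) { fmt.Println("forwarded", v) })
}
```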
# The `rename_metrics` option
__deprecated__
In the ClusterCockpit world we specified a set of standard metrics. Since some collectors determine the metric names based on files, executables and libraries, they might change from system to system (or installation to installation, OS to OS, ...). In order to get the common names, you can rename incoming metrics before sending them to the sink. If the metric name matches the `oldname`, it is changed to `newname`.
```json
@@ -78,6 +117,8 @@ In the ClusterCockpit world we specified a set of standard metrics. Since some c
# Conditional manipulation of tags (`add_tags` and `del_tags`)
__deprecated__
Common config format:
```json
{
@@ -89,6 +130,8 @@ Common config format:
## The `del_tags` option
__deprecated__
The collectors are free to add any `key=value` pair to the metric tags (although the usage of tags should be minimized). If you want to delete a tag afterwards, you can do that. When the `if` condition matches on a metric, the `key` is removed from the metric's tags.
If you want to remove a tag for all metrics, use the condition wildcard `*`. The `value` field can be omitted in the `del_tags` case.
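For illustration, a minimal sketch of a rule that removes a hypothetical tag `rack` from every metric (the JSON option name is `delete_tags`, and `value` is omitted as described above):

```json
{
  "delete_tags" : [
    {
      "key" : "rack",
      "if" : "*"
    }
  ]
}
```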
@@ -100,6 +143,8 @@ Never delete tags:
## The `add_tags` option
__deprecated__
In some cases, metrics should be tagged or an existing tag changed based on some condition. This can be done in the `add_tags` section. When the `if` condition evaluates to `true`, the tag `key` is added or gets changed to the new `value`.
If the CCMetric name is equal to `temp_package_id_0`, the rule sketched below adds an additional tag `test=testing` to the metric.
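A hedged sketch of that rule (the condition syntax follows the `if` expressions used for the other router options):

```json
{
  "add_tags" : [
    {
      "key" : "test",
      "value" : "testing",
      "if" : "name == 'temp_package_id_0'"
    }
  ]
}
```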
@@ -141,6 +186,8 @@ In some cases, you want to drop a metric and don't get it forwarded to the sinks
## The `drop_metrics` section
__deprecated__
The argument is a list of metric names. No further checks are performed, only a comparison of the metric name.
```json
@@ -156,6 +203,8 @@ The example drops all metrics with the name `drop_metric_1` and `drop_metric_2`.
## The `drop_metrics_if` section
__deprecated__
This option takes a list of evaluable conditions and performs them one after the other on **all** metrics incoming from the collectors and the metric cache (aka `interval_aggregates`).
```json
@@ -168,10 +217,25 @@ This option takes a list of evaluable conditions and performs them one after the
```
The first line is comparable to the example in `drop_metrics`: it drops all metrics starting with `drop_metric_` and ending with a number. The second line drops all metrics of the first hardware thread (**not** recommended).
# Manipulating the metric units
## The `normalize_units` option
__deprecated__
The cc-metric-collector tries to read the data from the system as it is reported. If available, it tries to read the metric unit from the system as well (e.g. from `/proc/meminfo`). The problem is that, depending on the source, the metric units are named differently. Just think about `byte`, `Byte`, `B`, `bytes`, ...
The [cc-units](https://github.com/ClusterCockpit/cc-units) package provides a normalization option so the same metric unit name is used for all metrics. If this option is set to true, all `unit` meta tags are normalized.
## The `change_unit_prefix` section
__deprecated__
It is often the case that metrics are reported by the system using a rather outdated unit prefix (`/proc/meminfo`, for example, still uses kByte although current memory sizes are in the GByte range). If you want to change the prefix of a unit, you can do that with the help of [cc-units](https://github.com/ClusterCockpit/cc-units). The setting works on the metric name and requires the new prefix for the metric. The cc-units package determines the scaling factor.
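For reference, the two deprecated unit options as they appear in the router configuration, mirroring the example at the top of this README:

```json
{
  "normalize_units" : true,
  "change_unit_prefix" : {
    "mem_used" : "G",
    "mem_total" : "G"
  }
}
```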
# Aggregate metric values of the current interval with the `interval_aggregates` option
**Note:** `interval_aggregates` works only if `num_cache_intervals` > 0
**Note:** `interval_aggregates` works only if `num_cache_intervals` > 0 and is **experimental**
In some cases, you need to derive new metrics based on the metrics arriving during an interval. This can be done in the `interval_aggregates` section. The logic is similar to the other metric manipulation and filtering options. A cache stores all metrics that arrive during an interval. At the beginning of the *next* interval, the list of metrics is submitted to the MetricAggregator. It derives new metrics and submits them back to the MetricRouter, so they are sent in the next interval but carry the timestamp of the beginning of the previous interval.
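A rough, illustrative sketch of an entry (field names mirror the aggregator config fields name/if/function/tags/meta used by the router code; the exact keys and condition syntax should be checked against the full examples further down):

```json
{
  "interval_aggregates" : [
    {
      "name" : "temp_package_avg",
      "if" : "name == 'temp_package_id_0'",
      "function" : "avg(values)",
      "tags" : {
        "type" : "node"
      },
      "meta" : {
        "source" : "MetricAggregator"
      }
    }
  ]
}
```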
@@ -215,3 +279,22 @@ Use cases for `interval_aggregates`:
}
}
```
# Order of operations
The router performs the above-mentioned options in a specific order. To get the logic you want for a specific metric, it is crucial to know the processing order:
- Add the `hostname` tag (c)
- Manipulate the timestamp to the interval timestamp (c,r)
- Drop metrics based on `drop_metrics` and `drop_metrics_if` (c,r)
- Add tags based on `add_tags` (c,r)
- Delete tags based on `del_tags` (c,r)
- Rename metric based on `rename_metrics` (c,r)
- Add tags based on `add_tags` again so that conditions using the new name still match (c,r)
- Delete tags based on `del_tags` again so that conditions using the new name still match (c,r)
- Normalize units when `normalize_units` is set (c,r)
- Convert unit prefix based on `change_unit_prefix` (c,r)
Legend:
- 'c' if metric is coming from a collector
- 'r' if metric is coming from a receiver

View File

@@ -4,11 +4,11 @@ import (
"sync"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
agg "github.com/ClusterCockpit/cc-metric-collector/internal/metricAggregator"
mct "github.com/ClusterCockpit/cc-metric-collector/internal/multiChanTicker"
mct "github.com/ClusterCockpit/cc-metric-collector/pkg/multiChanTicker"
)
type metricCachePeriod struct {
@@ -16,7 +16,7 @@ type metricCachePeriod struct {
stopstamp time.Time
numMetrics int
sizeMetrics int
metrics []lp.CCMetric
metrics []lp.CCMessage
}
// Metric cache data structure
@@ -29,21 +29,21 @@ type metricCache struct {
ticker mct.MultiChanTicker
tickchan chan time.Time
done chan bool
output chan lp.CCMetric
output chan lp.CCMessage
aggEngine agg.MetricAggregator
}
type MetricCache interface {
Init(output chan lp.CCMetric, ticker mct.MultiChanTicker, wg *sync.WaitGroup, numPeriods int) error
Init(output chan lp.CCMessage, ticker mct.MultiChanTicker, wg *sync.WaitGroup, numPeriods int) error
Start()
Add(metric lp.CCMetric)
GetPeriod(index int) (time.Time, time.Time, []lp.CCMetric)
Add(metric lp.CCMessage)
GetPeriod(index int) (time.Time, time.Time, []lp.CCMessage)
AddAggregation(name, function, condition string, tags, meta map[string]string) error
DeleteAggregation(name string) error
Close()
}
func (c *metricCache) Init(output chan lp.CCMetric, ticker mct.MultiChanTicker, wg *sync.WaitGroup, numPeriods int) error {
func (c *metricCache) Init(output chan lp.CCMessage, ticker mct.MultiChanTicker, wg *sync.WaitGroup, numPeriods int) error {
var err error = nil
c.done = make(chan bool)
c.wg = wg
@@ -55,7 +55,7 @@ func (c *metricCache) Init(output chan lp.CCMetric, ticker mct.MultiChanTicker,
p := new(metricCachePeriod)
p.numMetrics = 0
p.sizeMetrics = 0
p.metrics = make([]lp.CCMetric, 0)
p.metrics = make([]lp.CCMessage, 0)
c.intervals = append(c.intervals, p)
}
@@ -124,7 +124,7 @@ func (c *metricCache) Start() {
// Add a metric to the cache. The interval is defined by the global timer (rotate() in Start())
// The intervals list is used as round-robin buffer and the metric list grows dynamically and
// to avoid reallocations
func (c *metricCache) Add(metric lp.CCMetric) {
func (c *metricCache) Add(metric lp.CCMessage) {
if c.curPeriod >= 0 && c.curPeriod < c.numPeriods {
c.lock.Lock()
p := c.intervals[c.curPeriod]
@@ -153,10 +153,10 @@ func (c *metricCache) DeleteAggregation(name string) error {
// Get all metrics of an interval. The index is the difference to the current interval, so index=0
// is the current one, index=1 the last interval and so on. Returns an empty array if a wrong index
// is given (negative index, index larger than configured number of total intervals, ...)
func (c *metricCache) GetPeriod(index int) (time.Time, time.Time, []lp.CCMetric) {
func (c *metricCache) GetPeriod(index int) (time.Time, time.Time, []lp.CCMessage) {
var start time.Time = time.Now()
var stop time.Time = time.Now()
var metrics []lp.CCMetric
var metrics []lp.CCMessage
if index >= 0 && index < c.numPeriods {
pindex := c.curPeriod - index
if pindex < 0 {
@@ -168,10 +168,10 @@ func (c *metricCache) GetPeriod(index int) (time.Time, time.Time, []lp.CCMetric)
metrics = c.intervals[pindex].metrics
//return c.intervals[pindex].startstamp, c.intervals[pindex].stopstamp, c.intervals[pindex].metrics
} else {
metrics = make([]lp.CCMetric, 0)
metrics = make([]lp.CCMessage, 0)
}
} else {
metrics = make([]lp.CCMetric, 0)
metrics = make([]lp.CCMessage, 0)
}
return start, stop, metrics
}
@@ -182,7 +182,7 @@ func (c *metricCache) Close() {
c.done <- true
}
func NewCache(output chan lp.CCMetric, ticker mct.MultiChanTicker, wg *sync.WaitGroup, numPeriods int) (MetricCache, error) {
func NewCache(output chan lp.CCMessage, ticker mct.MultiChanTicker, wg *sync.WaitGroup, numPeriods int) (MetricCache, error) {
c := new(metricCache)
err := c.Init(output, ticker, wg, numPeriods)
if err != nil {

View File

@@ -2,16 +2,18 @@ package metricRouter
import (
"encoding/json"
"fmt"
"os"
"strings"
"sync"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
lp "github.com/ClusterCockpit/cc-lib/ccMessage"
mp "github.com/ClusterCockpit/cc-lib/messageProcessor"
agg "github.com/ClusterCockpit/cc-metric-collector/internal/metricAggregator"
mct "github.com/ClusterCockpit/cc-metric-collector/internal/multiChanTicker"
mct "github.com/ClusterCockpit/cc-metric-collector/pkg/multiChanTicker"
)
const ROUTER_MAX_FORWARD = 50
@@ -25,6 +27,7 @@ type metricRouterTagConfig struct {
// Metric router configuration
type metricRouterConfig struct {
HostnameTagName string `json:"hostname_tag"` // Key name used when adding the hostname to a metric (default 'hostname')
AddTags []metricRouterTagConfig `json:"add_tags"` // List of tags that are added when the condition is met
DelTags []metricRouterTagConfig `json:"delete_tags"` // List of tags that are removed when the condition is met
IntervalAgg []agg.MetricAggregatorIntervalConfig `json:"interval_aggregates"` // List of aggregation function processed at the end of an interval
@@ -33,33 +36,37 @@ type metricRouterConfig struct {
RenameMetrics map[string]string `json:"rename_metrics"` // Map to rename metric name from key to value
IntervalStamp bool `json:"interval_timestamp"` // Update timestamp periodically by ticker each interval?
NumCacheIntervals int `json:"num_cache_intervals"` // Number of intervals of cached metrics for evaluation
dropMetrics map[string]bool // Internal map for O(1) lookup
MaxForward int `json:"max_forward"` // Number of maximal forwarded metrics at one select
NormalizeUnits bool `json:"normalize_units"` // Check unit meta flag and normalize it using cc-units
ChangeUnitPrefix map[string]string `json:"change_unit_prefix"` // Add prefix that should be applied to the metrics
// dropMetrics map[string]bool // Internal map for O(1) lookup
MessageProcessor json.RawMessage `json:"process_messages,omitempty"`
}
// Metric router data structure
type metricRouter struct {
hostname string // Hostname used in tags
coll_input chan lp.CCMetric // Input channel from CollectorManager
recv_input chan lp.CCMetric // Input channel from ReceiveManager
cache_input chan lp.CCMetric // Input channel from MetricCache
outputs []chan lp.CCMetric // List of all output channels
coll_input chan lp.CCMessage // Input channel from CollectorManager
recv_input chan lp.CCMessage // Input channel from ReceiveManager
cache_input chan lp.CCMessage // Input channel from MetricCache
outputs []chan lp.CCMessage // List of all output channels
done chan bool // channel to finish / stop metric router
wg *sync.WaitGroup // wait group for all goroutines in cc-metric-collector
timestamp time.Time // timestamp periodically updated by ticker each interval
timerdone chan bool // channel to finish / stop timestamp updater
ticker mct.MultiChanTicker // periodically ticking once each interval
config metricRouterConfig // json encoded config for metric router
cache MetricCache // pointer to MetricCache
cachewg sync.WaitGroup // wait group for MetricCache
maxForward int // number of metrics to forward maximally in one iteration
mp mp.MessageProcessor
}
// MetricRouter access functions
type MetricRouter interface {
Init(ticker mct.MultiChanTicker, wg *sync.WaitGroup, routerConfigFile string) error
AddCollectorInput(input chan lp.CCMetric)
AddReceiverInput(input chan lp.CCMetric)
AddOutput(output chan lp.CCMetric)
Init(ticker mct.MultiChanTicker, wg *sync.WaitGroup, routerConfig json.RawMessage) error
AddCollectorInput(input chan lp.CCMessage)
AddReceiverInput(input chan lp.CCMessage)
AddOutput(output chan lp.CCMessage)
Start()
Close()
}
@@ -70,13 +77,14 @@ type MetricRouter interface {
// * wait group synchronization (from variable wg)
// * ticker (from variable ticker)
// * configuration (read from config file in variable routerConfigFile)
func (r *metricRouter) Init(ticker mct.MultiChanTicker, wg *sync.WaitGroup, routerConfigFile string) error {
r.outputs = make([]chan lp.CCMetric, 0)
func (r *metricRouter) Init(ticker mct.MultiChanTicker, wg *sync.WaitGroup, routerConfig json.RawMessage) error {
r.outputs = make([]chan lp.CCMessage, 0)
r.done = make(chan bool)
r.cache_input = make(chan lp.CCMetric)
r.cache_input = make(chan lp.CCMessage)
r.wg = wg
r.ticker = ticker
r.maxForward = ROUTER_MAX_FORWARD
r.config.MaxForward = ROUTER_MAX_FORWARD
r.config.HostnameTagName = "hostname"
// Set hostname
hostname, err := os.Hostname()
@@ -87,18 +95,14 @@ func (r *metricRouter) Init(ticker mct.MultiChanTicker, wg *sync.WaitGroup, rout
// Drop domain part of host name
r.hostname = strings.SplitN(hostname, `.`, 2)[0]
// Read metric router config file
configFile, err := os.Open(routerConfigFile)
err = json.Unmarshal(routerConfig, &r.config)
if err != nil {
cclog.ComponentError("MetricRouter", err.Error())
return err
}
defer configFile.Close()
jsonParser := json.NewDecoder(configFile)
err = jsonParser.Decode(&r.config)
if err != nil {
cclog.ComponentError("MetricRouter", err.Error())
return err
r.maxForward = 1
if r.config.MaxForward > r.maxForward {
r.maxForward = r.config.MaxForward
}
if r.config.NumCacheIntervals > 0 {
r.cache, err = NewCache(r.cache_input, r.ticker, &r.cachewg, r.config.NumCacheIntervals)
@@ -110,37 +114,56 @@ func (r *metricRouter) Init(ticker mct.MultiChanTicker, wg *sync.WaitGroup, rout
r.cache.AddAggregation(agg.Name, agg.Function, agg.Condition, agg.Tags, agg.Meta)
}
}
r.config.dropMetrics = make(map[string]bool)
for _, mname := range r.config.DropMetrics {
r.config.dropMetrics[mname] = true
p, err := mp.NewMessageProcessor()
if err != nil {
return fmt.Errorf("initialization of message processor failed: %v", err.Error())
}
r.mp = p
if len(r.config.MessageProcessor) > 0 {
err = r.mp.FromConfigJSON(r.config.MessageProcessor)
if err != nil {
return fmt.Errorf("failed parsing JSON for message processor: %v", err.Error())
}
}
for _, mname := range r.config.DropMetrics {
r.mp.AddDropMessagesByName(mname)
}
for _, cond := range r.config.DropMetricsIf {
r.mp.AddDropMessagesByCondition(cond)
}
for _, data := range r.config.AddTags {
cond := data.Condition
if cond == "*" {
cond = "true"
}
r.mp.AddAddTagsByCondition(cond, data.Key, data.Value)
}
for _, data := range r.config.DelTags {
cond := data.Condition
if cond == "*" {
cond = "true"
}
r.mp.AddDeleteTagsByCondition(cond, data.Key, data.Value)
}
for oldname, newname := range r.config.RenameMetrics {
r.mp.AddRenameMetricByName(oldname, newname)
}
for metricName, prefix := range r.config.ChangeUnitPrefix {
r.mp.AddChangeUnitPrefix(fmt.Sprintf("name == '%s'", metricName), prefix)
}
r.mp.SetNormalizeUnits(r.config.NormalizeUnits)
r.mp.AddAddTagsByCondition("true", r.config.HostnameTagName, r.hostname)
// r.config.dropMetrics = make(map[string]bool)
// for _, mname := range r.config.DropMetrics {
// r.config.dropMetrics[mname] = true
// }
return nil
}
// StartTimer starts a timer which updates timestamp periodically
func (r *metricRouter) StartTimer() {
m := make(chan time.Time)
r.ticker.AddChannel(m)
r.timerdone = make(chan bool)
r.wg.Add(1)
go func() {
defer r.wg.Done()
for {
select {
case <-r.timerdone:
close(r.timerdone)
cclog.ComponentDebug("MetricRouter", "TIMER DONE")
return
case t := <-m:
r.timestamp = t
}
}
}()
cclog.ComponentDebug("MetricRouter", "TIMER START")
}
func getParamMap(point lp.CCMetric) map[string]interface{} {
func getParamMap(point lp.CCMessage) map[string]interface{} {
params := make(map[string]interface{})
params["metric"] = point
params["name"] = point.Name()
@@ -158,7 +181,7 @@ func getParamMap(point lp.CCMetric) map[string]interface{} {
}
// DoAddTags adds a tag when the condition is fulfilled
func (r *metricRouter) DoAddTags(point lp.CCMetric) {
func (r *metricRouter) DoAddTags(point lp.CCMessage) {
var conditionMatches bool
for _, m := range r.config.AddTags {
if m.Condition == "*" {
@@ -180,56 +203,89 @@ func (r *metricRouter) DoAddTags(point lp.CCMetric) {
}
// DoDelTags removes a tag when the condition is fulfilled
func (r *metricRouter) DoDelTags(point lp.CCMetric) {
var conditionMatches bool
for _, m := range r.config.DelTags {
if m.Condition == "*" {
// Condition is always matched
conditionMatches = true
} else {
// Evaluate condition
var err error
conditionMatches, err = agg.EvalBoolCondition(m.Condition, getParamMap(point))
if err != nil {
cclog.ComponentError("MetricRouter", err.Error())
conditionMatches = false
}
}
if conditionMatches {
point.RemoveTag(m.Key)
}
}
}
// func (r *metricRouter) DoDelTags(point lp.CCMessage) {
// var conditionMatches bool
// for _, m := range r.config.DelTags {
// if m.Condition == "*" {
// // Condition is always matched
// conditionMatches = true
// } else {
// // Evaluate condition
// var err error
// conditionMatches, err = agg.EvalBoolCondition(m.Condition, getParamMap(point))
// if err != nil {
// cclog.ComponentError("MetricRouter", err.Error())
// conditionMatches = false
// }
// }
// if conditionMatches {
// point.RemoveTag(m.Key)
// }
// }
// }
// Conditional test whether a metric should be dropped
func (r *metricRouter) dropMetric(point lp.CCMetric) bool {
// Simple drop check
if conditionMatches, ok := r.config.dropMetrics[point.Name()]; ok {
return conditionMatches
}
// func (r *metricRouter) dropMetric(point lp.CCMessage) bool {
// // Simple drop check
// if conditionMatches, ok := r.config.dropMetrics[point.Name()]; ok {
// return conditionMatches
// }
// Checking the dropping conditions
for _, m := range r.config.DropMetricsIf {
conditionMatches, err := agg.EvalBoolCondition(m, getParamMap(point))
if err != nil {
cclog.ComponentError("MetricRouter", err.Error())
conditionMatches = false
}
if conditionMatches {
return conditionMatches
}
}
// // Checking the dropping conditions
// for _, m := range r.config.DropMetricsIf {
// conditionMatches, err := agg.EvalBoolCondition(m, getParamMap(point))
// if err != nil {
// cclog.ComponentError("MetricRouter", err.Error())
// conditionMatches = false
// }
// if conditionMatches {
// return conditionMatches
// }
// }
// No dropping condition met
return false
}
// // No dropping condition met
// return false
// }
// func (r *metricRouter) prepareUnit(point lp.CCMessage) bool {
// if r.config.NormalizeUnits {
// if in_unit, ok := point.GetMeta("unit"); ok {
// u := units.NewUnit(in_unit)
// if u.Valid() {
// point.AddMeta("unit", u.Short())
// }
// }
// }
// if newP, ok := r.config.ChangeUnitPrefix[point.Name()]; ok {
// newPrefix := units.NewPrefix(newP)
// if in_unit, ok := point.GetMeta("unit"); ok && newPrefix != units.InvalidPrefix {
// u := units.NewUnit(in_unit)
// if u.Valid() {
// cclog.ComponentDebug("MetricRouter", "Change prefix to", newP, "for metric", point.Name())
// conv, out_unit := units.GetUnitPrefixFactor(u, newPrefix)
// if conv != nil && out_unit.Valid() {
// if val, ok := point.GetField("value"); ok {
// point.AddField("value", conv(val))
// point.AddMeta("unit", out_unit.Short())
// }
// }
// }
// }
// }
// return true
// }
// Start starts the metric router
func (r *metricRouter) Start() {
// start timer if configured
r.timestamp = time.Now()
timeChan := make(chan time.Time)
if r.config.IntervalStamp {
r.StartTimer()
r.ticker.AddChannel(timeChan)
}
// Router manager is done
@@ -240,55 +296,75 @@ func (r *metricRouter) Start() {
// Forward takes a received metric, adds or deletes tags
// and forwards it to the output channels
forward := func(point lp.CCMetric) {
cclog.ComponentDebug("MetricRouter", "FORWARD", point)
r.DoAddTags(point)
r.DoDelTags(point)
if new, ok := r.config.RenameMetrics[point.Name()]; ok {
point.SetName(new)
}
r.DoAddTags(point)
r.DoDelTags(point)
// forward := func(point lp.CCMessage) {
// cclog.ComponentDebug("MetricRouter", "FORWARD", point)
// r.DoAddTags(point)
// r.DoDelTags(point)
// name := point.Name()
// if new, ok := r.config.RenameMetrics[name]; ok {
// point.SetName(new)
// point.AddMeta("oldname", name)
// r.DoAddTags(point)
// r.DoDelTags(point)
// }
for _, o := range r.outputs {
o <- point
}
}
// r.prepareUnit(point)
// for _, o := range r.outputs {
// o <- point
// }
// }
// Forward message received from collector channel
coll_forward := func(p lp.CCMetric) {
coll_forward := func(p lp.CCMessage) {
// receive from metric collector
p.AddTag("hostname", r.hostname)
//p.AddTag(r.config.HostnameTagName, r.hostname)
if r.config.IntervalStamp {
p.SetTime(r.timestamp)
}
if !r.dropMetric(p) {
forward(p)
m, err := r.mp.ProcessMessage(p)
if err == nil && m != nil {
for _, o := range r.outputs {
o <- m
}
}
// if !r.dropMetric(p) {
// for _, o := range r.outputs {
// o <- point
// }
// }
// even if the metric is dropped, it is stored in the cache for
// aggregations
if r.config.NumCacheIntervals > 0 {
r.cache.Add(p)
r.cache.Add(m)
}
}
// Forward message received from receivers channel
recv_forward := func(p lp.CCMetric) {
recv_forward := func(p lp.CCMessage) {
// receive from receive manager
if r.config.IntervalStamp {
p.SetTime(r.timestamp)
}
if !r.dropMetric(p) {
forward(p)
m, err := r.mp.ProcessMessage(p)
if err == nil && m != nil {
for _, o := range r.outputs {
o <- m
}
}
// if !r.dropMetric(p) {
// forward(p)
// }
}
// Forward message received from cache channel
cache_forward := func(p lp.CCMetric) {
cache_forward := func(p lp.CCMessage) {
// receive from metric collector
if !r.dropMetric(p) {
p.AddTag("hostname", r.hostname)
forward(p)
m, err := r.mp.ProcessMessage(p)
if err == nil && m != nil {
for _, o := range r.outputs {
o <- m
}
}
}
@@ -307,21 +383,25 @@ func (r *metricRouter) Start() {
done()
return
case timestamp := <-timeChan:
r.timestamp = timestamp
cclog.ComponentDebug("MetricRouter", "Update timestamp", r.timestamp.UnixNano())
case p := <-r.coll_input:
coll_forward(p)
for i := 0; len(r.coll_input) > 0 && i < r.maxForward; i++ {
for i := 0; len(r.coll_input) > 0 && i < (r.maxForward-1); i++ {
coll_forward(<-r.coll_input)
}
case p := <-r.recv_input:
recv_forward(p)
for i := 0; len(r.recv_input) > 0 && i < r.maxForward; i++ {
for i := 0; len(r.recv_input) > 0 && i < (r.maxForward-1); i++ {
recv_forward(<-r.recv_input)
}
case p := <-r.cache_input:
cache_forward(p)
for i := 0; len(r.cache_input) > 0 && i < r.maxForward; i++ {
for i := 0; len(r.cache_input) > 0 && i < (r.maxForward-1); i++ {
cache_forward(<-r.cache_input)
}
}
@@ -331,17 +411,17 @@ func (r *metricRouter) Start() {
}
// AddCollectorInput adds a channel between metric collector and metric router
func (r *metricRouter) AddCollectorInput(input chan lp.CCMetric) {
func (r *metricRouter) AddCollectorInput(input chan lp.CCMessage) {
r.coll_input = input
}
// AddReceiverInput adds a channel between metric receiver and metric router
func (r *metricRouter) AddReceiverInput(input chan lp.CCMetric) {
func (r *metricRouter) AddReceiverInput(input chan lp.CCMessage) {
r.recv_input = input
}
// AddOutput adds a output channel to the metric router
func (r *metricRouter) AddOutput(output chan lp.CCMetric) {
func (r *metricRouter) AddOutput(output chan lp.CCMessage) {
r.outputs = append(r.outputs, output)
}
@@ -352,14 +432,6 @@ func (r *metricRouter) Close() {
// wait for close of channel r.done
<-r.done
// stop timer
if r.config.IntervalStamp {
cclog.ComponentDebug("MetricRouter", "TIMER CLOSE")
r.timerdone <- true
// wait for close of channel r.timerdone
<-r.timerdone
}
// stop metric cache
if r.config.NumCacheIntervals > 0 {
cclog.ComponentDebug("MetricRouter", "CACHE CLOSE")
@@ -369,9 +441,9 @@ func (r *metricRouter) Close() {
}
// New creates a new initialized metric router
func New(ticker mct.MultiChanTicker, wg *sync.WaitGroup, routerConfigFile string) (MetricRouter, error) {
func New(ticker mct.MultiChanTicker, wg *sync.WaitGroup, routerConfig json.RawMessage) (MetricRouter, error) {
r := new(metricRouter)
err := r.Init(ticker, wg, routerConfigFile)
err := r.Init(ticker, wg, routerConfig)
if err != nil {
return nil, err
}

View File

@@ -0,0 +1,463 @@
package ccTopology
import (
"fmt"
"log"
"os"
"path/filepath"
"regexp"
"strconv"
"strings"
cclogger "github.com/ClusterCockpit/cc-lib/ccLogger"
"golang.org/x/exp/slices"
)
const SYSFS_CPUBASE = `/sys/devices/system/cpu`
// Structure holding all information about a hardware thread
// See https://www.kernel.org/doc/Documentation/ABI/stable/sysfs-devices-system-cpu
type HwthreadEntry struct {
// for each CPUx:
CpuID int // CPU / hardware thread ID
SMT int // Simultaneous Multithreading ID
CoreCPUsList []int // CPUs within the same core
Core int // Socket local core ID
Socket int // Sockets (physical) ID
Die int // Die ID
NumaDomain int // NUMA Domain
}
var cache struct {
HwthreadList []int // List of CPU hardware threads
SMTList []int // List of simultaneous multithreading (SMT) IDs
CoreList []int // List of CPU core IDs
SocketList []int // List of CPU sockets (physical) IDs
DieList []int // List of CPU Die IDs
NumaDomainList []int // List of NUMA Domains
CpuData []HwthreadEntry
}
// fileToInt reads an integer value from a sysfs file
// In case of an error -1 is returned
func fileToInt(path string) int {
buffer, err := os.ReadFile(path)
if err != nil {
log.Print(err)
cclogger.ComponentError("ccTopology", "fileToInt", "Reading", path, ":", err.Error())
return -1
}
stringBuffer := strings.TrimSpace(string(buffer))
id, err := strconv.Atoi(stringBuffer)
if err != nil {
cclogger.ComponentError("ccTopology", "fileToInt", "Parsing", path, ":", stringBuffer, err.Error())
return -1
}
return id
}
// fileToList reads a list from a sysfs file
// A list consists of value ranges separated by commas
// A range is either a single value or startValue-endValue
// In case of an error nil is returned
func fileToList(path string) []int {
// Read list
buffer, err := os.ReadFile(path)
if err != nil {
log.Print(err)
cclogger.ComponentError("ccTopology", "fileToList", "Reading", path, ":", err.Error())
return nil
}
// Create list
list := make([]int, 0)
stringBuffer := strings.TrimSpace(string(buffer))
for _, valueRangeString := range strings.Split(stringBuffer, ",") {
valueRange := strings.Split(valueRangeString, "-")
switch len(valueRange) {
case 1:
singleValue, err := strconv.Atoi(valueRange[0])
if err != nil {
cclogger.ComponentError("CCTopology", "fileToList", "Parsing", valueRange[0], ":", err.Error())
return nil
}
list = append(list, singleValue)
case 2:
startValue, err := strconv.Atoi(valueRange[0])
if err != nil {
cclogger.ComponentError("CCTopology", "fileToList", "Parsing", valueRange[0], ":", err.Error())
return nil
}
endValue, err := strconv.Atoi(valueRange[1])
if err != nil {
cclogger.ComponentError("CCTopology", "fileToList", "Parsing", valueRange[1], ":", err.Error())
return nil
}
for value := startValue; value <= endValue; value++ {
list = append(list, value)
}
}
}
return list
}
// init initializes the cache structure
func init() {
getHWThreads :=
func() []int {
globPath := filepath.Join(SYSFS_CPUBASE, "cpu[0-9]*")
regexPath := filepath.Join(SYSFS_CPUBASE, "cpu([[:digit:]]+)")
regex := regexp.MustCompile(regexPath)
// File globbing for hardware threads
files, err := filepath.Glob(globPath)
if err != nil {
cclogger.ComponentError("CCTopology", "init:getHWThreads", err.Error())
return nil
}
hwThreadIDs := make([]int, len(files))
for i, file := range files {
// Extract hardware thread ID
matches := regex.FindStringSubmatch(file)
if len(matches) != 2 {
cclogger.ComponentError("CCTopology", "init:getHWThreads: Failed to extract hardware thread ID from ", file)
return nil
}
// Convert hardware thread ID to int
id, err := strconv.Atoi(matches[1])
if err != nil {
cclogger.ComponentError("CCTopology", "init:getHWThreads: Failed to convert to int hardware thread ID ", matches[1])
return nil
}
hwThreadIDs[i] = id
}
// Sort hardware thread IDs
slices.Sort(hwThreadIDs)
return hwThreadIDs
}
getNumaDomain :=
func(basePath string) int {
globPath := filepath.Join(basePath, "node*")
regexPath := filepath.Join(basePath, "node([[:digit:]]+)")
regex := regexp.MustCompile(regexPath)
// File globbing for NUMA node
files, err := filepath.Glob(globPath)
if err != nil {
cclogger.ComponentError("CCTopology", "init:getNumaDomain", err.Error())
return -1
}
// Check that exactly one NUMA domain was found
if len(files) != 1 {
cclogger.ComponentError("CCTopology", "init:getNumaDomain", "Number of NUMA domains != 1: ", len(files))
return -1
}
// Extract NUMA node ID
matches := regex.FindStringSubmatch(files[0])
if len(matches) != 2 {
cclogger.ComponentError("CCTopology", "init:getNumaDomain", "Failed to extract NUMA node ID from: ", files[0])
return -1
}
id, err := strconv.Atoi(matches[1])
if err != nil {
cclogger.ComponentError("CCTopology", "init:getNumaDomain", "Failed to parse NUMA node ID from: ", matches[1])
return -1
}
return id
}
cache.HwthreadList = getHWThreads()
cache.CoreList = make([]int, len(cache.HwthreadList))
cache.SocketList = make([]int, len(cache.HwthreadList))
cache.DieList = make([]int, len(cache.HwthreadList))
cache.SMTList = make([]int, len(cache.HwthreadList))
cache.NumaDomainList = make([]int, len(cache.HwthreadList))
cache.CpuData = make([]HwthreadEntry, len(cache.HwthreadList))
for i, c := range cache.HwthreadList {
// Set cpuBase directory for topology lookup
cpuBase := filepath.Join(SYSFS_CPUBASE, fmt.Sprintf("cpu%d", c))
topoBase := filepath.Join(cpuBase, "topology")
// Lookup Core ID
cache.CoreList[i] = fileToInt(filepath.Join(topoBase, "core_id"))
// Lookup socket / physical package ID
cache.SocketList[i] = fileToInt(filepath.Join(topoBase, "physical_package_id"))
// Lookup CPU die id
cache.DieList[i] = fileToInt(filepath.Join(topoBase, "die_id"))
if cache.DieList[i] < 0 {
cache.DieList[i] = cache.SocketList[i]
}
// Lookup List of CPUs within the same core
coreCPUsList := fileToList(filepath.Join(topoBase, "core_cpus_list"))
// Find index of CPU ID in List of CPUs within the same core
// if not found return -1
cache.SMTList[i] = slices.Index(coreCPUsList, c)
// Lookup NUMA domain id
cache.NumaDomainList[i] = getNumaDomain(cpuBase)
cache.CpuData[i] =
HwthreadEntry{
CpuID: cache.HwthreadList[i],
SMT: cache.SMTList[i],
CoreCPUsList: coreCPUsList,
Socket: cache.SocketList[i],
NumaDomain: cache.NumaDomainList[i],
Die: cache.DieList[i],
Core: cache.CoreList[i],
}
}
slices.Sort(cache.HwthreadList)
cache.HwthreadList = slices.Compact(cache.HwthreadList)
slices.Sort(cache.SMTList)
cache.SMTList = slices.Compact(cache.SMTList)
slices.Sort(cache.CoreList)
cache.CoreList = slices.Compact(cache.CoreList)
slices.Sort(cache.SocketList)
cache.SocketList = slices.Compact(cache.SocketList)
slices.Sort(cache.DieList)
cache.DieList = slices.Compact(cache.DieList)
slices.Sort(cache.NumaDomainList)
cache.NumaDomainList = slices.Compact(cache.NumaDomainList)
}
// SocketList gets the list of CPU socket IDs
func SocketList() []int {
return slices.Clone(cache.SocketList)
}
// HwthreadList gets the list of hardware thread IDs in the order of listing in /proc/cpuinfo
func HwthreadList() []int {
return slices.Clone(cache.HwthreadList)
}
// Get list of hardware thread IDs in the order of listing in /proc/cpuinfo
// Deprecated: Use HwthreadList() instead
func CpuList() []int {
return HwthreadList()
}
// CoreList gets the list of CPU core IDs in the order of listing in /proc/cpuinfo
func CoreList() []int {
return slices.Clone(cache.CoreList)
}
// Get list of NUMA node IDs
func NumaNodeList() []int {
return slices.Clone(cache.NumaDomainList)
}
// DieList gets the list of CPU die IDs
func DieList() []int {
if len(cache.DieList) > 0 {
return slices.Clone(cache.DieList)
}
return SocketList()
}
// GetTypeList gets the list of specified type using the naming format inside ClusterCockpit
func GetTypeList(topology_type string) []int {
switch topology_type {
case "node":
return []int{0}
case "socket":
return SocketList()
case "die":
return DieList()
case "memoryDomain":
return NumaNodeList()
case "core":
return CoreList()
case "hwthread":
return HwthreadList()
}
return []int{}
}
func GetTypeId(hwt HwthreadEntry, topology_type string) (int, error) {
var err error = nil
switch topology_type {
case "node":
return 0, err
case "socket":
return hwt.Socket, err
case "die":
return hwt.Die, err
case "memoryDomain":
return hwt.NumaDomain, err
case "core":
return hwt.Core, err
case "hwthread":
return hwt.CpuID, err
}
return -1, fmt.Errorf("unknown topology type '%s'", topology_type)
}
// CpuData returns CPU data for each hardware thread
func CpuData() []HwthreadEntry {
// return a deep copy to protect cache data
c := slices.Clone(cache.CpuData)
for i := range c {
c[i].CoreCPUsList = slices.Clone(cache.CpuData[i].CoreCPUsList)
}
return c
}
// Structure holding basic information about a CPU
type CpuInformation struct {
NumHWthreads int
SMTWidth int
NumSockets int
NumDies int
NumCores int
NumNumaDomains int
}
// CpuInfo reports basic information about the CPU
func CpuInfo() CpuInformation {
return CpuInformation{
NumNumaDomains: len(cache.NumaDomainList),
SMTWidth: len(cache.SMTList),
NumDies: len(cache.DieList),
NumCores: len(cache.CoreList),
NumSockets: len(cache.SocketList),
NumHWthreads: len(cache.HwthreadList),
}
}
// GetHwthreadSocket gets the CPU socket ID for a given hardware thread ID
// In case hardware thread ID is not found -1 is returned
func GetHwthreadSocket(cpuID int) int {
for i := range cache.CpuData {
d := &cache.CpuData[i]
if d.CpuID == cpuID {
return d.Socket
}
}
return -1
}
// GetHwthreadNumaDomain gets the NUMA domain ID for a given hardware thread ID
// In case hardware thread ID is not found -1 is returned
func GetHwthreadNumaDomain(cpuID int) int {
for i := range cache.CpuData {
d := &cache.CpuData[i]
if d.CpuID == cpuID {
return d.NumaDomain
}
}
return -1
}
// GetHwthreadDie gets the CPU die ID for a given hardware thread ID
// In case hardware thread ID is not found -1 is returned
func GetHwthreadDie(cpuID int) int {
for i := range cache.CpuData {
d := &cache.CpuData[i]
if d.CpuID == cpuID {
return d.Die
}
}
return -1
}
// GetHwthreadCore gets the CPU core ID for a given hardware thread ID
// In case hardware thread ID is not found -1 is returned
func GetHwthreadCore(cpuID int) int {
for i := range cache.CpuData {
d := &cache.CpuData[i]
if d.CpuID == cpuID {
return d.Core
}
}
return -1
}
// GetSocketHwthreads gets all hardware thread IDs associated with a CPU socket
func GetSocketHwthreads(socket int) []int {
cpuList := make([]int, 0)
for i := range cache.CpuData {
d := &cache.CpuData[i]
if d.Socket == socket {
cpuList = append(cpuList, d.CpuID)
}
}
return cpuList
}
// GetNumaDomainHwthreads gets all hardware thread IDs associated with a NUMA domain
func GetNumaDomainHwthreads(numaDomain int) []int {
cpuList := make([]int, 0)
for i := range cache.CpuData {
d := &cache.CpuData[i]
if d.NumaDomain == numaDomain {
cpuList = append(cpuList, d.CpuID)
}
}
return cpuList
}
// GetDieHwthreads gets all hardware thread IDs associated with a CPU die
func GetDieHwthreads(die int) []int {
cpuList := make([]int, 0)
for i := range cache.CpuData {
d := &cache.CpuData[i]
if d.Die == die {
cpuList = append(cpuList, d.CpuID)
}
}
return cpuList
}
// GetCoreHwthreads gets all hardware thread IDs associated with a CPU core
func GetCoreHwthreads(core int) []int {
cpuList := make([]int, 0)
for i := range cache.CpuData {
d := &cache.CpuData[i]
if d.Core == core {
cpuList = append(cpuList, d.CpuID)
}
}
return cpuList
}
// GetTypeHwthreads gets all hardware thread IDs associated with the given topology type and ID
func GetTypeHwthreads(topology_type string, id int) []int {
switch topology_type {
case "node":
return HwthreadList()
case "socket":
return GetSocketHwthreads(id)
case "die":
return GetDieHwthreads(id)
case "memoryDomain":
return GetNumaDomainHwthreads(id)
case "core":
return GetCoreHwthreads(id)
case "hwthread":
return []int{id}
}
return []int{}
}
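A short usage sketch of the ccTopology package above (import path as used elsewhere in this repository; output format is illustrative):

```go
package main

import (
	"fmt"

	topo "github.com/ClusterCockpit/cc-metric-collector/pkg/ccTopology"
)

func main() {
	// Summary counts for the node
	info := topo.CpuInfo()
	fmt.Printf("sockets=%d dies=%d cores=%d hwthreads=%d numa_domains=%d\n",
		info.NumSockets, info.NumDies, info.NumCores, info.NumHWthreads, info.NumNumaDomains)

	// Map every hardware thread to its core, socket and NUMA domain
	for _, hwt := range topo.HwthreadList() {
		fmt.Printf("hwthread %d -> core %d, socket %d, numa %d\n",
			hwt, topo.GetHwthreadCore(hwt), topo.GetHwthreadSocket(hwt), topo.GetHwthreadNumaDomain(hwt))
	}
}
```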

pkg/hostlist/hostlist.go Normal file
View File

@@ -0,0 +1,125 @@
package hostlist
import (
"fmt"
"regexp"
"sort"
"strconv"
"strings"
)
func Expand(in string) (result []string, err error) {
// Create ranges regular expression
reStNumber := "[[:digit:]]+"
reStRange := reStNumber + "-" + reStNumber
reStOptionalNumberOrRange := "(" + reStNumber + ",|" + reStRange + ",)*"
reStNumberOrRange := "(" + reStNumber + "|" + reStRange + ")"
reStBraceLeft := "[[]"
reStBraceRight := "[]]"
reStRanges := reStBraceLeft +
reStOptionalNumberOrRange +
reStNumberOrRange +
reStBraceRight
reRanges := regexp.MustCompile(reStRanges)
// Create host list regular expression
reStDNSChars := "[a-zA-Z0-9-]+"
reStPrefix := "^(" + reStDNSChars + ")"
reStOptionalSuffix := "(" + reStDNSChars + ")?"
re := regexp.MustCompile(reStPrefix + "([[][0-9,-]+[]])?" + reStOptionalSuffix)
// Remove all delimiters from the input
in = strings.TrimLeft(in, ", ")
for len(in) > 0 {
if v := re.FindStringSubmatch(in); v != nil {
// Remove matched part from the input
lenPrefix := len(v[0])
in = in[lenPrefix:]
// Remove all delimiters from the input
in = strings.TrimLeft(in, ", ")
// matched prefix, range and suffix
hlPrefix := v[1]
hlRanges := v[2]
hlSuffix := v[3]
// Single node without ranges
if hlRanges == "" {
result = append(result, hlPrefix)
continue
}
// Node with ranges
if v := reRanges.FindStringSubmatch(hlRanges); v != nil {
// Remove braces
hlRanges = hlRanges[1 : len(hlRanges)-1]
// Split host ranges at ,
for _, hlRange := range strings.Split(hlRanges, ",") {
// Split host range at -
RangeStartEnd := strings.Split(hlRange, "-")
// Range is only a single number
if len(RangeStartEnd) == 1 {
result = append(result, hlPrefix+RangeStartEnd[0]+hlSuffix)
continue
}
// Range has a start and an end
widthRangeStart := len(RangeStartEnd[0])
widthRangeEnd := len(RangeStartEnd[1])
iStart, _ := strconv.ParseUint(RangeStartEnd[0], 10, 64)
iEnd, _ := strconv.ParseUint(RangeStartEnd[1], 10, 64)
if iStart > iEnd {
return nil, fmt.Errorf("single range start is greater than end: %s", hlRange)
}
// Create print format string for range numbers
doPadding := widthRangeStart == widthRangeEnd
widthPadding := widthRangeStart
var formatString string
if doPadding {
formatString = "%0" + fmt.Sprint(widthPadding) + "d"
} else {
formatString = "%d"
}
formatString = hlPrefix + formatString + hlSuffix
// Add nodes from this range
for i := iStart; i <= iEnd; i++ {
result = append(result, fmt.Sprintf(formatString, i))
}
}
} else {
return nil, fmt.Errorf("not at hostlist range: %s", hlRanges)
}
} else {
return nil, fmt.Errorf("not a hostlist: %s", in)
}
}
if result != nil {
// sort
sort.Strings(result)
// uniq
previous := 1
for current := 1; current < len(result); current++ {
if result[current-1] != result[current] {
if previous != current {
result[previous] = result[current]
}
previous++
}
}
result = result[:previous]
}
return
}
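A minimal usage sketch of hostlist.Expand (import path assumed from the repository layout; the expected output follows the behavior exercised by the tests below):

```go
package main

import (
	"fmt"

	"github.com/ClusterCockpit/cc-metric-collector/pkg/hostlist"
)

func main() {
	// Ranges in brackets are expanded, zero-padding is kept when start and
	// end have the same width, and the result is sorted and de-duplicated.
	hosts, err := hostlist.Expand("n[01-03],m1")
	if err != nil {
		fmt.Println("expand failed:", err)
		return
	}
	fmt.Println(hosts) // [m1 n01 n02 n03]
}
```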

View File

@@ -0,0 +1,126 @@
package hostlist

import (
	"testing"
)

func TestExpand(t *testing.T) {

	// Compare two slices of strings
	equal := func(a, b []string) bool {
		if len(a) != len(b) {
			return false
		}
		for i, v := range a {
			if v != b[i] {
				return false
			}
		}
		return true
	}

	type testDefinition struct {
		input          string
		resultExpected []string
		errorExpected  bool
	}

	expandTests := []testDefinition{
		{
			// Single node
			input:          "n1",
			resultExpected: []string{"n1"},
			errorExpected:  false,
		},
		{
			// Single node, duplicated
			input:          "n1,n1",
			resultExpected: []string{"n1"},
			errorExpected:  false,
		},
		{
			// Single node with padding
			input:          "n[01]",
			resultExpected: []string{"n01"},
			errorExpected:  false,
		},
		{
			// Single node with suffix
			input:          "n[01]-p",
			resultExpected: []string{"n01-p"},
			errorExpected:  false,
		},
		{
			// Multiple nodes with a single range
			input:          "n[1-2]",
			resultExpected: []string{"n1", "n2"},
			errorExpected:  false,
		},
		{
			// Multiple nodes with a single range and a single index
			input:          "n[1-2,3]",
			resultExpected: []string{"n1", "n2", "n3"},
			errorExpected:  false,
		},
		{
			// Multiple nodes with different prefixes
			input:          "n[1-2],m[1,2]",
			resultExpected: []string{"m1", "m2", "n1", "n2"},
			errorExpected:  false,
		},
		{
			// Multiple nodes with different suffixes
			input:          "n[1-2]-p,n[1,2]-q",
			resultExpected: []string{"n1-p", "n1-q", "n2-p", "n2-q"},
			errorExpected:  false,
		},
		{
			// Multiple nodes with and without node ranges
			input:          " n09, n[01-04,06-07,09] , , n10,n04",
			resultExpected: []string{"n01", "n02", "n03", "n04", "n06", "n07", "n09", "n10"},
			errorExpected:  false,
		},
		{
			// Forbidden DNS character
			input:          "n@",
			resultExpected: []string{},
			errorExpected:  true,
		},
		{
			// Forbidden range
			input:          "n[1-2-2,3]",
			resultExpected: []string{},
			errorExpected:  true,
		},
		{
			// Forbidden range limits
			input:          "n[2-1]",
			resultExpected: []string{},
			errorExpected:  true,
		},
	}

	for _, expandTest := range expandTests {
		result, err := Expand(expandTest.input)

		hasError := err != nil
		if hasError != expandTest.errorExpected && hasError {
			t.Errorf("Expand('%s') failed: unexpected error '%v'",
				expandTest.input, err)
			continue
		}
		if hasError != expandTest.errorExpected && !hasError {
			t.Errorf("Expand('%s') did not fail as expected: got result '%+v'",
				expandTest.input, result)
			continue
		}
		if !hasError && !equal(result, expandTest.resultExpected) {
			t.Errorf("Expand('%s') failed: got result '%+v', expected result '%v'",
				expandTest.input, result, expandTest.resultExpected)
			continue
		}
		t.Logf("Checked hostlist.Expand('%s'): result = '%+v', err = '%v'",
			expandTest.input, result, err)
	}
}

View File

@@ -3,7 +3,7 @@ package multiChanTicker
import (
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
cclog "github.com/ClusterCockpit/cc-lib/ccLogger"
)
type multiChanTicker struct {

View File

@@ -1,8 +1,44 @@
{
"natsrecv" : {
"natsrecv": {
"type": "nats",
"address": "nats://my-url",
"port" : "4222",
"port": "4222",
"database": "testcluster"
},
"redfish_recv": {
"type": "redfish",
"endpoint": "https://%h-bmc",
"client_config": [
{
"host_list": "my-host-1-[1-2]",
"username": "username-1",
"password": "password-1"
},
{
"host_list": "my-host-2-[1,2]",
"username": "username-2",
"password": "password-2"
}
]
},
"ipmi_recv": {
"type": "ipmi",
"endpoint": "ipmi-sensors://%h-ipmi",
"exclude_metrics": [
"fan_speed",
"voltage"
],
"client_config": [
{
"username": "username-1",
"password": "password-1",
"host_list": "my-host-1-[1-2]"
},
{
"username": "username-2",
"password": "password-2",
"host_list": "my-host-2-[1,2]"
}
]
}
}
}
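
The `host_list` entries above use the same bracket syntax that the new `pkg/hostlist` package expands. As a rough sketch of how such an entry could be combined with the `%h` endpoint template into per-host URLs: the `%h` substitution and the `endpointsFor` helper below are illustrative assumptions, not the receiver's actual code, and the import path follows the repository's module layout.

```go
package main

import (
	"fmt"
	"strings"

	// assumed import path, following the repository's module layout
	"github.com/ClusterCockpit/cc-metric-collector/pkg/hostlist"
)

// endpointsFor is a hypothetical helper: it expands a host_list entry and
// substitutes each host name into the endpoint template in place of %h.
func endpointsFor(endpointTemplate, hostList string) ([]string, error) {
	hosts, err := hostlist.Expand(hostList)
	if err != nil {
		return nil, err
	}
	endpoints := make([]string, 0, len(hosts))
	for _, h := range hosts {
		endpoints = append(endpoints, strings.ReplaceAll(endpointTemplate, "%h", h))
	}
	return endpoints, nil
}

func main() {
	eps, err := endpointsFor("https://%h-bmc", "my-host-1-[1-2]")
	if err != nil {
		panic(err)
	}
	fmt.Println(eps) // [https://my-host-1-1-bmc https://my-host-1-2-bmc]
}
```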

View File

@@ -1,44 +0,0 @@
# CCMetric receivers
This folder contains the ReceiveManager and receiver implementations for the cc-metric-collector.
# Configuration
The configuration file for the receivers is a list of configurations. The `type` field in each specifies which receiver to initialize.
```json
[
{
"type": "nats",
"address": "nats://my-url",
"port" : "4222",
"database": "testcluster"
}
]
```
## Type `nats`
```json
{
"type": "nats",
"address": "<nats-URI or hostname>",
"port" : "<portnumber>",
"database": "<subscribe topic>"
}
```
The `nats` receiver subscribes to the topic `database` and listens on `address` and `port` for metrics in the InfluxDB line protocol.
# Contributing own receivers
A receiver is derived from the type `Receiver` (in `metricReceiver.go`) and implements the following functions:
* `Init(name string, config json.RawMessage) error`
* `Start()`
* `Close()`
* `Name() string`
* `SetSink(sink chan lp.CCMetric)`
The data structures should be set up in `Init()`, e.g. by opening a file or a server connection. The `Start()` function should either start a goroutine or use some other asynchronous mechanism for receiving metrics. The `Close()` function should tear down anything created in `Init()`.
Finally, the receiver needs to be registered in `receiveManager.go`. There is a map of available receivers called `AvailableReceivers` (`receiver_type_string` -> pointer to the receiver). Add a new entry with a descriptive type name and the new receiver.
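
A minimal sketch of a new receiver against the `Receiver` interface from `metricReceiver.go` (shown further below); `SampleReceiver` and its config struct are hypothetical names, and the method bodies only mark where the actual receiving logic would go.

```go
package receivers

import (
	"encoding/json"

	cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
)

type SampleReceiverConfig struct {
	Type string `json:"type"`
}

type SampleReceiver struct {
	receiver // embedded helper type, already provides Name() and SetSink()
	config   SampleReceiverConfig
}

func (r *SampleReceiver) Init(name string, config json.RawMessage) error {
	r.typename = "SampleReceiver"
	r.name = name
	if len(config) > 0 {
		if err := json.Unmarshal(config, &r.config); err != nil {
			cclog.ComponentError(r.name, "Error reading config:", err.Error())
			return err
		}
	}
	return nil
}

func (r *SampleReceiver) Start() {
	cclog.ComponentDebug(r.name, "START")
	// start a goroutine (or register a callback) that pushes metrics into r.sink
}

func (r *SampleReceiver) Close() {
	cclog.ComponentDebug(r.name, "CLOSE")
	// tear down whatever Init()/Start() created
}
```

Registration then amounts to adding an entry such as `"sample": &SampleReceiver{}` to the `AvailableReceivers` map in `receiveManager.go`.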

View File

@@ -1,42 +0,0 @@
package receivers
import (
// "time"
"encoding/json"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
)
type defaultReceiverConfig struct {
Type string `json:"type"`
}
type ReceiverConfig struct {
Addr string `json:"address"`
Port string `json:"port"`
Database string `json:"database"`
Organization string `json:"organization,omitempty"`
Type string `json:"type"`
}
type receiver struct {
typename string
name string
sink chan lp.CCMetric
}
type Receiver interface {
Init(name string, config json.RawMessage) error
Start()
Close()
Name() string
SetSink(sink chan lp.CCMetric)
}
func (r *receiver) Name() string {
return r.name
}
func (r *receiver) SetSink(sink chan lp.CCMetric) {
r.sink = sink
}

View File

@@ -1,93 +0,0 @@
package receivers
import (
"encoding/json"
"errors"
"fmt"
"time"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
influx "github.com/influxdata/line-protocol"
nats "github.com/nats-io/nats.go"
)
type NatsReceiverConfig struct {
Type string `json:"type"`
Addr string `json:"address"`
Port string `json:"port"`
Subject string `json:"subject"`
}
type NatsReceiver struct {
receiver
nc *nats.Conn
handler *influx.MetricHandler
parser *influx.Parser
meta map[string]string
config NatsReceiverConfig
}
var DefaultTime = func() time.Time {
return time.Unix(42, 0)
}
func (r *NatsReceiver) Init(name string, config json.RawMessage) error {
r.typename = "NatsReceiver"
r.name = name
r.config.Addr = nats.DefaultURL
r.config.Port = "4222"
if len(config) > 0 {
err := json.Unmarshal(config, &r.config)
if err != nil {
cclog.ComponentError(r.name, "Error reading config:", err.Error())
return err
}
}
if len(r.config.Addr) == 0 ||
len(r.config.Port) == 0 ||
len(r.config.Subject) == 0 {
return errors.New("not all configuration variables set required by NatsReceiver")
}
r.meta = map[string]string{"source": r.name}
uri := fmt.Sprintf("%s:%s", r.config.Addr, r.config.Port)
cclog.ComponentDebug(r.name, "INIT", uri, "Subject", r.config.Subject)
nc, err := nats.Connect(uri)
if err == nil {
r.nc = nc
} else {
r.nc = nil
return err
}
r.handler = influx.NewMetricHandler()
r.parser = influx.NewParser(r.handler)
r.parser.SetTimeFunc(DefaultTime)
return err
}
func (r *NatsReceiver) Start() {
cclog.ComponentDebug(r.name, "START")
r.nc.Subscribe(r.config.Subject, r._NatsReceive)
}
func (r *NatsReceiver) _NatsReceive(m *nats.Msg) {
metrics, err := r.parser.Parse(m.Data)
if err == nil {
for _, m := range metrics {
y := lp.FromInfluxMetric(m)
for k, v := range r.meta {
y.AddMeta(k, v)
}
if r.sink != nil {
r.sink <- y
}
}
}
}
func (r *NatsReceiver) Close() {
if r.nc != nil {
cclog.ComponentDebug(r.name, "CLOSE")
r.nc.Close()
}
}

View File

@@ -1,113 +0,0 @@
package receivers
import (
"encoding/json"
"os"
"sync"
cclog "github.com/ClusterCockpit/cc-metric-collector/internal/ccLogger"
lp "github.com/ClusterCockpit/cc-metric-collector/internal/ccMetric"
)
var AvailableReceivers = map[string]Receiver{
"nats": &NatsReceiver{},
}
type receiveManager struct {
inputs []Receiver
output chan lp.CCMetric
done chan bool
wg *sync.WaitGroup
config []json.RawMessage
}
type ReceiveManager interface {
Init(wg *sync.WaitGroup, receiverConfigFile string) error
AddInput(name string, rawConfig json.RawMessage) error
AddOutput(output chan lp.CCMetric)
Start()
Close()
}
func (rm *receiveManager) Init(wg *sync.WaitGroup, receiverConfigFile string) error {
rm.inputs = make([]Receiver, 0)
rm.output = nil
rm.done = make(chan bool)
rm.wg = wg
rm.config = make([]json.RawMessage, 0)
configFile, err := os.Open(receiverConfigFile)
if err != nil {
cclog.ComponentError("ReceiveManager", err.Error())
return err
}
defer configFile.Close()
jsonParser := json.NewDecoder(configFile)
var rawConfigs map[string]json.RawMessage
err = jsonParser.Decode(&rawConfigs)
if err != nil {
cclog.ComponentError("ReceiveManager", err.Error())
return err
}
for name, raw := range rawConfigs {
rm.AddInput(name, raw)
}
return nil
}
func (rm *receiveManager) Start() {
rm.wg.Add(1)
for _, r := range rm.inputs {
cclog.ComponentDebug("ReceiveManager", "START", r.Name())
r.Start()
}
cclog.ComponentDebug("ReceiveManager", "STARTED")
}
func (rm *receiveManager) AddInput(name string, rawConfig json.RawMessage) error {
var config defaultReceiverConfig
err := json.Unmarshal(rawConfig, &config)
if err != nil {
cclog.ComponentError("ReceiveManager", "SKIP", config.Type, "JSON config error:", err.Error())
return err
}
if _, found := AvailableReceivers[config.Type]; !found {
cclog.ComponentError("ReceiveManager", "SKIP", config.Type, "unknown receiver:", err.Error())
return err
}
r := AvailableReceivers[config.Type]
err = r.Init(name, rawConfig)
if err != nil {
cclog.ComponentError("ReceiveManager", "SKIP", r.Name(), "initialization failed:", err.Error())
return err
}
rm.inputs = append(rm.inputs, r)
rm.config = append(rm.config, rawConfig)
cclog.ComponentDebug("ReceiveManager", "ADD RECEIVER", r.Name())
return nil
}
func (rm *receiveManager) AddOutput(output chan lp.CCMetric) {
rm.output = output
for _, r := range rm.inputs {
r.SetSink(rm.output)
}
}
func (rm *receiveManager) Close() {
for _, r := range rm.inputs {
cclog.ComponentDebug("ReceiveManager", "CLOSE", r.Name())
r.Close()
}
rm.wg.Done()
cclog.ComponentDebug("ReceiveManager", "CLOSE")
}
func New(wg *sync.WaitGroup, receiverConfigFile string) (ReceiveManager, error) {
r := &receiveManager{}
err := r.Init(wg, receiverConfigFile)
if err != nil {
return nil, err
}
return r, err
}

View File

@@ -1,22 +1,23 @@
{
"add_tags" : [
{
"key" : "cluster",
"value" : "testcluster",
"if" : "*"
},
{
"key" : "test",
"value" : "testing",
"if" : "name == 'temp_package_id_0'"
}
],
"delete_tags" : [
{
"key" : "unit",
"value" : "*",
"if" : "*"
}
],
"process_messages" : {
"add_tag_if": [
{
"key" : "cluster",
"value" : "testcluster",
"if" : "true"
},
{
"key" : "test",
"value" : "testing",
"if" : "name == 'temp_package_id_0'"
}
],
"delete_tag_if": [
{
"key" : "unit",
"if" : "true"
}
]
},
"interval_timestamp" : true
}

View File

@@ -6,7 +6,7 @@ CC_HOME=/tmp
LOG_DIR=/var/log
DATA_DIR=/var/lib/grafana
DATA_DIR=/var/lib/cc-metric-collector
MAX_OPEN_FILES=10000
@@ -15,3 +15,9 @@ CONF_DIR=/etc/cc-metric-collector
CONF_FILE=/etc/cc-metric-collector/cc-metric-collector.json
RESTART_ON_UPGRADE=true
# Golang runtime debugging. (see: https://pkg.go.dev/runtime)
# GODEBUG=gctrace=1
# Golang garbage collection target percentage
# GOGC=100

View File

@@ -0,0 +1,12 @@
Package: cc-metric-collector
Version: {VERSION}
Installed-Size: {INSTALLED_SIZE}
Architecture: {ARCH}
Maintainer: thomas.gruber@fau.de
Depends: libc6 (>= 2.2.1)
Build-Depends: debhelper-compat (= 13), git, golang-go
Description: Metric collection daemon from the ClusterCockpit suite
Homepage: https://github.com/ClusterCockpit/cc-metric-collector
Source: cc-metric-collector
Rules-Requires-Root: no

View File

@@ -0,0 +1,16 @@
#!/usr/bin/make -f
# You must remove unused comment lines for the released package.
#export DH_VERBOSE = 1
#export DEB_BUILD_MAINT_OPTIONS = hardening=+all
#export DEB_CFLAGS_MAINT_APPEND = -Wall -pedantic
#export DEB_LDFLAGS_MAINT_APPEND = -Wl,--as-needed
%:
dh $@
override_dh_auto_build:
make
override_dh_auto_install:
make PREFIX=/usr install

View File

@@ -19,13 +19,13 @@
PATH=/bin:/usr/bin:/sbin:/usr/sbin
NAME=cc-metric-collector
DESC="ClusterCockpit metric collector"
DEFAULT=/etc/default/${NAME}.json
DEFAULT=/etc/default/${NAME}
CC_USER=clustercockpit
CC_GROUP=clustercockpit
CONF_DIR=/etc/cc-metric-collector
PID_FILE=/var/run/$NAME.pid
DAEMON=/usr/sbin/$NAME
DAEMON=/usr/bin/$NAME
CONF_FILE=${CONF_DIR}/cc-metric-collector.json
umask 0027
@@ -75,7 +75,7 @@ case "$1" in
fi
# Start Daemon
start-stop-daemon --start -b --chdir "$WORK_DIR" --user "$CC_USER" -c "$CC_USER" --pidfile "$PID_FILE" --exec $DAEMON -- $DAEMON_OPTS
start-stop-daemon --start -b --chdir "$WORK_DIR" --user "$CC_USER" -c "$CC_USER" --pidfile "$PID_FILE" --exec $DAEMON -- $CC_OPTS
return=$?
if [ $return -eq 0 ]
then

View File

@@ -3,7 +3,6 @@ Description=ClusterCockpit metric collector
Documentation=https://github.com/ClusterCockpit/cc-metric-collector
Wants=network-online.target
After=network-online.target
After=postgresql.service mariadb.service mysql.service
[Service]
EnvironmentFile=/etc/default/cc-metric-collector
@@ -14,7 +13,7 @@ Restart=on-failure
WorkingDirectory=/tmp
RuntimeDirectory=cc-metric-collector
RuntimeDirectoryMode=0750
ExecStart=/usr/sbin/cc-metric-collector --config=${CONF_FILE}
ExecStart=/usr/bin/cc-metric-collector --config=${CONF_FILE}
LimitNOFILE=10000
TimeoutStopSec=20
UMask=0027

View File

@@ -1,5 +1,5 @@
Name: cc-metric-collector
Version: 0.2
Version: %{VERS}
Release: 1%{?dist}
Summary: Metric collection daemon from the ClusterCockpit suite
@@ -7,8 +7,11 @@ License: MIT
Source0: %{name}-%{version}.tar.gz
BuildRequires: go-toolset
# for internal LIKWID installation
BuildRequires: wget perl-Data-Dumper
BuildRequires: systemd-rpm-macros
# for header downloads
BuildRequires: wget
# Recommended when using the sysusers_create_package macro
Requires(pre): /usr/bin/systemd-sysusers
Provides: %{name} = %{version}
@@ -26,7 +29,7 @@ make
%install
install -Dpm 0750 %{name} %{buildroot}%{_sbindir}/%{name}
install -Dpm 0750 %{name} %{buildroot}%{_bindir}/%{name}
install -Dpm 0600 config.json %{buildroot}%{_sysconfdir}/%{name}/%{name}.json
install -Dpm 0600 collectors.json %{buildroot}%{_sysconfdir}/%{name}/collectors.json
install -Dpm 0600 sinks.json %{buildroot}%{_sysconfdir}/%{name}/sinks.json
@@ -34,11 +37,15 @@ install -Dpm 0600 receivers.json %{buildroot}%{_sysconfdir}/%{name}/receivers.js
install -Dpm 0600 router.json %{buildroot}%{_sysconfdir}/%{name}/router.json
install -Dpm 0644 scripts/%{name}.service %{buildroot}%{_unitdir}/%{name}.service
install -Dpm 0600 scripts/%{name}.config %{buildroot}%{_sysconfdir}/default/%{name}
install -Dpm 0644 scripts/%{name}.sysusers %{buildroot}%{_sysusersdir}/%{name}.conf
%check
# go test should be here... :)
%pre
%sysusers_create_package %{name} scripts/%{name}.sysusers
%post
%systemd_post %{name}.service
@@ -46,17 +53,23 @@ install -Dpm 0600 scripts/%{name}.config %{buildroot}%{_sysconfdir}/default/%{na
%systemd_preun %{name}.service
%files
# Binary
%attr(-,clustercockpit,clustercockpit) %{_bindir}/%{name}
# Config
%dir %{_sysconfdir}/%{name}
%{_sbindir}/%{name}
%attr(0600,clustercockpit,clustercockpit) %config(noreplace) %{_sysconfdir}/%{name}/%{name}.json
%attr(0600,clustercockpit,clustercockpit) %config(noreplace) %{_sysconfdir}/%{name}/collectors.json
%attr(0600,clustercockpit,clustercockpit) %config(noreplace) %{_sysconfdir}/%{name}/sinks.json
%attr(0600,clustercockpit,clustercockpit) %config(noreplace) %{_sysconfdir}/%{name}/receivers.json
%attr(0600,clustercockpit,clustercockpit) %config(noreplace) %{_sysconfdir}/%{name}/router.json
# Systemd
%{_unitdir}/%{name}.service
%{_sysconfdir}/default/%{name}
%attr(0600,root,root) %config(noreplace) %{_sysconfdir}/%{name}/%{name}.json
%attr(0600,root,root) %config(noreplace) %{_sysconfdir}/%{name}/collectors.json
%attr(0600,root,root) %config(noreplace) %{_sysconfdir}/%{name}/sinks.json
%attr(0600,root,root) %config(noreplace) %{_sysconfdir}/%{name}/receivers.json
%attr(0600,root,root) %config(noreplace) %{_sysconfdir}/%{name}/router.json
%{_sysusersdir}/%{name}.conf
%changelog
* Thu Mar 03 2022 Thomas Gruber - 0.3
- Add clustercockpit user installation
* Mon Feb 14 2022 Thomas Gruber - 0.2
- Add component specific configuration files
- Add %attr to config files

View File

@@ -0,0 +1,2 @@
#Type Name ID GECOS Home directory Shell
u clustercockpit - "User for ClusterCockpit" /run/cc-metric-collector /sbin/nologin

scripts/generate_docs.sh Executable file
View File

@@ -0,0 +1,175 @@
#!/bin/bash -l
SRCDIR="$(pwd)"
DESTDIR="$1"
if [ -z "$DESTDIR" ]; then
echo "Destination folder not provided"
exit 1
fi
COLLECTORS=$(find "${SRCDIR}/collectors" -name "*Metric.md")
SINKS=$(find "${SRCDIR}/sinks" -name "*Sink.md")
RECEIVERS=$(find "${SRCDIR}/receivers" -name "*Receiver.md")
# Collectors
mkdir -p "${DESTDIR}/collectors"
for F in $COLLECTORS; do
echo "$F"
FNAME=$(basename "$F")
TITLE=$(grep -E "^##" "$F" | head -n 1 | sed -e 's+## ++g')
echo "'${TITLE//\`/}'"
if [ "${TITLE}" == "" ]; then continue; fi
rm --force "${DESTDIR}/collectors/${FNAME}"
cat << EOF >> "${DESTDIR}/collectors/${FNAME}"
---
title: ${TITLE//\`/}
description: >
Toplevel ${FNAME/.md/}
categories: [cc-metric-collector]
tags: [cc-metric-collector, Collector, ${FNAME/Metric.md/}]
weight: 2
---
EOF
cat "$F" >> "${DESTDIR}/collectors/${FNAME}"
done
if [ -e "${SRCDIR}/collectors/README.md" ]; then
cat << EOF > "${DESTDIR}/collectors/_index.md"
---
title: cc-metric-collector's collectors
description: Documentation of cc-metric-collector's collectors
categories: [cc-metric-collector]
tags: [cc-metric-collector, Collector, General]
weight: 40
---
EOF
cat "${SRCDIR}/collectors/README.md" >> "${DESTDIR}/collectors/_index.md"
fi
# Sinks
mkdir -p "${DESTDIR}/sinks"
for F in $SINKS; do
echo "$F"
FNAME=$(basename "$F")
TITLE=$(grep -E "^##" "$F" | head -n 1 | sed -e 's+## ++g')
echo "'${TITLE//\`/}'"
if [ "${TITLE}" == "" ]; then continue; fi
rm --force "${DESTDIR}/sinks/${FNAME}"
cat << EOF >> "${DESTDIR}/sinks/${FNAME}"
---
title: ${TITLE//\`/}
description: >
Toplevel ${FNAME/.md/}
categories: [cc-metric-collector]
tags: [cc-metric-collector, Sink, ${FNAME/Sink.md/}]
weight: 2
---
EOF
cat "$F" >> "${DESTDIR}/sinks/${FNAME}"
done
if [ -e "${SRCDIR}/collectors/README.md" ]; then
cat << EOF > "${DESTDIR}/sinks/_index.md"
---
title: cc-metric-collector's sinks
description: Documentation of cc-metric-collector's sinks
categories: [cc-metric-collector]
tags: [cc-metric-collector, Sink, General]
weight: 40
---
EOF
cat "${SRCDIR}/sinks/README.md" >> "${DESTDIR}/sinks/_index.md"
fi
# Receivers
mkdir -p "${DESTDIR}/receivers"
for F in $RECEIVERS; do
echo "$F"
FNAME=$(basename "$F")
TITLE=$(grep -E "^##" "$F" | head -n 1 | sed -e 's+## ++g')
echo "'${TITLE//\`/}'"
if [ "${TITLE}" == "" ]; then continue; fi
rm --force "${DESTDIR}/receivers/${FNAME}"
cat << EOF >> "${DESTDIR}/receivers/${FNAME}"
---
title: ${TITLE//\`/}
description: >
Toplevel ${FNAME/.md/}
categories: [cc-metric-collector]
tags: [cc-metric-collector, Receiver, ${FNAME/Receiver.md/}]
weight: 2
---
EOF
cat "$F" >> "${DESTDIR}/receivers/${FNAME}"
done
if [ -e "${SRCDIR}/receivers/README.md" ]; then
cat << EOF > "${DESTDIR}/receivers/_index.md"
---
title: cc-metric-collector's receivers
description: Documentation of cc-metric-collector's receivers
categories: [cc-metric-collector]
tags: [cc-metric-collector, Receiver, General]
weight: 40
---
EOF
cat "${SRCDIR}/receivers/README.md" >> "${DESTDIR}/receivers/_index.md"
fi
mkdir -p "${DESTDIR}/internal/metricRouter"
if [ -e "${SRCDIR}/internal/metricRouter/README.md" ]; then
cat << EOF > "${DESTDIR}/internal/metricRouter/_index.md"
---
title: cc-metric-collector's router
description: Documentation of cc-metric-collector's router
categories: [cc-metric-collector]
tags: [cc-metric-collector, Router, General]
weight: 40
---
EOF
cat "${SRCDIR}/internal/metricRouter/README.md" >> "${DESTDIR}/internal/metricRouter/_index.md"
fi
if [ -e "${SRCDIR}/README.md" ]; then
cat << EOF > "${DESTDIR}/_index.md"
---
title: cc-metric-collector
description: Documentation of cc-metric-collector
categories: [cc-metric-collector]
tags: [cc-metric-collector, General]
weight: 40
---
EOF
cat "${SRCDIR}/README.md" >> "${DESTDIR}/_index.md"
sed -i -e 's+README.md+_index.md+g' "${DESTDIR}/_index.md"
fi
mkdir -p "${DESTDIR}/pkg/messageProcessor"
if [ -e "${SRCDIR}/pkg/messageProcessor/README.md" ]; then
cat << EOF > "${DESTDIR}/pkg/messageProcessor/_index.md"
---
title: cc-metric-collector's message processor
description: Documentation of cc-metric-collector's message processor
categories: [cc-metric-collector]
tags: [cc-metric-collector, Message Processor]
weight: 40
---
EOF
cat "${SRCDIR}/pkg/messageProcessor/README.md" >> "${DESTDIR}/pkg/messageProcessor/_index.md"
fi

View File

@@ -45,7 +45,7 @@ def group_to_json(groupfile):
if "PWR" in calc:
scope = "socket"
m = {"name" : metric, "calc": calc, "scope" : scope, "publish" : True}
m = {"name" : metric, "calc": calc, "type" : scope, "publish" : True}
metrics.append(m)
return {"events" : events, "metrics" : metrics}

View File

@@ -1,6 +1,8 @@
{
"mystdout" : {
"type" : "stdout",
"meta_as_tags" : true
"mystdout": {
"type": "stdout",
"meta_as_tags": [
"unit"
]
}
}
}

Some files were not shown because too many files have changed in this diff.