Compare commits


71 Commits

Author SHA1 Message Date
Jan Eitzinger
08e323ba51
Merge pull request #390 from ClusterCockpit/dependabot/go_modules/golang.org/x/net-0.38.0
Bump golang.org/x/net from 0.36.0 to 0.38.0
2025-05-13 14:12:44 +02:00
dependabot[bot]
9f50f36b1d
Bump golang.org/x/net from 0.36.0 to 0.38.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.36.0 to 0.38.0.
- [Commits](https://github.com/golang/net/compare/v0.36.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-13 12:10:40 +00:00
Jan Eitzinger
f65e122f8d
Merge pull request #386 from ClusterCockpit/hotfix
Prepare re-release for v1.4.4
2025-04-28 10:18:44 +02:00
161f0744aa fix: enforce apiAllowedIPs config option
Fixes #385
2025-04-28 09:54:22 +02:00
95de9ad3b3 Merge branch 'hotfix' of github.com:ClusterCockpit/cc-backend into hotfix 2025-04-28 08:52:27 +02:00
Jan Eitzinger
d5c170055f
Merge pull request #384 from brinkcoder/fix/auth-log-iperr
[BUGFIX] correct wrong variable in AuthApi error logging
2025-04-28 08:51:42 +02:00
brinkcoder
61f0521072 fix: correct logging variable from err to ipErr in AuthApi 2025-04-25 22:37:16 +02:00
Christoph Kluge
6ca14c55f2 fix: fix error in jobsMetricStatisticsHistogram calculation
- also reduces overhead, simplifies query
2025-04-25 18:09:21 +02:00
Jan Eitzinger
1309d09aee
Merge pull request #383 from ClusterCockpit/hotfix
Remove websocket sse GraphQL support
2025-04-24 12:59:34 +02:00
aba75b3a19 Remove websocket sse GraphQL support 2025-04-24 12:57:37 +02:00
Jan Eitzinger
e87481d8db
Merge pull request #382 from ClusterCockpit/hotfix
Prepare Bugfix Release 1.4.4
2025-04-24 11:46:25 +02:00
acaad69917 Prepare Bugfix Release 1.4.4 2025-04-24 11:42:34 +02:00
Jan Eitzinger
ff588ad57a
Merge pull request #381 from ClusterCockpit/dev
Dev
2025-04-24 11:18:55 +02:00
65df27154c Cleanup and regenerate Swagger docs 2025-04-24 11:14:51 +02:00
8dfa1957f4 Merge hotfix changes 2025-04-24 11:07:02 +02:00
570eba3794 Cleanup Swagger docs 2025-04-24 11:01:13 +02:00
94a39fc61f Readd tag endpoints 2025-04-24 10:53:55 +02:00
2d359e5f99 Merge rest.go 2025-04-24 10:40:03 +02:00
Jan Eitzinger
04692e0c44
Merge pull request #379 from ClusterCockpit/add_tag_delete
Add Tag Deletion: API and Frontend
2025-04-24 10:09:51 +02:00
Jan Eitzinger
809fd23b88
Merge pull request #380 from ClusterCockpit/review_api_auth
Review api auth
2025-04-24 10:08:18 +02:00
Christoph Kluge
e3653daea3 reduce code in tag svelte view 2025-04-23 17:59:26 +02:00
Christoph Kluge
48fa75386c feat: add tag removal api endpoints 2025-04-23 16:12:56 +02:00
Christoph Kluge
1b3a12a4dc feat: add remove functionality to tag view, add confirm alert 2025-04-23 15:01:12 +02:00
Christoph Kluge
543ddf540e implement removeTagFromList mutation, add tag mutation access checks 2025-04-23 14:51:01 +02:00
Christoph Kluge
a3fb471546 adapt and improve svelte taglist component 2025-04-22 17:33:17 +02:00
Christoph Kluge
277f964b30 move taglist from go tmpl to svelte component 2025-04-22 13:47:25 +02:00
Christoph Kluge
9bcf7adb67 add api calls for removing tags, initial branch commit 2025-04-17 17:31:59 +02:00
Christoph Kluge
f343fa0071 fix: add name scrambling demo mode to all views
- was missing for analysis, status and nodelist
2025-04-17 11:15:35 +02:00
Christoph Kluge
e5862e9218 Merge branch 'dev' of https://github.com/ClusterCockpit/cc-backend into dev 2025-04-16 18:36:15 +02:00
Christoph Kluge
29ae2423f8 fix metricconfig pointer copy, add disabled metric card in jobView
- skips disabled metrics in backend, see cc-backend tries to retrieve "removed" metrics #377
2025-04-16 18:36:12 +02:00
Christoph Kluge
1755a4a7df remove separate userapiallowedips config and check 2025-04-14 11:58:42 +02:00
Christoph Kluge
25d3325049 add getUsers to admin REST api 2025-04-14 11:36:03 +02:00
Christoph Kluge
fb6a4c3b87 review and move api endpoints secured check 2025-04-09 16:00:27 +02:00
317f80a984 fix: Replace deprecated gqlgen NewDefaultServer call 2025-04-09 09:40:52 +02:00
28cdc1d9e5 fix: Update endpoints in Swagger UI 2025-04-09 09:13:21 +02:00
c2087b15d5 Merge branch 'dev' of github.com:ClusterCockpit/cc-backend into dev 2025-04-09 07:28:02 +02:00
a8d785beb3 Remove redundant check in auth package 2025-04-09 07:27:59 +02:00
Christoph Kluge
a6784b5549 fix: reintroduce statstable id natural sort order
- see Use natural sort order for IDs in statistics tables #369
2025-04-08 16:00:07 +02:00
Christoph Kluge
d770292be8 feat: add nodename matcher select to filter, defaults to equal match
- see PR !353
2025-04-08 14:52:07 +02:00
Christoph Kluge
b3a1037ade
Merge pull request #353 from brinkcoder/fix-node-filter
Fix node filter to use EXISTS for exact hostname matches
2025-04-08 12:57:04 +02:00
Christoph Kluge
02946cf0b4 fix: fix nodelist filter result displaying wrong information
- missing svelte iteration key added
2025-04-07 17:03:23 +02:00
Christoph Kluge
cf051d5108
Merge pull request #375 from ClusterCockpit/master
Dependabot Update Dev Branch
2025-04-07 16:09:31 +02:00
Christoph Kluge
96977c6183
Merge pull request #374 from ClusterCockpit/review_logging
Review logging
2025-04-07 16:03:48 +02:00
Jan Eitzinger
73d83164fc
Merge pull request #373 from ClusterCockpit/dependabot/go_modules/golang.org/x/net-0.36.0
Bump golang.org/x/net from 0.35.0 to 0.36.0
2025-04-04 11:05:01 +02:00
dependabot[bot]
1064f5e4a8
Bump golang.org/x/net from 0.35.0 to 0.36.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.35.0 to 0.36.0.
- [Commits](https://github.com/golang/net/compare/v0.35.0...v0.36.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.36.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-04 09:01:59 +00:00
Jan Eitzinger
5be98c7087
Merge pull request #372 from ClusterCockpit/dependabot/npm_and_yarn/web/frontend/babel/runtime-7.27.0
Bump @babel/runtime from 7.26.0 to 7.27.0 in /web/frontend
2025-04-04 10:55:34 +02:00
dependabot[bot]
0d689c7dff
Bump @babel/runtime from 7.26.0 to 7.27.0 in /web/frontend
Bumps [@babel/runtime](https://github.com/babel/babel/tree/HEAD/packages/babel-runtime) from 7.26.0 to 7.27.0.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.27.0/packages/babel-runtime)

---
updated-dependencies:
- dependency-name: "@babel/runtime"
  dependency-version: 7.27.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-04 08:45:16 +00:00
Jan Eitzinger
1f24ed46a0
Merge pull request #371 from ClusterCockpit/dependabot/go_modules/github.com/golang-jwt/jwt/v5-5.2.2
Bump github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2
2025-04-04 10:37:18 +02:00
dependabot[bot]
92b4159f9e
Bump github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2
Bumps [github.com/golang-jwt/jwt/v5](https://github.com/golang-jwt/jwt) from 5.2.1 to 5.2.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v5.2.1...v5.2.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v5
  dependency-version: 5.2.2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-04 08:35:15 +00:00
Jan Eitzinger
5817b41e29
Merge pull request #368 from ClusterCockpit/dev
Dev
2025-03-20 13:02:23 +01:00
d6b132e3a6 Merge branch 'master' into dev 2025-03-20 12:51:23 +01:00
Jan Eitzinger
318f70f34c
Merge pull request #365 from ClusterCockpit/split_statsTable_query
Split StatsTable DataQuery from JobMetrics Query In Job-View
2025-03-20 12:50:23 +01:00
Jan Eitzinger
e41525d40a
Merge pull request #366 from ClusterCockpit/hotfix
fix: always return hasNextPage boolean to frontend
2025-03-20 12:49:57 +01:00
Jan Eitzinger
a102220e52
Merge pull request #367 from ClusterCockpit/makefile-fix
Fix 'make -B', don't fail if $(VAR) already exists
2025-03-20 12:47:16 +01:00
Christoph Kluge
e9a214c5b2 fix: add nullSafe condition to monitoringStatus display on metric queryError 2025-03-19 14:57:27 +01:00
Christoph Kluge
c53f5eb144 fix: always return hasNextPage boolean to frontend
- removes dependency on uiDefaults setting
2025-03-18 18:01:37 +01:00
Christoph Kluge
9ed64e0388 Review logging, comment cleanup 2025-03-17 17:39:17 +01:00
Christoph Kluge
93040d4629 Implement LoadNodeData, LoadNodeListData, LoadScopedStats for influxDB2 backend
- Untested
- Only Node Scope
2025-03-17 15:25:33 +01:00
Christoph Kluge
0144ad43f5 Implement NodeListData and ScopedStats for Prometheus Backend 2025-03-17 11:03:51 +01:00
Christoph Kluge
8da2fc30c3 split statsTable data from jobMetrics query, frontend refactor 2025-03-14 16:36:31 +01:00
0e27ae7795 Merge branch 'dev' of github.com:ClusterCockpit/cc-backend into dev 2025-03-14 10:52:39 +01:00
33c6cdb9fe Update test workflow 2025-03-14 10:52:27 +01:00
Christoph Kluge
f5f36427a4 split statsTable data from jobMetrics query, initial commit
- mainly backend changes
- statstable changes only for prototyping
2025-03-13 17:33:55 +01:00
exterr2f
16db9bd1a2 Fix node filter: Use EXISTS with Eq for exact match and LIKE for Contains (see the SQL sketch after this commit list) 2025-03-11 12:20:13 +01:00
Christoph Kluge
c964f09a4f Merge branch 'dev' into review_logging 2025-02-28 17:19:00 +01:00
Christoph Kluge
bd0cc69668 Review fatalf log calls and messages 2025-02-27 18:10:04 +01:00
Christoph Kluge
84fffac264 Merge branch 'dev' into review_logging 2025-02-27 15:20:46 +01:00
Christoph Kluge
fc0c76bd77 Apply new log function to init and main, review or add log texts 2025-02-26 15:20:25 +01:00
Christoph Kluge
d209547968 Remove dedicated fatal loglevel, change to Fprintln for unformatted 2025-02-26 14:40:54 +01:00
Christoph Kluge
0191bc3821 Annotate and review log functions, add stdout writers 2025-02-25 10:21:48 +01:00
Michael Panzlaff
d61bf212f5 Fix 'make -B', don't fail if $(VAR) already exists 2025-02-03 17:02:13 +01:00
72 changed files with 3703 additions and 2453 deletions
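Commit 16db9bd1a2 above changes how hostname filters are translated to SQL. A rough sketch of the two resulting query shapes — table and column names are assumed for illustration and are not cc-backend's actual schema:

```sql
-- Illustrative schema only (job, job_host are assumptions).
-- Exact match (Eq): EXISTS with equality, so 'node1' no longer matches 'node10'.
SELECT job.id
FROM job
WHERE EXISTS (
  SELECT 1 FROM job_host
  WHERE job_host.job_id = job.id
    AND job_host.hostname = 'node1'
);

-- Contains: LIKE keeps substring semantics.
SELECT job.id
FROM job
WHERE EXISTS (
  SELECT 1 FROM job_host
  WHERE job_host.job_id = job.id
    AND job_host.hostname LIKE '%node1%'
);
```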


@ -1,331 +0,0 @@
# See: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions
# Workflow name
name: Release

# Run on tag push
on:
  push:
    tags:
      - '**'

jobs:

  #
  # Build on AlmaLinux 8.5 using golang-1.18.2
  #
  AlmaLinux-RPM-build:
    runs-on: ubuntu-latest
    # See: https://hub.docker.com/_/almalinux
    container: almalinux:8.5
    # The job outputs link to the outputs of the 'rpmrename' step
    # Only job outputs can be used in child jobs
    outputs:
      rpm : ${{steps.rpmrename.outputs.RPM}}
      srpm : ${{steps.rpmrename.outputs.SRPM}}
    steps:

      # Use dnf to install development packages
      - name: Install development packages
        run: |
          dnf --assumeyes group install "Development Tools" "RPM Development Tools"
          dnf --assumeyes install wget openssl-devel diffutils delve which npm
          dnf --assumeyes install 'dnf-command(builddep)'

      # Checkout git repository and submodules
      # fetch-depth must be 0 to use git describe
      # See: https://github.com/marketplace/actions/checkout
      - name: Checkout
        uses: actions/checkout@v2
        with:
          submodules: recursive
          fetch-depth: 0

      # Use dnf to install build dependencies
      - name: Install build dependencies
        run: |
          wget -q http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/golang-1.18.2-1.module_el8.7.0+1173+5d37c0fd.x86_64.rpm \
            http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/golang-bin-1.18.2-1.module_el8.7.0+1173+5d37c0fd.x86_64.rpm \
            http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/golang-src-1.18.2-1.module_el8.7.0+1173+5d37c0fd.noarch.rpm \
            http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/go-toolset-1.18.2-1.module_el8.7.0+1173+5d37c0fd.x86_64.rpm
          rpm -i go*.rpm
          npm install --global yarn rollup svelte rollup-plugin-svelte
          #dnf --assumeyes builddep build/package/cc-backend.spec

      - name: RPM build ClusterCockpit
        id: rpmbuild
        run: make RPM

      # AlmaLinux 8.5 is a derivative of RedHat Enterprise Linux 8 (UBI8),
      # so the created RPMs both contain the substring 'el8' in their file names.
      # This step replaces the substring 'el8' with 'alma85'. It uses the move operation
      # because it is unclear whether the default AlmaLinux 8.5 container contains the
      # 'rename' command. This way we also get the new names for output.
      - name: Rename RPMs (s/el8/alma85/)
        id: rpmrename
        run: |
          OLD_RPM="${{steps.rpmbuild.outputs.RPM}}"
          OLD_SRPM="${{steps.rpmbuild.outputs.SRPM}}"
          NEW_RPM="${OLD_RPM/el8/alma85}"
          NEW_SRPM=${OLD_SRPM/el8/alma85}
          mv "${OLD_RPM}" "${NEW_RPM}"
          mv "${OLD_SRPM}" "${NEW_SRPM}"
          echo "::set-output name=SRPM::${NEW_SRPM}"
          echo "::set-output name=RPM::${NEW_RPM}"

      # See: https://github.com/actions/upload-artifact
      - name: Save RPM as artifact
        uses: actions/upload-artifact@v2
        with:
          name: cc-backend RPM for AlmaLinux 8.5
          path: ${{ steps.rpmrename.outputs.RPM }}
      - name: Save SRPM as artifact
        uses: actions/upload-artifact@v2
        with:
          name: cc-backend SRPM for AlmaLinux 8.5
          path: ${{ steps.rpmrename.outputs.SRPM }}

  #
  # Build on UBI 8 using golang-1.18.2
  #
  UBI-8-RPM-build:
    runs-on: ubuntu-latest
    # See: https://catalog.redhat.com/software/containers/ubi8/ubi/5c359854d70cc534b3a3784e?container-tabs=gti
    container: registry.access.redhat.com/ubi8/ubi:8.5-226.1645809065
    # The job outputs link to the outputs of the 'rpmbuild' step
    outputs:
      rpm : ${{steps.rpmbuild.outputs.RPM}}
      srpm : ${{steps.rpmbuild.outputs.SRPM}}
    steps:

      # Use dnf to install development packages
      - name: Install development packages
        run: dnf --assumeyes --disableplugin=subscription-manager install rpm-build go-srpm-macros rpm-build-libs rpm-libs gcc make python38 git wget openssl-devel diffutils delve which

      # Checkout git repository and submodules
      # fetch-depth must be 0 to use git describe
      # See: https://github.com/marketplace/actions/checkout
      - name: Checkout
        uses: actions/checkout@v2
        with:
          submodules: recursive
          fetch-depth: 0

      # Use dnf to install build dependencies
      - name: Install build dependencies
        run: |
          wget -q http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/golang-1.18.2-1.module_el8.7.0+1173+5d37c0fd.x86_64.rpm \
            http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/golang-bin-1.18.2-1.module_el8.7.0+1173+5d37c0fd.x86_64.rpm \
            http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/golang-src-1.18.2-1.module_el8.7.0+1173+5d37c0fd.noarch.rpm \
            http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/go-toolset-1.18.2-1.module_el8.7.0+1173+5d37c0fd.x86_64.rpm
          rpm -i go*.rpm
          dnf --assumeyes --disableplugin=subscription-manager install npm
          npm install --global yarn rollup svelte rollup-plugin-svelte
          #dnf --assumeyes builddep build/package/cc-backend.spec

      - name: RPM build ClusterCockpit
        id: rpmbuild
        run: make RPM

      # See: https://github.com/actions/upload-artifact
      - name: Save RPM as artifact
        uses: actions/upload-artifact@v2
        with:
          name: cc-backend RPM for UBI 8
          path: ${{ steps.rpmbuild.outputs.RPM }}
      - name: Save SRPM as artifact
        uses: actions/upload-artifact@v2
        with:
          name: cc-backend SRPM for UBI 8
          path: ${{ steps.rpmbuild.outputs.SRPM }}

  #
  # Build on Ubuntu 20.04 using official go 1.19.1 package
  #
  Ubuntu-focal-build:
    runs-on: ubuntu-latest
    container: ubuntu:20.04
    # The job outputs link to the outputs of the 'debrename' step
    # Only job outputs can be used in child jobs
    outputs:
      deb : ${{steps.debrename.outputs.DEB}}
    steps:

      # Use apt to install development packages
      - name: Install development packages
        run: |
          apt update && apt --assume-yes upgrade
          apt --assume-yes install build-essential sed git wget bash
          apt --assume-yes install npm
          npm install --global yarn rollup svelte rollup-plugin-svelte

      # Checkout git repository and submodules
      # fetch-depth must be 0 to use git describe
      # See: https://github.com/marketplace/actions/checkout
      - name: Checkout
        uses: actions/checkout@v2
        with:
          submodules: recursive
          fetch-depth: 0

      # Use official golang package
      - name: Install Golang
        run: |
          wget -q https://go.dev/dl/go1.19.1.linux-amd64.tar.gz
          tar -C /usr/local -xzf go1.19.1.linux-amd64.tar.gz
          export PATH=/usr/local/go/bin:/usr/local/go/pkg/tool/linux_amd64:$PATH
          go version

      - name: DEB build ClusterCockpit
        id: dpkg-build
        run: |
          ls -la
          pwd
          env
          export PATH=/usr/local/go/bin:/usr/local/go/pkg/tool/linux_amd64:$PATH
          git config --global --add safe.directory $(pwd)
          make DEB
      - name: Rename DEB (add '_ubuntu20.04')
        id: debrename
        run: |
          OLD_DEB_NAME=$(echo "${{steps.dpkg-build.outputs.DEB}}" | rev | cut -d '.' -f 2- | rev)
          NEW_DEB_FILE="${OLD_DEB_NAME}_ubuntu20.04.deb"
          mv "${{steps.dpkg-build.outputs.DEB}}" "${NEW_DEB_FILE}"
          echo "::set-output name=DEB::${NEW_DEB_FILE}"

      # See: https://github.com/actions/upload-artifact
      - name: Save DEB as artifact
        uses: actions/upload-artifact@v2
        with:
          name: cc-backend DEB for Ubuntu 20.04
          path: ${{ steps.debrename.outputs.DEB }}

  #
  # Build on Ubuntu 22.04 using official go 1.19.1 package
  #
  Ubuntu-jammy-build:
    runs-on: ubuntu-latest
    container: ubuntu:22.04
    # The job outputs link to the outputs of the 'debrename' step
    # Only job outputs can be used in child jobs
    outputs:
      deb : ${{steps.debrename.outputs.DEB}}
    steps:

      # Use apt to install development packages
      - name: Install development packages
        run: |
          apt update && apt --assume-yes upgrade
          apt --assume-yes install build-essential sed git wget bash npm
          npm install --global yarn rollup svelte rollup-plugin-svelte

      # Checkout git repository and submodules
      # fetch-depth must be 0 to use git describe
      # See: https://github.com/marketplace/actions/checkout
      - name: Checkout
        uses: actions/checkout@v2
        with:
          submodules: recursive
          fetch-depth: 0

      # Use official golang package
      - name: Install Golang
        run: |
          wget -q https://go.dev/dl/go1.19.1.linux-amd64.tar.gz
          tar -C /usr/local -xzf go1.19.1.linux-amd64.tar.gz
          export PATH=/usr/local/go/bin:/usr/local/go/pkg/tool/linux_amd64:$PATH
          go version

      - name: DEB build ClusterCockpit
        id: dpkg-build
        run: |
          ls -la
          pwd
          env
          export PATH=/usr/local/go/bin:/usr/local/go/pkg/tool/linux_amd64:$PATH
          git config --global --add safe.directory $(pwd)
          make DEB
      - name: Rename DEB (add '_ubuntu22.04')
        id: debrename
        run: |
          OLD_DEB_NAME=$(echo "${{steps.dpkg-build.outputs.DEB}}" | rev | cut -d '.' -f 2- | rev)
          NEW_DEB_FILE="${OLD_DEB_NAME}_ubuntu22.04.deb"
          mv "${{steps.dpkg-build.outputs.DEB}}" "${NEW_DEB_FILE}"
          echo "::set-output name=DEB::${NEW_DEB_FILE}"

      # See: https://github.com/actions/upload-artifact
      - name: Save DEB as artifact
        uses: actions/upload-artifact@v2
        with:
          name: cc-backend DEB for Ubuntu 22.04
          path: ${{ steps.debrename.outputs.DEB }}

  #
  # Create release with fresh RPMs
  #
  Release:
    runs-on: ubuntu-latest
    # We need the RPMs, so add dependency
    needs: [AlmaLinux-RPM-build, UBI-8-RPM-build, Ubuntu-focal-build, Ubuntu-jammy-build]
    steps:
      # See: https://github.com/actions/download-artifact
      - name: Download AlmaLinux 8.5 RPM
        uses: actions/download-artifact@v2
        with:
          name: cc-backend RPM for AlmaLinux 8.5
      - name: Download AlmaLinux 8.5 SRPM
        uses: actions/download-artifact@v2
        with:
          name: cc-backend SRPM for AlmaLinux 8.5
      - name: Download UBI 8 RPM
        uses: actions/download-artifact@v2
        with:
          name: cc-backend RPM for UBI 8
      - name: Download UBI 8 SRPM
        uses: actions/download-artifact@v2
        with:
          name: cc-backend SRPM for UBI 8
      - name: Download Ubuntu 20.04 DEB
        uses: actions/download-artifact@v2
        with:
          name: cc-backend DEB for Ubuntu 20.04
      - name: Download Ubuntu 22.04 DEB
        uses: actions/download-artifact@v2
        with:
          name: cc-backend DEB for Ubuntu 22.04

      # The download actions do not publish the name of the downloaded file,
      # so we re-use the job outputs of the parent jobs. The files are all
      # downloaded to the current folder.
      # The gh-release action afterwards does not accept file lists but all
      # files have to be listed at 'files'. The step creates one output per
      # RPM package (2 per distro)
      - name: Set RPM variables
        id: files
        run: |
          ALMA_85_RPM=$(basename "${{ needs.AlmaLinux-RPM-build.outputs.rpm}}")
          ALMA_85_SRPM=$(basename "${{ needs.AlmaLinux-RPM-build.outputs.srpm}}")
          UBI_8_RPM=$(basename "${{ needs.UBI-8-RPM-build.outputs.rpm}}")
          UBI_8_SRPM=$(basename "${{ needs.UBI-8-RPM-build.outputs.srpm}}")
          U_2004_DEB=$(basename "${{ needs.Ubuntu-focal-build.outputs.deb}}")
          U_2204_DEB=$(basename "${{ needs.Ubuntu-jammy-build.outputs.deb}}")
          echo "ALMA_85_RPM::${ALMA_85_RPM}"
          echo "ALMA_85_SRPM::${ALMA_85_SRPM}"
          echo "UBI_8_RPM::${UBI_8_RPM}"
          echo "UBI_8_SRPM::${UBI_8_SRPM}"
          echo "U_2004_DEB::${U_2004_DEB}"
          echo "U_2204_DEB::${U_2204_DEB}"
          echo "::set-output name=ALMA_85_RPM::${ALMA_85_RPM}"
          echo "::set-output name=ALMA_85_SRPM::${ALMA_85_SRPM}"
          echo "::set-output name=UBI_8_RPM::${UBI_8_RPM}"
          echo "::set-output name=UBI_8_SRPM::${UBI_8_SRPM}"
          echo "::set-output name=U_2004_DEB::${U_2004_DEB}"
          echo "::set-output name=U_2204_DEB::${U_2204_DEB}"

      # See: https://github.com/softprops/action-gh-release
      - name: Release
        uses: softprops/action-gh-release@v1
        if: startsWith(github.ref, 'refs/tags/')
        with:
          name: cc-backend-${{github.ref_name}}
          files: |
            ${{ steps.files.outputs.ALMA_85_RPM }}
            ${{ steps.files.outputs.ALMA_85_SRPM }}
            ${{ steps.files.outputs.UBI_8_RPM }}
            ${{ steps.files.outputs.UBI_8_SRPM }}
            ${{ steps.files.outputs.U_2004_DEB }}
            ${{ steps.files.outputs.U_2204_DEB }}


@ -7,7 +7,7 @@ jobs:
- name: Install Go
uses: actions/setup-go@v4
with:
go-version: 1.22.x
go-version: 1.24.x
- name: Checkout code
uses: actions/checkout@v3
- name: Build, Vet & Test


@ -2,7 +2,7 @@ TARGET = ./cc-backend
VAR = ./var
CFG = config.json .env
FRONTEND = ./web/frontend
VERSION = 1.4.3
VERSION = 1.4.4
GIT_HASH := $(shell git rev-parse --short HEAD || echo 'development')
CURRENT_TIME = $(shell date +"%Y-%m-%d:T%H:%M:%S")
LD_FLAGS = '-s -X main.date=${CURRENT_TIME} -X main.version=${VERSION} -X main.commit=${GIT_HASH}'
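The `LD_FLAGS` line injects build metadata at link time via `-X`. A minimal sketch of the package-level variables these flags target — the actual declarations live in cc-backend's `main` package and may differ:

```go
package main

// Populated at link time through LD_FLAGS, e.g.
//   go build -ldflags '-s -X main.version=1.4.4 -X main.commit=<hash> -X main.date=<time>'
// A plain `go build` leaves them as empty strings.
var (
	version string
	commit  string
	date    string
)
```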


@ -1,4 +1,4 @@
# `cc-backend` version 1.4.3
# `cc-backend` version 1.4.4
Supports job archive version 2 and database version 8.
@ -6,6 +6,20 @@ This is a bug fix release of `cc-backend`, the API backend and frontend
implementation of ClusterCockpit.
For release specific notes visit the [ClusterCockpit Documentation](https://clustercockpit.org/docs/release/).
## Breaking changes
The option `apiAllowedIPs` is now a required configuration attribute in
`config.json`. This option restricts access to the admin API.
To retain the previous behavior, in which the API is accessible from
everywhere by default, set:
```json
"apiAllowedIPs": [
"*"
]
```
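To restrict the admin API instead of opening it up, list the permitted client addresses explicitly. The values below are placeholders; whether address ranges are accepted in addition to single IPs is not documented in this excerpt:
```json
"apiAllowedIPs": [
"127.0.0.1",
"192.168.1.10"
]
```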
## Breaking changes for minor release 1.4.x
- You need to perform a database migration. Depending on your database size the
@ -22,21 +36,7 @@ For release specific notes visit the [ClusterCockpit Documentation](https://clus
## New features
- Detailed Node List
- Adds new routes `/systems/list/$cluster` and `/systems/list/$cluster/$subcluster`
- Displays live, scoped metric data requested from the nodes independent of jobs
- Color Blind Mode
- Set on a per-user basis in options
- Applies to plot data, plot background color, statsseries colors, roofline timescale
- Job-View metric selection is now persisted based on the job's subcluster.
Helpful for heterogeneous subcluster configurations.
- Histogram Bin Select in User-View
- Metric-Histograms: `10 Bins` now default, selectable options `20, 50, 100`
- Job-Duration-Histogram: `48h in 1h Bins` now default, selectable options:
- `60 minutes in 1 minute Bins`
- `12 hours in 10 minute Bins`
- `3 days in 6 hour Bins`
- `7 days in 12 hour Bins`
- Tags can now be deleted from the web interface
## Known issues


@ -137,11 +137,6 @@ type JobMetricWithName {
metric: JobMetric!
}
type JobMetricStatWithName {
name: String!
stats: MetricStatistics!
}
type JobMetric {
unit: Unit
timestep: Int!
@ -156,6 +151,30 @@ type Series {
data: [NullableFloat!]!
}
type StatsSeries {
mean: [NullableFloat!]!
median: [NullableFloat!]!
min: [NullableFloat!]!
max: [NullableFloat!]!
}
type JobStatsWithScope {
name: String!
scope: MetricScope!
stats: [ScopedStats!]!
}
type ScopedStats {
hostname: String!
id: String
data: MetricStatistics!
}
type JobStats {
name: String!
stats: MetricStatistics!
}
type Unit {
base: String!
prefix: String
@ -167,13 +186,6 @@ type MetricStatistics {
max: Float!
}
type StatsSeries {
mean: [NullableFloat!]!
median: [NullableFloat!]!
min: [NullableFloat!]!
max: [NullableFloat!]!
}
type MetricFootprints {
metric: String!
data: [NullableFloat!]!
@ -247,7 +259,8 @@ type Query {
job(id: ID!): Job
jobMetrics(id: ID!, metrics: [String!], scopes: [MetricScope!], resolution: Int): [JobMetricWithName!]!
jobMetricStats(id: ID!, metrics: [String!]): [JobMetricStatWithName!]!
jobStats(id: ID!, metrics: [String!]): [JobStats!]!
scopedJobStats(id: ID!, metrics: [String!], scopes: [MetricScope!]): [JobStatsWithScope!]!
jobsFootprints(filter: [JobFilter!], metrics: [String!]!): Footprints
jobs(filter: [JobFilter!], page: PageRequest, order: OrderByInput): JobResultList!
@ -264,6 +277,7 @@ type Mutation {
deleteTag(id: ID!): ID!
addTagsToJob(job: ID!, tagIds: [ID!]!): [Tag!]!
removeTagsFromJob(job: ID!, tagIds: [ID!]!): [Tag!]!
removeTagFromList(tagIds: [ID!]!): [Int!]!
updateConfiguration(name: String!, value: String!): String
}
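An illustrative request against the new schema, replacing the removed `jobMetricStats` with `jobStats` and the new scoped variant. The job ID, metric names, the scope literal `node`, and the `MetricStatistics` field names (`min`, `avg`, `max`) are assumptions, since they are not fully shown in this hunk:

```graphql
query JobStatsExample {
  jobStats(id: "1234", metrics: ["flops_any", "mem_bw"]) {
    name
    stats { min avg max }   # MetricStatistics field names assumed
  }
  scopedJobStats(id: "1234", metrics: ["flops_any"], scopes: [node]) {
    name
    scope
    stats {
      hostname
      id
      data { min avg max }  # assumed as above
    }
  }
}

mutation RemoveTags {
  # Returns [Int!]!; assumed to be the DB IDs of the removed tags.
  removeTagFromList(tagIds: ["42", "43"])
}
```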


@ -15,9 +15,8 @@
"version": "1.0.0"
},
"host": "localhost:8080",
"basePath": "/api",
"paths": {
"/clusters/": {
"/api/clusters/": {
"get": {
"security": [
{
@ -74,7 +73,7 @@
}
}
},
"/jobs/": {
"/api/jobs/": {
"get": {
"security": [
{
@ -169,7 +168,7 @@
}
}
},
"/jobs/delete_job/": {
"/api/jobs/delete_job/": {
"delete": {
"security": [
{
@ -244,7 +243,7 @@
}
}
},
"/jobs/delete_job/{id}": {
"/api/jobs/delete_job/{id}": {
"delete": {
"security": [
{
@ -314,7 +313,7 @@
}
}
},
"/jobs/delete_job_before/{ts}": {
"/api/jobs/delete_job_before/{ts}": {
"delete": {
"security": [
{
@ -384,7 +383,7 @@
}
}
},
"/jobs/edit_meta/{id}": {
"/api/jobs/edit_meta/{id}": {
"post": {
"security": [
{
@ -454,7 +453,7 @@
}
}
},
"/jobs/start_job/": {
"/api/jobs/start_job/": {
"post": {
"security": [
{
@ -523,7 +522,7 @@
}
}
},
"/jobs/stop_job/": {
"/api/jobs/stop_job/": {
"post": {
"security": [
{
@ -595,7 +594,7 @@
}
}
},
"/jobs/tag_job/{id}": {
"/api/jobs/tag_job/{id}": {
"post": {
"security": [
{
@ -668,7 +667,7 @@
}
}
},
"/jobs/{id}": {
"/api/jobs/{id}": {
"get": {
"security": [
{
@ -827,185 +826,14 @@
}
}
},
"/notice/": {
"post": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "Modifies the content of notice.txt, shown as notice box on the homepage.\nIf more than one formValue is set then only the highest priority field is used.\nOnly accessible from IPs registered with apiAllowedIPs configuration option.",
"consumes": [
"multipart/form-data"
],
"produces": [
"text/plain"
],
"tags": [
"User"
],
"summary": "Updates or empties the notice box content",
"parameters": [
{
"type": "string",
"description": "Priority 1: New content to display",
"name": "new-content",
"in": "formData"
}
],
"responses": {
"200": {
"description": "Success Response Message",
"schema": {
"type": "string"
}
},
"400": {
"description": "Bad Request",
"schema": {
"type": "string"
}
},
"401": {
"description": "Unauthorized",
"schema": {
"type": "string"
}
},
"403": {
"description": "Forbidden",
"schema": {
"type": "string"
}
},
"422": {
"description": "Unprocessable Entity: The user could not be updated",
"schema": {
"type": "string"
}
},
"500": {
"description": "Internal Server Error",
"schema": {
"type": "string"
}
}
}
}
},
"/user/{id}": {
"post": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "Modifies user defined by username (id) in one of four possible ways.\nIf more than one formValue is set then only the highest priority field is used.\nOnly accessible from IPs registered with apiAllowedIPs configuration option.",
"consumes": [
"multipart/form-data"
],
"produces": [
"text/plain"
],
"tags": [
"User"
],
"summary": "Updates an existing user",
"parameters": [
{
"type": "string",
"description": "Database ID of User",
"name": "id",
"in": "path",
"required": true
},
{
"enum": [
"admin",
"support",
"manager",
"user",
"api"
],
"type": "string",
"description": "Priority 1: Role to add",
"name": "add-role",
"in": "formData"
},
{
"enum": [
"admin",
"support",
"manager",
"user",
"api"
],
"type": "string",
"description": "Priority 2: Role to remove",
"name": "remove-role",
"in": "formData"
},
{
"type": "string",
"description": "Priority 3: Project to add",
"name": "add-project",
"in": "formData"
},
{
"type": "string",
"description": "Priority 4: Project to remove",
"name": "remove-project",
"in": "formData"
}
],
"responses": {
"200": {
"description": "Success Response Message",
"schema": {
"type": "string"
}
},
"400": {
"description": "Bad Request",
"schema": {
"type": "string"
}
},
"401": {
"description": "Unauthorized",
"schema": {
"type": "string"
}
},
"403": {
"description": "Forbidden",
"schema": {
"type": "string"
}
},
"422": {
"description": "Unprocessable Entity: The user could not be updated",
"schema": {
"type": "string"
}
},
"500": {
"description": "Internal Server Error",
"schema": {
"type": "string"
}
}
}
}
},
"/users/": {
"/api/users/": {
"get": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "Returns a JSON-encoded list of users.\nRequired query-parameter defines if all users or only users with additional special roles are returned.\nOnly accessible from IPs registered with apiAllowedIPs configuration option.",
"description": "Returns a JSON-encoded list of users.\nRequired query-parameter defines if all users or only users with additional special roles are returned.",
"produces": [
"application/json"
],
@ -1057,70 +885,111 @@
}
}
}
},
"post": {
}
},
"/jobs/tag_job/{id}": {
"delete": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "User specified in form data will be saved to database.\nOnly accessible from IPs registered with apiAllowedIPs configuration option.",
"description": "Removes tag(s) from a job specified by DB ID. Name and Type of Tag(s) must match.\nTag Scope is required for matching, options: \"global\", \"admin\". Private tags can not be deleted via API.\nIf tagged job is already finished: Tag will be removed from respective archive files.",
"consumes": [
"multipart/form-data"
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"Job add and modify"
],
"summary": "Removes one or more tags from a job",
"parameters": [
{
"type": "integer",
"description": "Job Database ID",
"name": "id",
"in": "path",
"required": true
},
{
"description": "Array of tag-objects to remove",
"name": "request",
"in": "body",
"required": true,
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/api.ApiTag"
}
}
}
],
"responses": {
"200": {
"description": "Updated job resource",
"schema": {
"$ref": "#/definitions/schema.Job"
}
},
"400": {
"description": "Bad Request",
"schema": {
"$ref": "#/definitions/api.ErrorResponse"
}
},
"401": {
"description": "Unauthorized",
"schema": {
"$ref": "#/definitions/api.ErrorResponse"
}
},
"404": {
"description": "Job or tag does not exist",
"schema": {
"$ref": "#/definitions/api.ErrorResponse"
}
},
"500": {
"description": "Internal Server Error",
"schema": {
"$ref": "#/definitions/api.ErrorResponse"
}
}
}
}
},
"/tags/": {
"delete": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "Removes tags by type and name. Name and Type of Tag(s) must match.\nTag Scope is required for matching, options: \"global\", \"admin\". Private tags can not be deleted via API.\nTag wills be removed from respective archive files.",
"consumes": [
"application/json"
],
"produces": [
"text/plain"
],
"tags": [
"User"
"Tag remove"
],
"summary": "Adds a new user",
"summary": "Removes all tags and job-relations for type:name tuple",
"parameters": [
{
"type": "string",
"description": "Unique user ID",
"name": "username",
"in": "formData",
"required": true
},
{
"type": "string",
"description": "User password",
"name": "password",
"in": "formData",
"required": true
},
{
"enum": [
"admin",
"support",
"manager",
"user",
"api"
],
"type": "string",
"description": "User role",
"name": "role",
"in": "formData",
"required": true
},
{
"type": "string",
"description": "Managed project, required for new manager role user",
"name": "project",
"in": "formData"
},
{
"type": "string",
"description": "Users name",
"name": "name",
"in": "formData"
},
{
"type": "string",
"description": "Users email",
"name": "email",
"in": "formData"
"description": "Array of tag-objects to remove",
"name": "request",
"in": "body",
"required": true,
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/api.ApiTag"
}
}
}
],
"responses": {
@ -1133,93 +1002,25 @@
"400": {
"description": "Bad Request",
"schema": {
"type": "string"
"$ref": "#/definitions/api.ErrorResponse"
}
},
"401": {
"description": "Unauthorized",
"schema": {
"type": "string"
"$ref": "#/definitions/api.ErrorResponse"
}
},
"403": {
"description": "Forbidden",
"404": {
"description": "Job or tag does not exist",
"schema": {
"type": "string"
}
},
"422": {
"description": "Unprocessable Entity: creating user failed",
"schema": {
"type": "string"
"$ref": "#/definitions/api.ErrorResponse"
}
},
"500": {
"description": "Internal Server Error",
"schema": {
"type": "string"
}
}
}
},
"delete": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "User defined by username in form data will be deleted from database.\nOnly accessible from IPs registered with apiAllowedIPs configuration option.",
"consumes": [
"multipart/form-data"
],
"produces": [
"text/plain"
],
"tags": [
"User"
],
"summary": "Deletes a user",
"parameters": [
{
"type": "string",
"description": "User ID to delete",
"name": "username",
"in": "formData",
"required": true
}
],
"responses": {
"200": {
"description": "User deleted successfully"
},
"400": {
"description": "Bad Request",
"schema": {
"type": "string"
}
},
"401": {
"description": "Unauthorized",
"schema": {
"type": "string"
}
},
"403": {
"description": "Forbidden",
"schema": {
"type": "string"
}
},
"422": {
"description": "Unprocessable Entity: deleting user failed",
"schema": {
"type": "string"
}
},
"500": {
"description": "Internal Server Error",
"schema": {
"type": "string"
"$ref": "#/definitions/api.ErrorResponse"
}
}
}
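A hypothetical invocation of the two new delete endpoints. Host, job ID, tag values, and the auth header name (the `ApiKeyAuth` definition is cut off in this excerpt) are placeholders:

```sh
# Remove matching tags from one job (DB ID 123):
curl -X DELETE "http://localhost:8080/api/jobs/tag_job/123" \
  -H "X-Auth-Token: <JWT>" \
  -H "Content-Type: application/json" \
  -d '[{"type": "testing", "name": "floptest", "scope": "global"}]'

# Remove the tags themselves, including all job relations:
curl -X DELETE "http://localhost:8080/api/tags/" \
  -H "X-Auth-Token: <JWT>" \
  -H "Content-Type: application/json" \
  -d '[{"type": "testing", "name": "floptest", "scope": "global"}]'
```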


@ -1,4 +1,3 @@
basePath: /api
definitions:
api.ApiReturnedUser:
properties:
@ -671,7 +670,7 @@ info:
title: ClusterCockpit REST API
version: 1.0.0
paths:
/clusters/:
/api/clusters/:
get:
description: Get a list of all cluster configs. Specific cluster can be requested
using query parameter.
@ -708,7 +707,7 @@ paths:
summary: Lists all cluster configs
tags:
- Cluster query
/jobs/:
/api/jobs/:
get:
description: |-
Get a list of all jobs. Filters can be applied using query parameters.
@ -773,7 +772,7 @@ paths:
summary: Lists all jobs
tags:
- Job query
/jobs/{id}:
/api/jobs/{id}:
get:
description: |-
Job to get is specified by database ID
@ -882,7 +881,7 @@ paths:
summary: Get job meta and configurable metric data
tags:
- Job query
/jobs/delete_job/:
/api/jobs/delete_job/:
delete:
consumes:
- application/json
@ -932,7 +931,7 @@ paths:
summary: Remove a job from the sql database
tags:
- Job remove
/jobs/delete_job/{id}:
/api/jobs/delete_job/{id}:
delete:
description: Job to remove is specified by database ID. This will not remove
the job from the job archive.
@ -979,7 +978,7 @@ paths:
summary: Remove a job from the sql database
tags:
- Job remove
/jobs/delete_job_before/{ts}:
/api/jobs/delete_job_before/{ts}:
delete:
description: Remove all jobs with start time before timestamp. The jobs will
not be removed from the job archive.
@ -1026,7 +1025,7 @@ paths:
summary: Remove a job from the sql database
tags:
- Job remove
/jobs/edit_meta/{id}:
/api/jobs/edit_meta/{id}:
post:
consumes:
- application/json
@ -1073,7 +1072,7 @@ paths:
summary: Edit meta-data json
tags:
- Job add and modify
/jobs/start_job/:
/api/jobs/start_job/:
post:
consumes:
- application/json
@ -1120,7 +1119,7 @@ paths:
summary: Adds a new job as "running"
tags:
- Job add and modify
/jobs/stop_job/:
/api/jobs/stop_job/:
post:
description: |-
Job to stop is specified by request body. All fields are required in this case.
@ -1168,7 +1167,7 @@ paths:
summary: Marks job as completed and triggers archiving
tags:
- Job add and modify
/jobs/tag_job/{id}:
/api/jobs/tag_job/{id}:
post:
consumes:
- application/json
@ -1218,173 +1217,11 @@ paths:
summary: Adds one or more tags to a job
tags:
- Job add and modify
/notice/:
post:
consumes:
- multipart/form-data
description: |-
Modifies the content of notice.txt, shown as notice box on the homepage.
If more than one formValue is set then only the highest priority field is used.
Only accessible from IPs registered with apiAllowedIPs configuration option.
parameters:
- description: 'Priority 1: New content to display'
in: formData
name: new-content
type: string
produces:
- text/plain
responses:
"200":
description: Success Response Message
schema:
type: string
"400":
description: Bad Request
schema:
type: string
"401":
description: Unauthorized
schema:
type: string
"403":
description: Forbidden
schema:
type: string
"422":
description: 'Unprocessable Entity: The user could not be updated'
schema:
type: string
"500":
description: Internal Server Error
schema:
type: string
security:
- ApiKeyAuth: []
summary: Updates or empties the notice box content
tags:
- User
/user/{id}:
post:
consumes:
- multipart/form-data
description: |-
Modifies user defined by username (id) in one of four possible ways.
If more than one formValue is set then only the highest priority field is used.
Only accessible from IPs registered with apiAllowedIPs configuration option.
parameters:
- description: Database ID of User
in: path
name: id
required: true
type: string
- description: 'Priority 1: Role to add'
enum:
- admin
- support
- manager
- user
- api
in: formData
name: add-role
type: string
- description: 'Priority 2: Role to remove'
enum:
- admin
- support
- manager
- user
- api
in: formData
name: remove-role
type: string
- description: 'Priority 3: Project to add'
in: formData
name: add-project
type: string
- description: 'Priority 4: Project to remove'
in: formData
name: remove-project
type: string
produces:
- text/plain
responses:
"200":
description: Success Response Message
schema:
type: string
"400":
description: Bad Request
schema:
type: string
"401":
description: Unauthorized
schema:
type: string
"403":
description: Forbidden
schema:
type: string
"422":
description: 'Unprocessable Entity: The user could not be updated'
schema:
type: string
"500":
description: Internal Server Error
schema:
type: string
security:
- ApiKeyAuth: []
summary: Updates an existing user
tags:
- User
/users/:
delete:
consumes:
- multipart/form-data
description: |-
User defined by username in form data will be deleted from database.
Only accessible from IPs registered with apiAllowedIPs configuration option.
parameters:
- description: User ID to delete
in: formData
name: username
required: true
type: string
produces:
- text/plain
responses:
"200":
description: User deleted successfully
"400":
description: Bad Request
schema:
type: string
"401":
description: Unauthorized
schema:
type: string
"403":
description: Forbidden
schema:
type: string
"422":
description: 'Unprocessable Entity: deleting user failed'
schema:
type: string
"500":
description: Internal Server Error
schema:
type: string
security:
- ApiKeyAuth: []
summary: Deletes a user
tags:
- User
/api/users/:
get:
description: |-
Returns a JSON-encoded list of users.
Required query-parameter defines if all users or only users with additional special roles are returned.
Only accessible from IPs registered with apiAllowedIPs configuration option.
parameters:
- description: If returned list should contain all users or only users with
additional special roles
@ -1422,46 +1259,73 @@ paths:
summary: Returns a list of users
tags:
- User
post:
/jobs/tag_job/{id}:
delete:
consumes:
- multipart/form-data
- application/json
description: |-
User specified in form data will be saved to database.
Only accessible from IPs registered with apiAllowedIPs configuration option.
Removes tag(s) from a job specified by DB ID. Name and Type of Tag(s) must match.
Tag Scope is required for matching, options: "global", "admin". Private tags can not be deleted via API.
If tagged job is already finished: Tag will be removed from respective archive files.
parameters:
- description: Unique user ID
in: formData
name: username
- description: Job Database ID
in: path
name: id
required: true
type: string
- description: User password
in: formData
name: password
type: integer
- description: Array of tag-objects to remove
in: body
name: request
required: true
type: string
- description: User role
enum:
- admin
- support
- manager
- user
- api
in: formData
name: role
schema:
items:
$ref: '#/definitions/api.ApiTag'
type: array
produces:
- application/json
responses:
"200":
description: Updated job resource
schema:
$ref: '#/definitions/schema.Job'
"400":
description: Bad Request
schema:
$ref: '#/definitions/api.ErrorResponse'
"401":
description: Unauthorized
schema:
$ref: '#/definitions/api.ErrorResponse'
"404":
description: Job or tag does not exist
schema:
$ref: '#/definitions/api.ErrorResponse'
"500":
description: Internal Server Error
schema:
$ref: '#/definitions/api.ErrorResponse'
security:
- ApiKeyAuth: []
summary: Removes one or more tags from a job
tags:
- Job add and modify
/tags/:
delete:
consumes:
- application/json
description: |-
Removes tags by type and name. Name and Type of Tag(s) must match.
Tag Scope is required for matching, options: "global", "admin". Private tags can not be deleted via API.
Tags will be removed from respective archive files.
parameters:
- description: Array of tag-objects to remove
in: body
name: request
required: true
type: string
- description: Managed project, required for new manager role user
in: formData
name: project
type: string
- description: Users name
in: formData
name: name
type: string
- description: Users email
in: formData
name: email
type: string
schema:
items:
$ref: '#/definitions/api.ApiTag'
type: array
produces:
- text/plain
responses:
@ -1472,28 +1336,24 @@ paths:
"400":
description: Bad Request
schema:
type: string
$ref: '#/definitions/api.ErrorResponse'
"401":
description: Unauthorized
schema:
type: string
"403":
description: Forbidden
$ref: '#/definitions/api.ErrorResponse'
"404":
description: Job or tag does not exist
schema:
type: string
"422":
description: 'Unprocessable Entity: creating user failed'
schema:
type: string
$ref: '#/definitions/api.ErrorResponse'
"500":
description: Internal Server Error
schema:
type: string
$ref: '#/definitions/api.ErrorResponse'
security:
- ApiKeyAuth: []
summary: Adds a new user
summary: Removes all tags and job-relations for type:name tuple
tags:
- User
- Tag remove
securityDefinitions:
ApiKeyAuth:
in: header


@ -12,7 +12,7 @@ var (
)
func cliInit() {
flag.BoolVar(&flagInit, "init", false, "Setup var directory, initialize swlite database file, config.json and .env")
flag.BoolVar(&flagInit, "init", false, "Setup var directory, initialize sqlite database file, config.json and .env")
flag.BoolVar(&flagReinitDB, "init-db", false, "Go through job-archive and re-initialize the 'job', 'tag', and 'jobtag' tables (all running jobs will be lost!)")
flag.BoolVar(&flagSyncLDAP, "sync-ldap", false, "Sync the 'hpc_user' table with ldap")
flag.BoolVar(&flagServer, "server", false, "Start a server, continues listening on port after initialization and argument handling")
@ -24,10 +24,10 @@ func cliInit() {
flag.BoolVar(&flagForceDB, "force-db", false, "Force database version, clear dirty flag and exit")
flag.BoolVar(&flagLogDateTime, "logdate", false, "Set this flag to add date and time to log messages")
flag.StringVar(&flagConfigFile, "config", "./config.json", "Specify alternative path to `config.json`")
flag.StringVar(&flagNewUser, "add-user", "", "Add a new user. Argument format: `<username>:[admin,support,manager,api,user]:<password>`")
flag.StringVar(&flagDelUser, "del-user", "", "Remove user by `username`")
flag.StringVar(&flagNewUser, "add-user", "", "Add a new user. Argument format: <username>:[admin,support,manager,api,user]:<password>")
flag.StringVar(&flagDelUser, "del-user", "", "Remove a existing user. Argument format: <username>")
flag.StringVar(&flagGenJWT, "jwt", "", "Generate and print a JWT for the user specified by its `username`")
flag.StringVar(&flagImportJob, "import-job", "", "Import a job. Argument format: `<path-to-meta.json>:<path-to-data.json>,...`")
flag.StringVar(&flagLogLevel, "loglevel", "warn", "Sets the logging level: `[debug,info,warn (default),err,fatal,crit]`")
flag.StringVar(&flagLogLevel, "loglevel", "warn", "Sets the logging level: `[debug, info (default), warn, err, crit]`")
flag.Parse()
}
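Typical invocations of the flags defined above (usernames, roles, and passwords are placeholders):

```sh
./cc-backend -init                               # set up ./var, sqlite database file, config.json and .env
./cc-backend -add-user 'alice:admin,api:secret'  # <username>:[roles]:<password>
./cc-backend -jwt alice                          # generate and print a JWT for user 'alice'
./cc-backend -del-user alice                     # remove an existing user
./cc-backend -loglevel debug -logdate -server    # start the server with verbose, timestamped logging
```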


@ -5,7 +5,6 @@
package main
import (
"fmt"
"os"
"github.com/ClusterCockpit/cc-backend/internal/repository"
@ -33,6 +32,18 @@ const configString = `
"jwts": {
"max-age": "2000h"
},
"apiAllowedIPs": [
"*"
],
"enable-resampling": {
"trigger": 30,
"resolutions": [
600,
300,
120,
60
]
},
"clusters": [
{
"name": "name",
@ -62,24 +73,23 @@ const configString = `
func initEnv() {
if util.CheckFileExists("var") {
fmt.Print("Directory ./var already exists. Exiting!\n")
os.Exit(0)
log.Exit("Directory ./var already exists. Cautiously exiting application initialization.")
}
if err := os.WriteFile("config.json", []byte(configString), 0o666); err != nil {
log.Fatalf("Writing config.json failed: %s", err.Error())
log.Abortf("Could not write default ./config.json with permissions '0o666'. Application initialization failed, exited.\nError: %s\n", err.Error())
}
if err := os.WriteFile(".env", []byte(envString), 0o666); err != nil {
log.Fatalf("Writing .env failed: %s", err.Error())
log.Abortf("Could not write default ./.env file with permissions '0o666'. Application initialization failed, exited.\nError: %s\n", err.Error())
}
if err := os.Mkdir("var", 0o777); err != nil {
log.Fatalf("Mkdir var failed: %s", err.Error())
log.Abortf("Could not create default ./var folder with permissions '0o777'. Application initialization failed, exited.\nError: %s\n", err.Error())
}
err := repository.MigrateDB("sqlite3", "./var/job.db")
if err != nil {
log.Fatalf("Initialize job.db failed: %s", err.Error())
log.Abortf("Could not initialize default sqlite3 database as './var/job.db'. Application initialization failed, exited.\nError: %s\n", err.Error())
}
}


@ -61,15 +61,23 @@ func main() {
// Apply config flags for pkg/log
log.Init(flagLogLevel, flagLogDateTime)
// If init flag set, run tasks here before any file dependencies cause errors
if flagInit {
initEnv()
log.Exit("Successfully setup environment!\n" +
"Please review config.json and .env and adjust it to your needs.\n" +
"Add your job-archive at ./var/job-archive.")
}
// See https://github.com/google/gops (Runtime overhead is almost zero)
if flagGops {
if err := agent.Listen(agent.Options{}); err != nil {
log.Fatalf("gops/agent.Listen failed: %s", err.Error())
log.Abortf("Could not start gops agent with 'gops/agent.Listen(agent.Options{})'. Application startup failed, exited.\nError: %s\n", err.Error())
}
}
if err := runtimeEnv.LoadEnv("./.env"); err != nil && !os.IsNotExist(err) {
log.Fatalf("parsing './.env' file failed: %s", err.Error())
log.Abortf("Could not parse existing .env file at location './.env'. Application startup failed, exited.\nError: %s\n", err.Error())
}
// Initialize sub-modules and handle command line flags.
@ -87,37 +95,29 @@ func main() {
if flagMigrateDB {
err := repository.MigrateDB(config.Keys.DBDriver, config.Keys.DB)
if err != nil {
log.Fatal(err)
log.Abortf("MigrateDB Failed: Could not migrate '%s' database at location '%s' to version %d.\nError: %s\n", config.Keys.DBDriver, config.Keys.DB, repository.Version, err.Error())
}
os.Exit(0)
log.Exitf("MigrateDB Success: Migrated '%s' database at location '%s' to version %d.\n", config.Keys.DBDriver, config.Keys.DB, repository.Version)
}
if flagRevertDB {
err := repository.RevertDB(config.Keys.DBDriver, config.Keys.DB)
if err != nil {
log.Fatal(err)
log.Abortf("RevertDB Failed: Could not revert '%s' database at location '%s' to version %d.\nError: %s\n", config.Keys.DBDriver, config.Keys.DB, (repository.Version - 1), err.Error())
}
os.Exit(0)
log.Exitf("RevertDB Success: Reverted '%s' database at location '%s' to version %d.\n", config.Keys.DBDriver, config.Keys.DB, (repository.Version - 1))
}
if flagForceDB {
err := repository.ForceDB(config.Keys.DBDriver, config.Keys.DB)
if err != nil {
log.Fatal(err)
log.Abortf("ForceDB Failed: Could not force '%s' database at location '%s' to version %d.\nError: %s\n", config.Keys.DBDriver, config.Keys.DB, repository.Version, err.Error())
}
os.Exit(0)
log.Exitf("ForceDB Success: Forced '%s' database at location '%s' to version %d.\n", config.Keys.DBDriver, config.Keys.DB, repository.Version)
}
repository.Connect(config.Keys.DBDriver, config.Keys.DB)
if flagInit {
initEnv()
fmt.Print("Successfully setup environment!\n")
fmt.Print("Please review config.json and .env and adjust it to your needs.\n")
fmt.Print("Add your job-archive at ./var/job-archive.\n")
os.Exit(0)
}
if !config.Keys.DisableAuthentication {
auth.Init()
@ -125,20 +125,27 @@ func main() {
if flagNewUser != "" {
parts := strings.SplitN(flagNewUser, ":", 3)
if len(parts) != 3 || len(parts[0]) == 0 {
log.Fatal("invalid argument format for user creation")
log.Abortf("Add User: Could not parse supplied argument format: No changes.\n"+
"Want: <username>:[admin,support,manager,api,user]:<password>\n"+
"Have: %s\n", flagNewUser)
}
ur := repository.GetUserRepository()
if err := ur.AddUser(&schema.User{
Username: parts[0], Projects: make([]string, 0), Password: parts[2], Roles: strings.Split(parts[1], ","),
}); err != nil {
log.Fatalf("adding '%s' user authentication failed: %v", parts[0], err)
log.Abortf("Add User: Could not add new user authentication for '%s' and roles '%s'.\nError: %s\n", parts[0], parts[1], err.Error())
} else {
log.Printf("Add User: Added new user '%s' with roles '%s'.\n", parts[0], parts[1])
}
}
if flagDelUser != "" {
ur := repository.GetUserRepository()
if err := ur.DelUser(flagDelUser); err != nil {
log.Fatalf("deleting user failed: %v", err)
log.Abortf("Delete User: Could not delete user '%s' from DB.\nError: %s\n", flagDelUser, err.Error())
} else {
log.Printf("Delete User: Deleted user '%s' from DB.\n", flagDelUser)
}
}
@ -146,60 +153,64 @@ func main() {
if flagSyncLDAP {
if authHandle.LdapAuth == nil {
log.Fatal("cannot sync: LDAP authentication is not configured")
log.Abort("Sync LDAP: LDAP authentication is not configured, could not synchronize. No changes, exited.")
}
if err := authHandle.LdapAuth.Sync(); err != nil {
log.Fatalf("LDAP sync failed: %v", err)
log.Abortf("Sync LDAP: Could not synchronize, failed with error.\nError: %s\n", err.Error())
}
log.Info("LDAP sync successful")
log.Print("Sync LDAP: LDAP synchronization successful.")
}
if flagGenJWT != "" {
ur := repository.GetUserRepository()
user, err := ur.GetUser(flagGenJWT)
if err != nil {
log.Fatalf("could not get user from JWT: %v", err)
log.Abortf("JWT: Could not get supplied user '%s' from DB. No changes, exited.\nError: %s\n", flagGenJWT, err.Error())
}
if !user.HasRole(schema.RoleApi) {
log.Warnf("user '%s' does not have the API role", user.Username)
log.Warnf("JWT: User '%s' does not have the role 'api'. REST API endpoints will return error!\n", user.Username)
}
jwt, err := authHandle.JwtAuth.ProvideJWT(user)
if err != nil {
log.Fatalf("failed to provide JWT to user '%s': %v", user.Username, err)
log.Abortf("JWT: User '%s' found in DB, but failed to provide JWT.\nError: %s\n", user.Username, err.Error())
}
fmt.Printf("MAIN > JWT for '%s': %s\n", user.Username, jwt)
log.Printf("JWT: Successfully generated JWT for user '%s': %s\n", user.Username, jwt)
}
} else if flagNewUser != "" || flagDelUser != "" {
log.Fatal("arguments --add-user and --del-user can only be used if authentication is enabled")
log.Abort("Error: Arguments '--add-user' and '--del-user' can only be used if authentication is enabled. No changes, exited.")
}
if err := archive.Init(config.Keys.Archive, config.Keys.DisableArchive); err != nil {
log.Fatalf("failed to initialize archive: %s", err.Error())
log.Abortf("Init: Failed to initialize archive.\nError: %s\n", err.Error())
}
if err := metricdata.Init(); err != nil {
log.Fatalf("failed to initialize metricdata repository: %s", err.Error())
log.Abortf("Init: Failed to initialize metricdata repository.\nError %s\n", err.Error())
}
if flagReinitDB {
if err := importer.InitDB(); err != nil {
log.Fatalf("failed to re-initialize repository DB: %s", err.Error())
log.Abortf("Init DB: Failed to re-initialize repository DB.\nError: %s\n", err.Error())
} else {
log.Print("Init DB: Sucessfully re-initialized repository DB.")
}
}
if flagImportJob != "" {
if err := importer.HandleImportFlag(flagImportJob); err != nil {
log.Fatalf("job import failed: %s", err.Error())
log.Abortf("Import Job: Job import failed.\nError: %s\n", err.Error())
} else {
log.Printf("Import Job: Imported Job '%s' into DB.\n", flagImportJob)
}
}
if !flagServer {
return
log.Exit("No errors, server flag not set. Exiting cc-backend.")
}
archiver.Start(repository.GetJobRepository())
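The `log.Abortf`/`log.Exitf` calls introduced throughout this diff replace the former `log.Fatal*` pattern. A speculative sketch of what such helpers might look like; the real implementation in cc-backend's `pkg/log` may differ:

```go
package log

import (
	"fmt"
	"os"
)

// Exit prints a final message and terminates with exit code 0.
func Exit(msg ...any) {
	fmt.Fprintln(os.Stderr, msg...)
	os.Exit(0)
}

// Exitf is the formatted variant of Exit.
func Exitf(format string, args ...any) {
	fmt.Fprintf(os.Stderr, format, args...)
	os.Exit(0)
}

// Abort prints an error message and terminates with exit code 1.
func Abort(msg ...any) {
	fmt.Fprintln(os.Stderr, msg...)
	os.Exit(1)
}

// Abortf is the formatted variant of Abort.
func Abortf(format string, args ...any) {
	fmt.Fprintf(os.Stderr, format, args...)
	os.Exit(1)
}
```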


@ -18,6 +18,7 @@ import (
"time"
"github.com/99designs/gqlgen/graphql/handler"
"github.com/99designs/gqlgen/graphql/handler/transport"
"github.com/99designs/gqlgen/graphql/playground"
"github.com/ClusterCockpit/cc-backend/internal/api"
"github.com/ClusterCockpit/cc-backend/internal/archiver"
@ -53,18 +54,29 @@ func serverInit() {
// Setup the http.Handler/Router used by the server
graph.Init()
resolver := graph.GetResolverInstance()
graphQLEndpoint := handler.NewDefaultServer(
graphQLServer := handler.New(
generated.NewExecutableSchema(generated.Config{Resolvers: resolver}))
// graphQLServer.AddTransport(transport.SSE{})
graphQLServer.AddTransport(transport.POST{})
// graphQLServer.AddTransport(transport.Websocket{
// KeepAlivePingInterval: 10 * time.Second,
// Upgrader: websocket.Upgrader{
// CheckOrigin: func(r *http.Request) bool {
// return true
// },
// },
// })
if os.Getenv("DEBUG") != "1" {
// Having this handler means that an error message is returned via GraphQL instead of the connection simply being closed.
// The problem with this is that then, no more stacktrace is printed to stderr.
graphQLEndpoint.SetRecoverFunc(func(ctx context.Context, err interface{}) error {
graphQLServer.SetRecoverFunc(func(ctx context.Context, err any) error {
switch e := err.(type) {
case string:
return fmt.Errorf("MAIN > Panic: %s", e)
case error:
return fmt.Errorf("MAIN > Panic caused by: %w", e)
return fmt.Errorf("MAIN > Panic caused by: %s", e.Error())
}
return errors.New("MAIN > Internal server error (panic)")
@ -78,7 +90,7 @@ func serverInit() {
router = mux.NewRouter()
buildInfo := web.Build{Version: version, Hash: commit, Buildtime: date}
info := map[string]interface{}{}
info := map[string]any{}
info["hasOpenIDConnect"] = false
if config.Keys.OpenIDConfig != nil {
@ -208,7 +220,7 @@ func serverInit() {
router.PathPrefix("/swagger/").Handler(httpSwagger.Handler(
httpSwagger.URL("http://" + config.Keys.Addr + "/swagger/doc.json"))).Methods(http.MethodGet)
}
secured.Handle("/query", graphQLEndpoint)
secured.Handle("/query", graphQLServer)
// Send a searchId and then reply with a redirect to a user, or directly send query to job table for jobid and project.
secured.HandleFunc("/search", func(rw http.ResponseWriter, r *http.Request) {
@ -268,7 +280,7 @@ func serverStart() {
// Start http or https server
listener, err := net.Listen("tcp", config.Keys.Addr)
if err != nil {
log.Fatalf("starting http listener failed: %v", err)
log.Abortf("Server Start: Starting http listener on '%s' failed.\nError: %s\n", config.Keys.Addr, err.Error())
}
if !strings.HasSuffix(config.Keys.Addr, ":80") && config.Keys.RedirectHttpTo != "" {
@ -281,7 +293,7 @@ func serverStart() {
cert, err := tls.LoadX509KeyPair(
config.Keys.HttpsCertFile, config.Keys.HttpsKeyFile)
if err != nil {
log.Fatalf("loading X509 keypair failed: %v", err)
log.Abortf("Server Start: Loading X509 keypair failed. Check options 'https-cert-file' and 'https-key-file' in 'config.json'.\nError: %s\n", err.Error())
}
listener = tls.NewListener(listener, &tls.Config{
Certificates: []tls.Certificate{cert},
@ -292,20 +304,20 @@ func serverStart() {
MinVersion: tls.VersionTLS12,
PreferServerCipherSuites: true,
})
fmt.Printf("HTTPS server listening at %s...", config.Keys.Addr)
log.Printf("HTTPS server listening at %s...\n", config.Keys.Addr)
} else {
fmt.Printf("HTTP server listening at %s...", config.Keys.Addr)
log.Printf("HTTP server listening at %s...\n", config.Keys.Addr)
}
//
// Because this program will want to bind to a privileged port (like 80), the listener must
// be established first, then the user can be changed, and after that,
// the actual http server can be started.
if err := runtimeEnv.DropPrivileges(config.Keys.Group, config.Keys.User); err != nil {
log.Fatalf("error while preparing server start: %s", err.Error())
log.Abortf("Server Start: Error while preparing server start.\nError: %s\n", err.Error())
}
if err = server.Serve(listener); err != nil && err != http.ErrServerClosed {
log.Fatalf("starting server failed: %v", err)
log.Abortf("Server Start: Starting server failed.\nError: %s\n", err.Error())
}
}

View File

@ -17,6 +17,9 @@
60
]
},
"apiAllowedIPs": [
"*"
],
"emission-constant": 317,
"clusters": [
{
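The apiAllowedIPs option shown above gates token-authenticated REST requests by client IP; "*" disables the check. A hedged sketch of a locked-down variant (addresses are placeholders; matching is against plain IP strings, not CIDR ranges, per the securedCheck helper further below):

    "apiAllowedIPs": [
        "127.0.0.1",
        "192.168.0.10"
    ],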

View File

@ -1,50 +1,62 @@
{
"addr": "0.0.0.0:443",
"ldap": {
"url": "ldaps://test",
"user_base": "ou=people,ou=hpc,dc=test,dc=de",
"search_dn": "cn=hpcmonitoring,ou=roadm,ou=profile,ou=hpc,dc=test,dc=de",
"user_bind": "uid={username},ou=people,ou=hpc,dc=test,dc=de",
"user_filter": "(&(objectclass=posixAccount))"
},
"https-cert-file": "/etc/letsencrypt/live/url/fullchain.pem",
"https-key-file": "/etc/letsencrypt/live/url/privkey.pem",
"user": "clustercockpit",
"group": "clustercockpit",
"archive": {
"kind": "file",
"path": "./var/job-archive"
},
"validate": true,
"clusters": [
{
"name": "test",
"metricDataRepository": {
"kind": "cc-metric-store",
"url": "http://localhost:8082",
"token": "eyJhbGciOiJF-E-pQBQ"
},
"filterRanges": {
"numNodes": {
"from": 1,
"to": 64
},
"duration": {
"from": 0,
"to": 86400
},
"startTime": {
"from": "2022-01-01T00:00:00Z",
"to": null
}
}
"addr": "0.0.0.0:443",
"ldap": {
"url": "ldaps://test",
"user_base": "ou=people,ou=hpc,dc=test,dc=de",
"search_dn": "cn=hpcmonitoring,ou=roadm,ou=profile,ou=hpc,dc=test,dc=de",
"user_bind": "uid={username},ou=people,ou=hpc,dc=test,dc=de",
"user_filter": "(&(objectclass=posixAccount))"
},
"https-cert-file": "/etc/letsencrypt/live/url/fullchain.pem",
"https-key-file": "/etc/letsencrypt/live/url/privkey.pem",
"user": "clustercockpit",
"group": "clustercockpit",
"archive": {
"kind": "file",
"path": "./var/job-archive"
},
"validate": false,
"apiAllowedIPs": [
"*"
],
"clusters": [
{
"name": "test",
"metricDataRepository": {
"kind": "cc-metric-store",
"url": "http://localhost:8082",
"token": "eyJhbGciOiJF-E-pQBQ"
},
"filterRanges": {
"numNodes": {
"from": 1,
"to": 64
},
"duration": {
"from": 0,
"to": 86400
},
"startTime": {
"from": "2022-01-01T00:00:00Z",
"to": null
}
],
"jwts": {
"cookieName": "",
"validateUser": false,
"max-age": "2000h",
"trustedIssuer": ""
},
"short-running-jobs-duration": 300
}
}
],
"jwts": {
"cookieName": "",
"validateUser": false,
"max-age": "2000h",
"trustedIssuer": ""
},
"enable-resampling": {
"trigger": 30,
"resolutions": [
600,
300,
120,
60
]
},
"short-running-jobs-duration": 300
}

14
go.mod
View File

@ -2,6 +2,8 @@ module github.com/ClusterCockpit/cc-backend
go 1.23.5
toolchain go1.24.1
require (
github.com/99designs/gqlgen v0.17.66
github.com/ClusterCockpit/cc-units v0.4.0
@ -10,7 +12,7 @@ require (
github.com/go-co-op/gocron/v2 v2.16.0
github.com/go-ldap/ldap/v3 v3.4.10
github.com/go-sql-driver/mysql v1.9.0
github.com/golang-jwt/jwt/v5 v5.2.1
github.com/golang-jwt/jwt/v5 v5.2.2
github.com/golang-migrate/migrate/v4 v4.18.2
github.com/google/gops v0.3.28
github.com/gorilla/handlers v1.5.2
@ -26,7 +28,7 @@ require (
github.com/swaggo/http-swagger v1.3.4
github.com/swaggo/swag v1.16.4
github.com/vektah/gqlparser/v2 v2.5.22
golang.org/x/crypto v0.35.0
golang.org/x/crypto v0.36.0
golang.org/x/exp v0.0.0-20250218142911-aa4b98e5adaa
golang.org/x/oauth2 v0.27.0
golang.org/x/time v0.5.0
@ -78,10 +80,10 @@ require (
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 // indirect
go.uber.org/atomic v1.11.0 // indirect
golang.org/x/mod v0.23.0 // indirect
golang.org/x/net v0.35.0 // indirect
golang.org/x/sync v0.11.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/text v0.22.0 // indirect
golang.org/x/net v0.38.0 // indirect
golang.org/x/sync v0.12.0 // indirect
golang.org/x/sys v0.31.0 // indirect
golang.org/x/text v0.23.0 // indirect
golang.org/x/tools v0.30.0 // indirect
google.golang.org/protobuf v1.36.5 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect

24
go.sum
View File

@ -83,8 +83,8 @@ github.com/go-viper/mapstructure/v2 v2.2.1 h1:ZAaOCxANMuZx5RCeg0mBdEZk7DZasvvZIx
github.com/go-viper/mapstructure/v2 v2.2.1/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang-migrate/migrate/v4 v4.18.2 h1:2VSCMz7x7mjyTXx3m2zPokOY82LTRgxK1yQYKo6wWQ8=
github.com/golang-migrate/migrate/v4 v4.18.2/go.mod h1:2CM6tJvn2kqPXwnXO/d3rAQYiyoIm180VsO8PRX6Rpk=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
@ -257,8 +257,8 @@ golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliY
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.35.0 h1:b15kiHdrGCHrP6LvwaQ3c03kgNhhiMgvlhxHQhmg2Xs=
golang.org/x/crypto v0.35.0/go.mod h1:dy7dXNW32cAb/6/PRuTNsix8T+vJAqvuIy5Bli/x0YQ=
golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
golang.org/x/exp v0.0.0-20250218142911-aa4b98e5adaa h1:t2QcU6V556bFjYgu4L6C+6VrCPyJZ+eyRsABUPs1mz4=
golang.org/x/exp v0.0.0-20250218142911-aa4b98e5adaa/go.mod h1:BHOTPb3L19zxehTsLoJXVaTktb06DFgmdW6Wb9s8jqk=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
@ -279,8 +279,8 @@ golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.35.0 h1:T5GQRQb2y08kTAByq9L4/bz8cipCdA8FbRTXewonqY8=
golang.org/x/net v0.35.0/go.mod h1:EglIi67kWsHKlRzzVMUD93VMSWGFOMSZgxFjparz1Qk=
golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/oauth2 v0.27.0 h1:da9Vo7/tDv5RH/7nZDz1eMGS/q1Vv1N/7FCrBhI9I3M=
golang.org/x/oauth2 v0.27.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -290,8 +290,8 @@ golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@ -303,8 +303,8 @@ golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@ -323,8 +323,8 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=

View File

@ -45,6 +45,9 @@ func setup(t *testing.T) *api.RestApi {
"jwts": {
"max-age": "2m"
},
"apiAllowedIPs": [
"*"
],
"clusters": [
{
"name": "testcluster",

View File

@ -23,7 +23,7 @@ const docTemplate = `{
"host": "{{.Host}}",
"basePath": "{{.BasePath}}",
"paths": {
"/clusters/": {
"/api/clusters/": {
"get": {
"security": [
{
@ -80,7 +80,7 @@ const docTemplate = `{
}
}
},
"/jobs/": {
"/api/jobs/": {
"get": {
"security": [
{
@ -175,7 +175,7 @@ const docTemplate = `{
}
}
},
"/jobs/delete_job/": {
"/api/jobs/delete_job/": {
"delete": {
"security": [
{
@ -250,7 +250,7 @@ const docTemplate = `{
}
}
},
"/jobs/delete_job/{id}": {
"/api/jobs/delete_job/{id}": {
"delete": {
"security": [
{
@ -320,7 +320,7 @@ const docTemplate = `{
}
}
},
"/jobs/delete_job_before/{ts}": {
"/api/jobs/delete_job_before/{ts}": {
"delete": {
"security": [
{
@ -390,7 +390,7 @@ const docTemplate = `{
}
}
},
"/jobs/edit_meta/{id}": {
"/api/jobs/edit_meta/{id}": {
"post": {
"security": [
{
@ -460,7 +460,7 @@ const docTemplate = `{
}
}
},
"/jobs/start_job/": {
"/api/jobs/start_job/": {
"post": {
"security": [
{
@ -529,7 +529,7 @@ const docTemplate = `{
}
}
},
"/jobs/stop_job/": {
"/api/jobs/stop_job/": {
"post": {
"security": [
{
@ -601,7 +601,7 @@ const docTemplate = `{
}
}
},
"/jobs/tag_job/{id}": {
"/api/jobs/tag_job/{id}": {
"post": {
"security": [
{
@ -674,7 +674,7 @@ const docTemplate = `{
}
}
},
"/jobs/{id}": {
"/api/jobs/{id}": {
"get": {
"security": [
{
@ -833,185 +833,14 @@ const docTemplate = `{
}
}
},
"/notice/": {
"post": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "Modifies the content of notice.txt, shown as notice box on the homepage.\nIf more than one formValue is set then only the highest priority field is used.\nOnly accessible from IPs registered with apiAllowedIPs configuration option.",
"consumes": [
"multipart/form-data"
],
"produces": [
"text/plain"
],
"tags": [
"User"
],
"summary": "Updates or empties the notice box content",
"parameters": [
{
"type": "string",
"description": "Priority 1: New content to display",
"name": "new-content",
"in": "formData"
}
],
"responses": {
"200": {
"description": "Success Response Message",
"schema": {
"type": "string"
}
},
"400": {
"description": "Bad Request",
"schema": {
"type": "string"
}
},
"401": {
"description": "Unauthorized",
"schema": {
"type": "string"
}
},
"403": {
"description": "Forbidden",
"schema": {
"type": "string"
}
},
"422": {
"description": "Unprocessable Entity: The user could not be updated",
"schema": {
"type": "string"
}
},
"500": {
"description": "Internal Server Error",
"schema": {
"type": "string"
}
}
}
}
},
"/user/{id}": {
"post": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "Modifies user defined by username (id) in one of four possible ways.\nIf more than one formValue is set then only the highest priority field is used.\nOnly accessible from IPs registered with apiAllowedIPs configuration option.",
"consumes": [
"multipart/form-data"
],
"produces": [
"text/plain"
],
"tags": [
"User"
],
"summary": "Updates an existing user",
"parameters": [
{
"type": "string",
"description": "Database ID of User",
"name": "id",
"in": "path",
"required": true
},
{
"enum": [
"admin",
"support",
"manager",
"user",
"api"
],
"type": "string",
"description": "Priority 1: Role to add",
"name": "add-role",
"in": "formData"
},
{
"enum": [
"admin",
"support",
"manager",
"user",
"api"
],
"type": "string",
"description": "Priority 2: Role to remove",
"name": "remove-role",
"in": "formData"
},
{
"type": "string",
"description": "Priority 3: Project to add",
"name": "add-project",
"in": "formData"
},
{
"type": "string",
"description": "Priority 4: Project to remove",
"name": "remove-project",
"in": "formData"
}
],
"responses": {
"200": {
"description": "Success Response Message",
"schema": {
"type": "string"
}
},
"400": {
"description": "Bad Request",
"schema": {
"type": "string"
}
},
"401": {
"description": "Unauthorized",
"schema": {
"type": "string"
}
},
"403": {
"description": "Forbidden",
"schema": {
"type": "string"
}
},
"422": {
"description": "Unprocessable Entity: The user could not be updated",
"schema": {
"type": "string"
}
},
"500": {
"description": "Internal Server Error",
"schema": {
"type": "string"
}
}
}
}
},
"/users/": {
"/api/users/": {
"get": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "Returns a JSON-encoded list of users.\nRequired query-parameter defines if all users or only users with additional special roles are returned.\nOnly accessible from IPs registered with apiAllowedIPs configuration option.",
"description": "Returns a JSON-encoded list of users.\nRequired query-parameter defines if all users or only users with additional special roles are returned.",
"produces": [
"application/json"
],
@ -1063,70 +892,111 @@ const docTemplate = `{
}
}
}
},
"post": {
}
},
"/jobs/tag_job/{id}": {
"delete": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "User specified in form data will be saved to database.\nOnly accessible from IPs registered with apiAllowedIPs configuration option.",
"description": "Removes tag(s) from a job specified by DB ID. Name and Type of Tag(s) must match.\nTag Scope is required for matching, options: \"global\", \"admin\". Private tags can not be deleted via API.\nIf tagged job is already finished: Tag will be removed from respective archive files.",
"consumes": [
"multipart/form-data"
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"Job add and modify"
],
"summary": "Removes one or more tags from a job",
"parameters": [
{
"type": "integer",
"description": "Job Database ID",
"name": "id",
"in": "path",
"required": true
},
{
"description": "Array of tag-objects to remove",
"name": "request",
"in": "body",
"required": true,
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/api.ApiTag"
}
}
}
],
"responses": {
"200": {
"description": "Updated job resource",
"schema": {
"$ref": "#/definitions/schema.Job"
}
},
"400": {
"description": "Bad Request",
"schema": {
"$ref": "#/definitions/api.ErrorResponse"
}
},
"401": {
"description": "Unauthorized",
"schema": {
"$ref": "#/definitions/api.ErrorResponse"
}
},
"404": {
"description": "Job or tag does not exist",
"schema": {
"$ref": "#/definitions/api.ErrorResponse"
}
},
"500": {
"description": "Internal Server Error",
"schema": {
"$ref": "#/definitions/api.ErrorResponse"
}
}
}
}
},
"/tags/": {
"delete": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "Removes tags by type and name. Name and Type of Tag(s) must match.\nTag Scope is required for matching, options: \"global\", \"admin\". Private tags can not be deleted via API.\nTag wills be removed from respective archive files.",
"consumes": [
"application/json"
],
"produces": [
"text/plain"
],
"tags": [
"User"
"Tag remove"
],
"summary": "Adds a new user",
"summary": "Removes all tags and job-relations for type:name tuple",
"parameters": [
{
"type": "string",
"description": "Unique user ID",
"name": "username",
"in": "formData",
"required": true
},
{
"type": "string",
"description": "User password",
"name": "password",
"in": "formData",
"required": true
},
{
"enum": [
"admin",
"support",
"manager",
"user",
"api"
],
"type": "string",
"description": "User role",
"name": "role",
"in": "formData",
"required": true
},
{
"type": "string",
"description": "Managed project, required for new manager role user",
"name": "project",
"in": "formData"
},
{
"type": "string",
"description": "Users name",
"name": "name",
"in": "formData"
},
{
"type": "string",
"description": "Users email",
"name": "email",
"in": "formData"
"description": "Array of tag-objects to remove",
"name": "request",
"in": "body",
"required": true,
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/api.ApiTag"
}
}
}
],
"responses": {
@ -1139,93 +1009,25 @@ const docTemplate = `{
"400": {
"description": "Bad Request",
"schema": {
"type": "string"
"$ref": "#/definitions/api.ErrorResponse"
}
},
"401": {
"description": "Unauthorized",
"schema": {
"type": "string"
"$ref": "#/definitions/api.ErrorResponse"
}
},
"403": {
"description": "Forbidden",
"404": {
"description": "Job or tag does not exist",
"schema": {
"type": "string"
}
},
"422": {
"description": "Unprocessable Entity: creating user failed",
"schema": {
"type": "string"
"$ref": "#/definitions/api.ErrorResponse"
}
},
"500": {
"description": "Internal Server Error",
"schema": {
"type": "string"
}
}
}
},
"delete": {
"security": [
{
"ApiKeyAuth": []
}
],
"description": "User defined by username in form data will be deleted from database.\nOnly accessible from IPs registered with apiAllowedIPs configuration option.",
"consumes": [
"multipart/form-data"
],
"produces": [
"text/plain"
],
"tags": [
"User"
],
"summary": "Deletes a user",
"parameters": [
{
"type": "string",
"description": "User ID to delete",
"name": "username",
"in": "formData",
"required": true
}
],
"responses": {
"200": {
"description": "User deleted successfully"
},
"400": {
"description": "Bad Request",
"schema": {
"type": "string"
}
},
"401": {
"description": "Unauthorized",
"schema": {
"type": "string"
}
},
"403": {
"description": "Forbidden",
"schema": {
"type": "string"
}
},
"422": {
"description": "Unprocessable Entity: deleting user failed",
"schema": {
"type": "string"
}
},
"500": {
"description": "Internal Server Error",
"schema": {
"type": "string"
"$ref": "#/definitions/api.ErrorResponse"
}
}
}
@ -2191,7 +1993,7 @@ const docTemplate = `{
var SwaggerInfo = &swag.Spec{
Version: "1.0.0",
Host: "localhost:8080",
BasePath: "/api",
BasePath: "",
Schemes: []string{},
Title: "ClusterCockpit REST API",
Description: "API for batch job control.",

View File

@ -46,7 +46,6 @@ import (
// @license.url https://opensource.org/licenses/MIT
// @host localhost:8080
// @basePath /api
// @securityDefinitions.apikey ApiKeyAuth
// @in header
@ -69,22 +68,27 @@ func New() *RestApi {
func (api *RestApi) MountApiRoutes(r *mux.Router) {
r.StrictSlash(true)
// REST API Uses TokenAuth
// User List
r.HandleFunc("/users/", api.getUsers).Methods(http.MethodGet)
// Cluster List
r.HandleFunc("/clusters/", api.getClusters).Methods(http.MethodGet)
// Job Handler
r.HandleFunc("/jobs/start_job/", api.startJob).Methods(http.MethodPost, http.MethodPut)
r.HandleFunc("/jobs/stop_job/", api.stopJobByRequest).Methods(http.MethodPost, http.MethodPut)
// r.HandleFunc("/jobs/import/", api.importJob).Methods(http.MethodPost, http.MethodPut)
r.HandleFunc("/jobs/", api.getJobs).Methods(http.MethodGet)
r.HandleFunc("/jobs/{id}", api.getJobById).Methods(http.MethodPost)
r.HandleFunc("/jobs/{id}", api.getCompleteJobById).Methods(http.MethodGet)
r.HandleFunc("/jobs/tag_job/{id}", api.tagJob).Methods(http.MethodPost, http.MethodPatch)
r.HandleFunc("/jobs/tag_job/{id}", api.removeTagJob).Methods(http.MethodDelete)
r.HandleFunc("/jobs/edit_meta/{id}", api.editMeta).Methods(http.MethodPost, http.MethodPatch)
r.HandleFunc("/jobs/metrics/{id}", api.getJobMetrics).Methods(http.MethodGet)
r.HandleFunc("/jobs/delete_job/", api.deleteJobByRequest).Methods(http.MethodDelete)
r.HandleFunc("/jobs/delete_job/{id}", api.deleteJobById).Methods(http.MethodDelete)
r.HandleFunc("/jobs/delete_job_before/{ts}", api.deleteJobBefore).Methods(http.MethodDelete)
r.HandleFunc("/clusters/", api.getClusters).Methods(http.MethodGet)
r.HandleFunc("/tags/", api.removeTags).Methods(http.MethodDelete)
if api.MachineStateDir != "" {
r.HandleFunc("/machine_state/{cluster}/{host}", api.getMachineState).Methods(http.MethodGet)
@ -94,7 +98,7 @@ func (api *RestApi) MountApiRoutes(r *mux.Router) {
func (api *RestApi) MountUserApiRoutes(r *mux.Router) {
r.StrictSlash(true)
// REST API Uses TokenAuth
r.HandleFunc("/jobs/", api.getJobs).Methods(http.MethodGet)
r.HandleFunc("/jobs/{id}", api.getJobById).Methods(http.MethodPost)
r.HandleFunc("/jobs/{id}", api.getCompleteJobById).Methods(http.MethodGet)
@ -103,7 +107,7 @@ func (api *RestApi) MountUserApiRoutes(r *mux.Router) {
func (api *RestApi) MountConfigApiRoutes(r *mux.Router) {
r.StrictSlash(true)
// Settings Frontend Uses SessionAuth
if api.Authentication != nil {
r.HandleFunc("/roles/", api.getRoles).Methods(http.MethodGet)
r.HandleFunc("/users/", api.createUser).Methods(http.MethodPost, http.MethodPut)
@ -116,7 +120,7 @@ func (api *RestApi) MountConfigApiRoutes(r *mux.Router) {
func (api *RestApi) MountFrontendApiRoutes(r *mux.Router) {
r.StrictSlash(true)
// Settings Frontend Uses SessionAuth
if api.Authentication != nil {
r.HandleFunc("/jwt/", api.getJWT).Methods(http.MethodGet)
r.HandleFunc("/configuration/", api.updateConfiguration).Methods(http.MethodPost)
@ -215,50 +219,12 @@ func handleError(err error, statusCode int, rw http.ResponseWriter) {
})
}
func decode(r io.Reader, val interface{}) error {
func decode(r io.Reader, val any) error {
dec := json.NewDecoder(r)
dec.DisallowUnknownFields()
return dec.Decode(val)
}
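Because decode enables DisallowUnknownFields, a request body containing any field not present in the target struct is rejected instead of being silently ignored. A minimal sketch (the struct is illustrative, not from this diff):

    type tagReq struct {
        Type string `json:"type"`
    }
    var v tagReq
    err := decode(strings.NewReader(`{"type": "a", "extra": 1}`), &v)
    // err != nil: json: unknown field "extra"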
func securedCheck(r *http.Request) error {
user := repository.GetUserFromContext(r.Context())
if user == nil {
return fmt.Errorf("no user in context")
}
if user.AuthType == schema.AuthToken {
// If nothing declared in config: deny all request to this endpoint
if config.Keys.ApiAllowedIPs == nil || len(config.Keys.ApiAllowedIPs) == 0 {
return fmt.Errorf("missing configuration key ApiAllowedIPs")
}
if config.Keys.ApiAllowedIPs[0] == "*" {
return nil
}
// extract IP address
IPAddress := r.Header.Get("X-Real-Ip")
if IPAddress == "" {
IPAddress = r.Header.Get("X-Forwarded-For")
}
if IPAddress == "" {
IPAddress = r.RemoteAddr
}
if strings.Contains(IPAddress, ":") {
IPAddress = strings.Split(IPAddress, ":")[0]
}
// check if IP is allowed
if !util.Contains(config.Keys.ApiAllowedIPs, IPAddress) {
return fmt.Errorf("unknown ip: %v", IPAddress)
}
}
return nil
}
// getClusters godoc
// @summary Lists all cluster configs
// @tags Cluster query
@ -271,7 +237,7 @@ func securedCheck(r *http.Request) error {
// @failure 403 {object} api.ErrorResponse "Forbidden"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /clusters/ [get]
// @router /api/clusters/ [get]
func (api *RestApi) getClusters(rw http.ResponseWriter, r *http.Request) {
if user := repository.GetUserFromContext(r.Context()); user != nil &&
!user.HasRole(schema.RoleApi) {
@ -326,7 +292,7 @@ func (api *RestApi) getClusters(rw http.ResponseWriter, r *http.Request) {
// @failure 403 {object} api.ErrorResponse "Forbidden"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /jobs/ [get]
// @router /api/jobs/ [get]
func (api *RestApi) getJobs(rw http.ResponseWriter, r *http.Request) {
withMetadata := false
filter := &model.JobFilter{}
@ -460,7 +426,7 @@ func (api *RestApi) getJobs(rw http.ResponseWriter, r *http.Request) {
// @failure 422 {object} api.ErrorResponse "Unprocessable Entity: finding job failed: sql: no rows in result set"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /jobs/{id} [get]
// @router /api/jobs/{id} [get]
func (api *RestApi) getCompleteJobById(rw http.ResponseWriter, r *http.Request) {
// Fetch job from db
id, ok := mux.Vars(r)["id"]
@ -553,7 +519,7 @@ func (api *RestApi) getCompleteJobById(rw http.ResponseWriter, r *http.Request)
// @failure 422 {object} api.ErrorResponse "Unprocessable Entity: finding job failed: sql: no rows in result set"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /jobs/{id} [post]
// @router /api/jobs/{id} [post]
func (api *RestApi) getJobById(rw http.ResponseWriter, r *http.Request) {
// Fetch job from db
id, ok := mux.Vars(r)["id"]
@ -657,7 +623,7 @@ func (api *RestApi) getJobById(rw http.ResponseWriter, r *http.Request) {
// @failure 404 {object} api.ErrorResponse "Job does not exist"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /jobs/edit_meta/{id} [post]
// @router /api/jobs/edit_meta/{id} [post]
func (api *RestApi) editMeta(rw http.ResponseWriter, r *http.Request) {
id, err := strconv.ParseInt(mux.Vars(r)["id"], 10, 64)
if err != nil {
@ -703,7 +669,7 @@ func (api *RestApi) editMeta(rw http.ResponseWriter, r *http.Request) {
// @failure 404 {object} api.ErrorResponse "Job or tag does not exist"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /jobs/tag_job/{id} [post]
// @router /api/jobs/tag_job/{id} [post]
func (api *RestApi) tagJob(rw http.ResponseWriter, r *http.Request) {
id, err := strconv.ParseInt(mux.Vars(r)["id"], 10, 64)
if err != nil {
@ -749,6 +715,114 @@ func (api *RestApi) tagJob(rw http.ResponseWriter, r *http.Request) {
json.NewEncoder(rw).Encode(job)
}
// removeTagJob godoc
// @summary Removes one or more tags from a job
// @tags Job add and modify
// @description Removes tag(s) from a job specified by DB ID. Name and Type of Tag(s) must match.
// @description Tag Scope is required for matching, options: "global", "admin". Private tags cannot be deleted via API.
// @description If the tagged job is already finished, the tag will be removed from the respective archive files.
// @accept json
// @produce json
// @param id path int true "Job Database ID"
// @param request body api.TagJobApiRequest true "Array of tag-objects to remove"
// @success 200 {object} schema.Job "Updated job resource"
// @failure 400 {object} api.ErrorResponse "Bad Request"
// @failure 401 {object} api.ErrorResponse "Unauthorized"
// @failure 404 {object} api.ErrorResponse "Job or tag does not exist"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /api/jobs/tag_job/{id} [delete]
func (api *RestApi) removeTagJob(rw http.ResponseWriter, r *http.Request) {
id, err := strconv.ParseInt(mux.Vars(r)["id"], 10, 64)
if err != nil {
http.Error(rw, err.Error(), http.StatusBadRequest)
return
}
job, err := api.JobRepository.FindById(r.Context(), id)
if err != nil {
http.Error(rw, err.Error(), http.StatusNotFound)
return
}
job.Tags, err = api.JobRepository.GetTags(repository.GetUserFromContext(r.Context()), &job.ID)
if err != nil {
http.Error(rw, err.Error(), http.StatusInternalServerError)
return
}
var req TagJobApiRequest
if err := decode(r.Body, &req); err != nil {
http.Error(rw, err.Error(), http.StatusBadRequest)
return
}
for _, rtag := range req {
// Only Global and Admin Tags
if rtag.Scope != "global" && rtag.Scope != "admin" {
log.Warnf("Cannot delete private tag for job %d: Skip", job.JobID)
continue
}
remainingTags, err := api.JobRepository.RemoveJobTagByRequest(repository.GetUserFromContext(r.Context()), job.ID, rtag.Type, rtag.Name, rtag.Scope)
if err != nil {
http.Error(rw, err.Error(), http.StatusInternalServerError)
return
}
job.Tags = remainingTags
}
rw.Header().Add("Content-Type", "application/json")
rw.WriteHeader(http.StatusOK)
json.NewEncoder(rw).Encode(job)
}
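The request body for this endpoint is a JSON array of tag objects; a hedged example (values are placeholders, field names follow the api.ApiTag usage in the swagger spec above):

    DELETE /api/jobs/tag_job/123
    [
        { "type": "testType", "name": "testName", "scope": "global" }
    ]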
// removeTags godoc
// @summary Removes all tags and job-relations for type:name tuple
// @tags Tag remove
// @description Removes tags by type and name. Name and Type of Tag(s) must match.
// @description Tag Scope is required for matching, options: "global", "admin". Private tags cannot be deleted via API.
// @description Tags will be removed from the respective archive files.
// @accept json
// @produce plain
// @param request body api.TagJobApiRequest true "Array of tag-objects to remove"
// @success 200 {string} string "Success Response"
// @failure 400 {object} api.ErrorResponse "Bad Request"
// @failure 401 {object} api.ErrorResponse "Unauthorized"
// @failure 404 {object} api.ErrorResponse "Job or tag does not exist"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /api/tags/ [delete]
func (api *RestApi) removeTags(rw http.ResponseWriter, r *http.Request) {
var req TagJobApiRequest
if err := decode(r.Body, &req); err != nil {
http.Error(rw, err.Error(), http.StatusBadRequest)
return
}
targetCount := len(req)
currentCount := 0
for _, rtag := range req {
// Only Global and Admin Tags
if rtag.Scope != "global" && rtag.Scope != "admin" {
log.Warn("Cannot delete private tag: Skip")
continue
}
err := api.JobRepository.RemoveTagByRequest(rtag.Type, rtag.Name, rtag.Scope)
if err != nil {
http.Error(rw, err.Error(), http.StatusInternalServerError)
return
} else {
currentCount++
}
}
rw.WriteHeader(http.StatusOK)
rw.Write([]byte(fmt.Sprintf("Deleted Tags from DB: %d successful of %d requested\n", currentCount, targetCount)))
}
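A minimal Go client sketch for the batch endpoint (base URL and token are assumptions; the route expects a JWT via the ApiKeyAuth scheme):

    body := strings.NewReader(`[{"type": "testType", "name": "testName", "scope": "global"}]`)
    req, _ := http.NewRequest(http.MethodDelete, "http://localhost:8080/api/tags/", body)
    req.Header.Set("Authorization", "Bearer "+token) // token: assumed JWT string
    req.Header.Set("Content-Type", "application/json")
    resp, err := http.DefaultClient.Do(req)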
// startJob godoc
// @summary Adds a new job as "running"
// @tags Job add and modify
@ -764,7 +838,7 @@ func (api *RestApi) tagJob(rw http.ResponseWriter, r *http.Request) {
// @failure 422 {object} api.ErrorResponse "Unprocessable Entity: The combination of jobId, clusterId and startTime does already exist"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /jobs/start_job/ [post]
// @router /api/jobs/start_job/ [post]
func (api *RestApi) startJob(rw http.ResponseWriter, r *http.Request) {
req := schema.JobMeta{BaseJob: schema.JobDefaults}
if err := decode(r.Body, &req); err != nil {
@ -837,7 +911,7 @@ func (api *RestApi) startJob(rw http.ResponseWriter, r *http.Request) {
// @failure 422 {object} api.ErrorResponse "Unprocessable Entity: job has already been stopped"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /jobs/stop_job/ [post]
// @router /api/jobs/stop_job/ [post]
func (api *RestApi) stopJobByRequest(rw http.ResponseWriter, r *http.Request) {
// Parse request body
req := StopJobApiRequest{}
@ -878,7 +952,7 @@ func (api *RestApi) stopJobByRequest(rw http.ResponseWriter, r *http.Request) {
// @failure 422 {object} api.ErrorResponse "Unprocessable Entity: finding job failed: sql: no rows in result set"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /jobs/delete_job/{id} [delete]
// @router /api/jobs/delete_job/{id} [delete]
func (api *RestApi) deleteJobById(rw http.ResponseWriter, r *http.Request) {
// Fetch job (that will be stopped) from db
id, ok := mux.Vars(r)["id"]
@ -921,7 +995,7 @@ func (api *RestApi) deleteJobById(rw http.ResponseWriter, r *http.Request) {
// @failure 422 {object} api.ErrorResponse "Unprocessable Entity: finding job failed: sql: no rows in result set"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /jobs/delete_job/ [delete]
// @router /api/jobs/delete_job/ [delete]
func (api *RestApi) deleteJobByRequest(rw http.ResponseWriter, r *http.Request) {
// Parse request body
req := DeleteJobApiRequest{}
@ -971,7 +1045,7 @@ func (api *RestApi) deleteJobByRequest(rw http.ResponseWriter, r *http.Request)
// @failure 422 {object} api.ErrorResponse "Unprocessable Entity: finding job failed: sql: no rows in result set"
// @failure 500 {object} api.ErrorResponse "Internal Server Error"
// @security ApiKeyAuth
// @router /jobs/delete_job_before/{ts} [delete]
// @router /api/jobs/delete_job_before/{ts} [delete]
func (api *RestApi) deleteJobBefore(rw http.ResponseWriter, r *http.Request) {
var cnt int
// Fetch job (that will be stopped) from db
@ -1089,33 +1163,8 @@ func (api *RestApi) getJobMetrics(rw http.ResponseWriter, r *http.Request) {
})
}
// createUser godoc
// @summary Adds a new user
// @tags User
// @description User specified in form data will be saved to database.
// @description Only accessible from IPs registered with apiAllowedIPs configuration option.
// @accept mpfd
// @produce plain
// @param username formData string true "Unique user ID"
// @param password formData string true "User password"
// @param role formData string true "User role" Enums(admin, support, manager, user, api)
// @param project formData string false "Managed project, required for new manager role user"
// @param name formData string false "Users name"
// @param email formData string false "Users email"
// @success 200 {string} string "Success Response"
// @failure 400 {string} string "Bad Request"
// @failure 401 {string} string "Unauthorized"
// @failure 403 {string} string "Forbidden"
// @failure 422 {string} string "Unprocessable Entity: creating user failed"
// @failure 500 {string} string "Internal Server Error"
// @security ApiKeyAuth
// @router /users/ [post]
func (api *RestApi) createUser(rw http.ResponseWriter, r *http.Request) {
err := securedCheck(r)
if err != nil {
http.Error(rw, err.Error(), http.StatusForbidden)
return
}
// SecuredCheck() only worked with TokenAuth: Removed
rw.Header().Set("Content-Type", "text/plain")
me := repository.GetUserFromContext(r.Context())
@ -1158,28 +1207,8 @@ func (api *RestApi) createUser(rw http.ResponseWriter, r *http.Request) {
fmt.Fprintf(rw, "User %v successfully created!\n", username)
}
// deleteUser godoc
// @summary Deletes a user
// @tags User
// @description User defined by username in form data will be deleted from database.
// @description Only accessible from IPs registered with apiAllowedIPs configuration option.
// @accept mpfd
// @produce plain
// @param username formData string true "User ID to delete"
// @success 200 "User deleted successfully"
// @failure 400 {string} string "Bad Request"
// @failure 401 {string} string "Unauthorized"
// @failure 403 {string} string "Forbidden"
// @failure 422 {string} string "Unprocessable Entity: deleting user failed"
// @failure 500 {string} string "Internal Server Error"
// @security ApiKeyAuth
// @router /users/ [delete]
func (api *RestApi) deleteUser(rw http.ResponseWriter, r *http.Request) {
err := securedCheck(r)
if err != nil {
http.Error(rw, err.Error(), http.StatusForbidden)
return
}
// SecuredCheck() only worked with TokenAuth: Removed
if user := repository.GetUserFromContext(r.Context()); !user.HasRole(schema.RoleAdmin) {
http.Error(rw, "Only admins are allowed to delete a user", http.StatusForbidden)
@ -1200,7 +1229,6 @@ func (api *RestApi) deleteUser(rw http.ResponseWriter, r *http.Request) {
// @tags User
// @description Returns a JSON-encoded list of users.
// @description Required query-parameter defines if all users or only users with additional special roles are returned.
// @description Only accessible from IPs registered with apiAllowedIPs configuration option.
// @produce json
// @param not-just-user query bool true "If returned list should contain all users or only users with additional special roles"
// @success 200 {array} api.ApiReturnedUser "List of users returned successfully"
@ -1209,13 +1237,9 @@ func (api *RestApi) deleteUser(rw http.ResponseWriter, r *http.Request) {
// @failure 403 {string} string "Forbidden"
// @failure 500 {string} string "Internal Server Error"
// @security ApiKeyAuth
// @router /users/ [get]
// @router /api/users/ [get]
func (api *RestApi) getUsers(rw http.ResponseWriter, r *http.Request) {
err := securedCheck(r)
if err != nil {
http.Error(rw, err.Error(), http.StatusForbidden)
return
}
// SecuredCheck() only worked with TokenAuth: Removed
if user := repository.GetUserFromContext(r.Context()); !user.HasRole(schema.RoleAdmin) {
http.Error(rw, "Only admins are allowed to fetch a list of users", http.StatusForbidden)
@ -1231,33 +1255,8 @@ func (api *RestApi) getUsers(rw http.ResponseWriter, r *http.Request) {
json.NewEncoder(rw).Encode(users)
}
// updateUser godoc
// @summary Updates an existing user
// @tags User
// @description Modifies user defined by username (id) in one of four possible ways.
// @description If more than one formValue is set then only the highest priority field is used.
// @description Only accessible from IPs registered with apiAllowedIPs configuration option.
// @accept mpfd
// @produce plain
// @param id path string true "Database ID of User"
// @param add-role formData string false "Priority 1: Role to add" Enums(admin, support, manager, user, api)
// @param remove-role formData string false "Priority 2: Role to remove" Enums(admin, support, manager, user, api)
// @param add-project formData string false "Priority 3: Project to add"
// @param remove-project formData string false "Priority 4: Project to remove"
// @success 200 {string} string "Success Response Message"
// @failure 400 {string} string "Bad Request"
// @failure 401 {string} string "Unauthorized"
// @failure 403 {string} string "Forbidden"
// @failure 422 {string} string "Unprocessable Entity: The user could not be updated"
// @failure 500 {string} string "Internal Server Error"
// @security ApiKeyAuth
// @router /user/{id} [post]
func (api *RestApi) updateUser(rw http.ResponseWriter, r *http.Request) {
err := securedCheck(r)
if err != nil {
http.Error(rw, err.Error(), http.StatusForbidden)
return
}
// SecuredCheck() only worked with TokenAuth: Removed
if user := repository.GetUserFromContext(r.Context()); !user.HasRole(schema.RoleAdmin) {
http.Error(rw, "Only admins are allowed to update a user", http.StatusForbidden)
@ -1300,29 +1299,8 @@ func (api *RestApi) updateUser(rw http.ResponseWriter, r *http.Request) {
}
}
// editNotice godoc
// @summary Updates or empties the notice box content
// @tags User
// @description Modifies the content of notice.txt, shown as notice box on the homepage.
// @description If more than one formValue is set then only the highest priority field is used.
// @description Only accessible from IPs registered with apiAllowedIPs configuration option.
// @accept mpfd
// @produce plain
// @param new-content formData string false "Priority 1: New content to display"
// @success 200 {string} string "Success Response Message"
// @failure 400 {string} string "Bad Request"
// @failure 401 {string} string "Unauthorized"
// @failure 403 {string} string "Forbidden"
// @failure 422 {string} string "Unprocessable Entity: The user could not be updated"
// @failure 500 {string} string "Internal Server Error"
// @security ApiKeyAuth
// @router /notice/ [post]
func (api *RestApi) editNotice(rw http.ResponseWriter, r *http.Request) {
err := securedCheck(r)
if err != nil {
http.Error(rw, err.Error(), http.StatusForbidden)
return
}
// SecuredCheck() only worked with TokenAuth: Removed
if user := repository.GetUserFromContext(r.Context()); !user.HasRole(schema.RoleAdmin) {
http.Error(rw, "Only admins are allowed to update the notice.txt file", http.StatusForbidden)
@ -1364,12 +1342,6 @@ func (api *RestApi) editNotice(rw http.ResponseWriter, r *http.Request) {
}
func (api *RestApi) getJWT(rw http.ResponseWriter, r *http.Request) {
err := securedCheck(r)
if err != nil {
http.Error(rw, err.Error(), http.StatusForbidden)
return
}
rw.Header().Set("Content-Type", "text/plain")
username := r.FormValue("username")
me := repository.GetUserFromContext(r.Context())
@ -1398,11 +1370,7 @@ func (api *RestApi) getJWT(rw http.ResponseWriter, r *http.Request) {
}
func (api *RestApi) getRoles(rw http.ResponseWriter, r *http.Request) {
err := securedCheck(r)
if err != nil {
http.Error(rw, err.Error(), http.StatusForbidden)
return
}
// SecuredCheck() only worked with TokenAuth: Removed
user := repository.GetUserFromContext(r.Context())
if !user.HasRole(schema.RoleAdmin) {
@ -1423,8 +1391,6 @@ func (api *RestApi) updateConfiguration(rw http.ResponseWriter, r *http.Request)
rw.Header().Set("Content-Type", "text/plain")
key, value := r.FormValue("key"), r.FormValue("value")
// fmt.Printf("REST > KEY: %#v\nVALUE: %#v\n", key, value)
if err := repository.GetUserCfgRepo().UpdateConfig(key, value, repository.GetUserFromContext(r.Context())); err != nil {
http.Error(rw, err.Error(), http.StatusUnprocessableEntity)
return

View File

@ -10,9 +10,11 @@ import (
"database/sql"
"encoding/base64"
"errors"
"fmt"
"net"
"net/http"
"os"
"strings"
"sync"
"time"
@ -20,6 +22,7 @@ import (
"github.com/ClusterCockpit/cc-backend/internal/config"
"github.com/ClusterCockpit/cc-backend/internal/repository"
"github.com/ClusterCockpit/cc-backend/internal/util"
"github.com/ClusterCockpit/cc-backend/pkg/log"
"github.com/ClusterCockpit/cc-backend/pkg/schema"
"github.com/gorilla/sessions"
@ -233,9 +236,9 @@ func (auth *Authentication) Login(
limiter := getIPUserLimiter(ip, username)
if !limiter.Allow() {
log.Warnf("AUTH/RATE > Too many login attempts for combination IP: %s, Username: %s", ip, username)
onfailure(rw, r, errors.New("Too many login attempts, try again in a few minutes."))
return
log.Warnf("AUTH/RATE > Too many login attempts for combination IP: %s, Username: %s", ip, username)
onfailure(rw, r, errors.New("Too many login attempts, try again in a few minutes."))
return
}
var dbUser *schema.User
@ -325,6 +328,14 @@ func (auth *Authentication) AuthApi(
onfailure(rw, r, err)
return
}
ipErr := securedCheck(user, r)
if ipErr != nil {
log.Infof("auth api -> secured check failed: %s", ipErr.Error())
onfailure(rw, r, ipErr)
return
}
if user != nil {
switch {
case len(user.Roles) == 1:
@ -360,6 +371,7 @@ func (auth *Authentication) AuthUserApi(
onfailure(rw, r, err)
return
}
if user != nil {
switch {
case len(user.Roles) == 1:
@ -445,3 +457,38 @@ func (auth *Authentication) Logout(onsuccess http.Handler) http.Handler {
onsuccess.ServeHTTP(rw, r)
})
}
// Helper moved to middleware auth handlers
func securedCheck(user *schema.User, r *http.Request) error {
if user == nil {
return fmt.Errorf("no user for secured check")
}
// extract IP address for checking
IPAddress := r.Header.Get("X-Real-Ip")
if IPAddress == "" {
IPAddress = r.Header.Get("X-Forwarded-For")
}
if IPAddress == "" {
IPAddress = r.RemoteAddr
}
if strings.Contains(IPAddress, ":") {
IPAddress = strings.Split(IPAddress, ":")[0]
}
// If nothing declared in config: deny all request to this api endpoint
if len(config.Keys.ApiAllowedIPs) == 0 {
return fmt.Errorf("missing configuration key ApiAllowedIPs")
}
// If wildcard declared in config: Continue
if config.Keys.ApiAllowedIPs[0] == "*" {
return nil
}
// check if IP is allowed
if !util.Contains(config.Keys.ApiAllowedIPs, IPAddress) {
return fmt.Errorf("unknown ip: %v", IPAddress)
}
return nil
}
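One caveat: the port strip above splits on ":" and therefore also truncates IPv6 addresses at the first colon. A more robust alternative sketch (not what this commit does) would use the net package, which is already imported here:

    if host, _, err := net.SplitHostPort(IPAddress); err == nil {
        IPAddress = host
    }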

View File

@ -7,9 +7,9 @@ package config
import (
"bytes"
"encoding/json"
"log"
"os"
"github.com/ClusterCockpit/cc-backend/pkg/log"
"github.com/ClusterCockpit/cc-backend/pkg/schema"
)
@ -53,20 +53,20 @@ func Init(flagConfigFile string) {
raw, err := os.ReadFile(flagConfigFile)
if err != nil {
if !os.IsNotExist(err) {
log.Fatalf("CONFIG ERROR: %v", err)
log.Abortf("Config Init: Could not read config file '%s'.\nError: %s\n", flagConfigFile, err.Error())
}
} else {
if err := schema.Validate(schema.Config, bytes.NewReader(raw)); err != nil {
log.Fatalf("Validate config: %v\n", err)
log.Abortf("Config Init: Could not validate config file '%s'.\nError: %s\n", flagConfigFile, err.Error())
}
dec := json.NewDecoder(bytes.NewReader(raw))
dec.DisallowUnknownFields()
if err := dec.Decode(&Keys); err != nil {
log.Fatalf("could not decode: %v", err)
log.Abortf("Config Init: Could not decode config file '%s'.\nError: %s\n", flagConfigFile, err.Error())
}
if Keys.Clusters == nil || len(Keys.Clusters) < 1 {
log.Fatal("At least one cluster required in config!")
log.Abort("Config Init: At least one cluster required in config. Exited with error.")
}
}
}

File diff suppressed because it is too large

View File

@ -81,11 +81,6 @@ type JobLinkResultList struct {
Count *int `json:"count,omitempty"`
}
type JobMetricStatWithName struct {
Name string `json:"name"`
Stats *schema.MetricStatistics `json:"stats"`
}
type JobMetricWithName struct {
Name string `json:"name"`
Scope schema.MetricScope `json:"scope"`
@ -100,6 +95,17 @@ type JobResultList struct {
HasNextPage *bool `json:"hasNextPage,omitempty"`
}
type JobStats struct {
Name string `json:"name"`
Stats *schema.MetricStatistics `json:"stats"`
}
type JobStatsWithScope struct {
Name string `json:"name"`
Scope schema.MetricScope `json:"scope"`
Stats []*ScopedStats `json:"stats"`
}
type JobsStatistics struct {
ID string `json:"id"`
Name string `json:"name"`
@ -173,6 +179,12 @@ type PageRequest struct {
Page int `json:"page"`
}
type ScopedStats struct {
Hostname string `json:"hostname"`
ID *string `json:"id,omitempty"`
Data *schema.MetricStatistics `json:"data"`
}
type StringInput struct {
Eq *string `json:"eq,omitempty"`
Neq *string `json:"neq,omitempty"`

View File

@ -125,23 +125,41 @@ func (r *metricValueResolver) Name(ctx context.Context, obj *schema.MetricValue)
// CreateTag is the resolver for the createTag field.
func (r *mutationResolver) CreateTag(ctx context.Context, typeArg string, name string, scope string) (*schema.Tag, error) {
id, err := r.Repo.CreateTag(typeArg, name, scope)
if err != nil {
log.Warn("Error while creating tag")
return nil, err
user := repository.GetUserFromContext(ctx)
if user == nil {
return nil, fmt.Errorf("no user in context")
}
return &schema.Tag{ID: id, Type: typeArg, Name: name, Scope: scope}, nil
// Test Access: Admins && Admin Tag OR Support/Admin and Global Tag OR Everyone && Private Tag
if user.HasRole(schema.RoleAdmin) && scope == "admin" ||
user.HasAnyRole([]schema.Role{schema.RoleAdmin, schema.RoleSupport}) && scope == "global" ||
user.Username == scope {
// Create in DB
id, err := r.Repo.CreateTag(typeArg, name, scope)
if err != nil {
log.Warn("Error while creating tag")
return nil, err
}
return &schema.Tag{ID: id, Type: typeArg, Name: name, Scope: scope}, nil
} else {
log.Warnf("Not authorized to create tag with scope: %s", scope)
return nil, fmt.Errorf("Not authorized to create tag with scope: %s", scope)
}
}
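The same three-branch scope check recurs in the tag mutations below; a possible helper to centralize it (a sketch, not part of this commit):

    func canManageTagScope(user *schema.User, scope string) bool {
        return (user.HasRole(schema.RoleAdmin) && scope == "admin") ||
            (user.HasAnyRole([]schema.Role{schema.RoleAdmin, schema.RoleSupport}) && scope == "global") ||
            user.Username == scope
    }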
// DeleteTag is the resolver for the deleteTag field.
func (r *mutationResolver) DeleteTag(ctx context.Context, id string) (string, error) {
// This Uses ID string <-> ID string, removeTagFromList uses []string <-> []int
panic(fmt.Errorf("not implemented: DeleteTag - deleteTag"))
}
// AddTagsToJob is the resolver for the addTagsToJob field.
func (r *mutationResolver) AddTagsToJob(ctx context.Context, job string, tagIds []string) ([]*schema.Tag, error) {
// Selectable Tags Pre-Filtered by Scope in Frontend: No backend check required
user := repository.GetUserFromContext(ctx)
if user == nil {
return nil, fmt.Errorf("no user in context")
}
jid, err := strconv.ParseInt(job, 10, 64)
if err != nil {
log.Warn("Error while adding tag to job")
@ -150,15 +168,32 @@ func (r *mutationResolver) AddTagsToJob(ctx context.Context, job string, tagIds
tags := []*schema.Tag{}
for _, tagId := range tagIds {
// Get ID
tid, err := strconv.ParseInt(tagId, 10, 64)
if err != nil {
log.Warn("Error while parsing tag id")
return nil, err
}
if tags, err = r.Repo.AddTag(repository.GetUserFromContext(ctx), jid, tid); err != nil {
log.Warn("Error while adding tag")
return nil, err
// Test Exists
_, _, tscope, exists := r.Repo.TagInfo(tid)
if !exists {
log.Warnf("Tag does not exist (ID): %d", tid)
return nil, fmt.Errorf("Tag does not exist (ID): %d", tid)
}
// Test Access: Admins && Admin Tag OR Support/Admin and Global Tag OR Everyone && Private Tag
if user.HasRole(schema.RoleAdmin) && tscope == "admin" ||
user.HasAnyRole([]schema.Role{schema.RoleAdmin, schema.RoleSupport}) && tscope == "global" ||
user.Username == tscope {
// Add to Job
if tags, err = r.Repo.AddTag(user, jid, tid); err != nil {
log.Warn("Error while adding tag")
return nil, err
}
} else {
log.Warnf("Not authorized to add tag: %d", tid)
return nil, fmt.Errorf("Not authorized to add tag: %d", tid)
}
}
@ -167,7 +202,11 @@ func (r *mutationResolver) AddTagsToJob(ctx context.Context, job string, tagIds
// RemoveTagsFromJob is the resolver for the removeTagsFromJob field.
func (r *mutationResolver) RemoveTagsFromJob(ctx context.Context, job string, tagIds []string) ([]*schema.Tag, error) {
// Removable Tags Pre-Filtered by Scope in Frontend: No backend check required
user := repository.GetUserFromContext(ctx)
if user == nil {
return nil, fmt.Errorf("no user in context")
}
jid, err := strconv.ParseInt(job, 10, 64)
if err != nil {
log.Warn("Error while parsing job id")
@ -176,21 +215,80 @@ func (r *mutationResolver) RemoveTagsFromJob(ctx context.Context, job string, ta
tags := []*schema.Tag{}
for _, tagId := range tagIds {
// Get ID
tid, err := strconv.ParseInt(tagId, 10, 64)
if err != nil {
log.Warn("Error while parsing tag id")
return nil, err
}
if tags, err = r.Repo.RemoveTag(repository.GetUserFromContext(ctx), jid, tid); err != nil {
log.Warn("Error while removing tag")
return nil, err
// Test Exists
_, _, tscope, exists := r.Repo.TagInfo(tid)
if !exists {
log.Warnf("Tag does not exist (ID): %d", tid)
return nil, fmt.Errorf("Tag does not exist (ID): %d", tid)
}
// Test Access: Admins && Admin Tag OR Support/Admin and Global Tag OR Everyone && Private Tag
if user.HasRole(schema.RoleAdmin) && tscope == "admin" ||
user.HasAnyRole([]schema.Role{schema.RoleAdmin, schema.RoleSupport}) && tscope == "global" ||
user.Username == tscope {
// Remove from Job
if tags, err = r.Repo.RemoveTag(user, jid, tid); err != nil {
log.Warn("Error while removing tag")
return nil, err
}
} else {
log.Warnf("Not authorized to remove tag: %d", tid)
return nil, fmt.Errorf("Not authorized to remove tag: %d", tid)
}
}
return tags, nil
}
// RemoveTagFromList is the resolver for the removeTagFromList field.
func (r *mutationResolver) RemoveTagFromList(ctx context.Context, tagIds []string) ([]int, error) {
// Needs Contextuser
user := repository.GetUserFromContext(ctx)
if user == nil {
return nil, fmt.Errorf("no user in context")
}
tags := []int{}
for _, tagId := range tagIds {
// Get ID
tid, err := strconv.ParseInt(tagId, 10, 64)
if err != nil {
log.Warn("Error while parsing tag id for removal")
return nil, err
}
// Test Exists
_, _, tscope, exists := r.Repo.TagInfo(tid)
if !exists {
log.Warnf("Tag does not exist (ID): %d", tid)
return nil, fmt.Errorf("Tag does not exist (ID): %d", tid)
}
// Test Access: Admins && Admin or Global Tag OR Everyone && Private Tag
if user.HasRole(schema.RoleAdmin) && (tscope == "global" || tscope == "admin") || user.Username == tscope {
// Remove from DB
if err = r.Repo.RemoveTagById(tid); err != nil {
log.Warn("Error while removing tag")
return nil, err
} else {
tags = append(tags, int(tid))
}
} else {
log.Warnf("Not authorized to remove tag: %d", tid)
return nil, fmt.Errorf("Not authorized to remove tag: %d", tid)
}
}
return tags, nil
}
// UpdateConfiguration is the resolver for the updateConfiguration field.
func (r *mutationResolver) UpdateConfiguration(ctx context.Context, name string, value string) (*string, error) {
if err := repository.GetUserCfgRepo().UpdateConfig(name, value, repository.GetUserFromContext(ctx)); err != nil {
@ -301,24 +399,23 @@ func (r *queryResolver) JobMetrics(ctx context.Context, id string, metrics []str
return res, err
}
// JobMetricStats is the resolver for the jobMetricStats field.
func (r *queryResolver) JobMetricStats(ctx context.Context, id string, metrics []string) ([]*model.JobMetricStatWithName, error) {
// JobStats is the resolver for the jobStats field.
func (r *queryResolver) JobStats(ctx context.Context, id string, metrics []string) ([]*model.JobStats, error) {
job, err := r.Query().Job(ctx, id)
if err != nil {
log.Warn("Error while querying job for metrics")
log.Warnf("Error while querying job %s for metadata", id)
return nil, err
}
data, err := metricDataDispatcher.LoadStatData(job, metrics, ctx)
data, err := metricDataDispatcher.LoadJobStats(job, metrics, ctx)
if err != nil {
log.Warn("Error while loading job stat data")
log.Warnf("Error while loading jobStats data for job id %s", id)
return nil, err
}
res := []*model.JobMetricStatWithName{}
res := []*model.JobStats{}
for name, md := range data {
res = append(res, &model.JobMetricStatWithName{
res = append(res, &model.JobStats{
Name: name,
Stats: &md,
})
@ -327,6 +424,44 @@ func (r *queryResolver) JobMetricStats(ctx context.Context, id string, metrics [
return res, err
}
// ScopedJobStats is the resolver for the scopedJobStats field.
func (r *queryResolver) ScopedJobStats(ctx context.Context, id string, metrics []string, scopes []schema.MetricScope) ([]*model.JobStatsWithScope, error) {
job, err := r.Query().Job(ctx, id)
if err != nil {
log.Warnf("Error while querying job %s for metadata", id)
return nil, err
}
data, err := metricDataDispatcher.LoadScopedJobStats(job, metrics, scopes, ctx)
if err != nil {
log.Warnf("Error while loading scopedJobStats data for job id %s", id)
return nil, err
}
res := make([]*model.JobStatsWithScope, 0)
for name, scoped := range data {
for scope, stats := range scoped {
mdlStats := make([]*model.ScopedStats, 0)
for _, stat := range stats {
mdlStats = append(mdlStats, &model.ScopedStats{
Hostname: stat.Hostname,
ID: stat.Id,
Data: stat.Data,
})
}
res = append(res, &model.JobStatsWithScope{
Name: name,
Scope: scope,
Stats: mdlStats,
})
}
}
return res, nil
}
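A hedged GraphQL query against the new resolver (argument and field names are inferred from the generated models above; the exact schema types, e.g. how MetricScope values are spelled, may differ):

    query {
        scopedJobStats(id: "123", metrics: ["flops_any"], scopes: ["node"]) {
            name
            scope
            stats { hostname, id, data { min, avg, max } }
        }
    }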
// JobsFootprints is the resolver for the jobsFootprints field.
func (r *queryResolver) JobsFootprints(ctx context.Context, filter []*model.JobFilter, metrics []string) (*model.Footprints, error) {
// NOTE: Legacy Naming! This resolver is for normalized histograms in analysis view only - *Not* related to DB "footprint" column!
@ -354,30 +489,28 @@ func (r *queryResolver) Jobs(ctx context.Context, filter []*model.JobFilter, pag
return nil, err
}
if !config.Keys.UiDefaults["job_list_usePaging"].(bool) {
hasNextPage := false
// page.Page += 1 : Simple, but expensive
// Example Page 4 @ 10 IpP : Does item 41 exist?
// Minimal Page 41 @ 1 IpP : If len(result) is 1, Page 5 @ 10 IpP exists.
nextPage := &model.PageRequest{
ItemsPerPage: 1,
Page: ((page.Page * page.ItemsPerPage) + 1),
}
nextJobs, err := r.Repo.QueryJobs(ctx, filter, nextPage, order)
if err != nil {
log.Warn("Error while querying next jobs")
return nil, err
}
if len(nextJobs) == 1 {
hasNextPage = true
}
return &model.JobResultList{Items: jobs, Count: &count, HasNextPage: &hasNextPage}, nil
} else {
return &model.JobResultList{Items: jobs, Count: &count}, nil
// Note: Even if the app default 'config.Keys.UiDefaults["job_list_usePaging"]' is set, always return the hasNextPage boolean:
// users can opt for continuous scrolling in the frontend even when the app default is paging.
/*
Example Page 4 @ 10 IpP : Does item 41 exist?
Minimal Page 41 @ 1 IpP : If len(result) is 1, Page 5 @ 10 IpP exists.
*/
nextPage := &model.PageRequest{
ItemsPerPage: 1,
Page: ((page.Page * page.ItemsPerPage) + 1),
}
nextJobs, err := r.Repo.QueryJobs(ctx, filter, nextPage, order)
if err != nil {
log.Warn("Error while querying next jobs")
return nil, err
}
hasNextPage := false
if len(nextJobs) == 1 {
hasNextPage = true
}
return &model.JobResultList{Items: jobs, Count: &count, HasNextPage: &hasNextPage}, nil
}
// JobsStatistics is the resolver for the jobsStatistics field.

View File

@ -45,6 +45,9 @@ func setup(t *testing.T) *repository.JobRepository {
"jwts": {
"max-age": "2m"
},
"apiAllowedIPs": [
"*"
],
"clusters": [
{
"name": "testcluster",

View File

@ -224,8 +224,34 @@ func LoadAverages(
return nil
}
// Used for polar plots in frontend
func LoadStatData(
// Used for statsTable in frontend: Return scoped statistics by metric.
func LoadScopedJobStats(
job *schema.Job,
metrics []string,
scopes []schema.MetricScope,
ctx context.Context,
) (schema.ScopedJobStats, error) {
if job.State != schema.JobStateRunning && !config.Keys.DisableArchive {
return archive.LoadScopedStatsFromArchive(job, metrics, scopes)
}
repo, err := metricdata.GetMetricDataRepo(job.Cluster)
if err != nil {
return nil, fmt.Errorf("job %d: no metric data repository configured for '%s'", job.JobID, job.Cluster)
}
scopedStats, err := repo.LoadScopedStats(job, metrics, scopes, ctx)
if err != nil {
log.Errorf("error while loading scoped statistics for job %d (User %s, Project %s)", job.JobID, job.User, job.Project)
return nil, err
}
return scopedStats, nil
}
// Used for polar plots in frontend: Aggregates statistics for all nodes to single values for job per metric.
func LoadJobStats(
job *schema.Job,
metrics []string,
ctx context.Context,
@ -237,12 +263,12 @@ func LoadStatData(
data := make(map[string]schema.MetricStatistics, len(metrics))
repo, err := metricdata.GetMetricDataRepo(job.Cluster)
if err != nil {
return data, fmt.Errorf("METRICDATA/METRICDATA > no metric data repository configured for '%s'", job.Cluster)
return data, fmt.Errorf("job %d: no metric data repository configured for '%s'", job.JobID, job.Cluster)
}
stats, err := repo.LoadStats(job, metrics, ctx)
if err != nil {
log.Errorf("Error while loading statistics for job %v (User %v, Project %v)", job.JobID, job.User, job.Project)
log.Errorf("error while loading statistics for job %d (User %s, Project %s)", job.JobID, job.User, job.Project)
return data, err
}

View File

@ -129,13 +129,13 @@ func (ccms *CCMetricStore) doRequest(
) (*ApiQueryResponse, error) {
buf := &bytes.Buffer{}
if err := json.NewEncoder(buf).Encode(body); err != nil {
log.Warn("Error while encoding request body")
log.Errorf("Error while encoding request body: %s", err.Error())
return nil, err
}
req, err := http.NewRequestWithContext(ctx, http.MethodGet, ccms.queryEndpoint, buf)
if err != nil {
log.Warn("Error while building request body")
log.Errorf("Error while building request body: %s", err.Error())
return nil, err
}
if ccms.jwt != "" {
@ -151,7 +151,7 @@ func (ccms *CCMetricStore) doRequest(
res, err := ccms.client.Do(req)
if err != nil {
log.Error("Error while performing request")
log.Errorf("Error while performing request: %s", err.Error())
return nil, err
}
@ -161,7 +161,7 @@ func (ccms *CCMetricStore) doRequest(
var resBody ApiQueryResponse
if err := json.NewDecoder(bufio.NewReader(res.Body)).Decode(&resBody); err != nil {
log.Warn("Error while decoding result body")
log.Errorf("Error while decoding result body: %s", err.Error())
return nil, err
}
@ -177,7 +177,7 @@ func (ccms *CCMetricStore) LoadData(
) (schema.JobData, error) {
queries, assignedScope, err := ccms.buildQueries(job, metrics, scopes, resolution)
if err != nil {
log.Warn("Error while building queries")
log.Errorf("Error while building queries for jobId %d, Metrics %v, Scopes %v: %s", job.JobID, metrics, scopes, err.Error())
return nil, err
}
@ -192,7 +192,7 @@ func (ccms *CCMetricStore) LoadData(
resBody, err := ccms.doRequest(ctx, &req)
if err != nil {
log.Error("Error while performing request")
log.Errorf("Error while performing request: %s", err.Error())
return nil, err
}
@ -302,6 +302,20 @@ func (ccms *CCMetricStore) buildQueries(
continue
}
// Skip if metric is removed for subcluster
if len(mc.SubClusters) != 0 {
isRemoved := false
for _, scConfig := range mc.SubClusters {
if scConfig.Name == job.SubCluster && scConfig.Remove {
isRemoved = true
break
}
}
if isRemoved {
continue
}
}
// Avoid duplicates...
handledScopes := make([]schema.MetricScope, 0, 3)
@ -557,16 +571,9 @@ func (ccms *CCMetricStore) LoadStats(
ctx context.Context,
) (map[string]map[string]schema.MetricStatistics, error) {
// metricConfigs := archive.GetCluster(job.Cluster).MetricConfig
// resolution := 9000
// for _, mc := range metricConfigs {
// resolution = min(resolution, mc.Timestep)
// }
queries, _, err := ccms.buildQueries(job, metrics, []schema.MetricScope{schema.MetricScopeNode}, 0) // #166 Add scope here for analysis view accelerator normalization?
if err != nil {
log.Warn("Error while building query")
log.Errorf("Error while building queries for jobId %d, Metrics %v: %s", job.JobID, metrics, err.Error())
return nil, err
}
@ -581,7 +588,7 @@ func (ccms *CCMetricStore) LoadStats(
resBody, err := ccms.doRequest(ctx, &req)
if err != nil {
log.Error("Error while performing request")
log.Errorf("Error while performing request: %s", err.Error())
return nil, err
}
@ -591,9 +598,8 @@ func (ccms *CCMetricStore) LoadStats(
metric := ccms.toLocalName(query.Metric)
data := res[0]
if data.Error != nil {
log.Infof("fetching %s for node %s failed: %s", metric, query.Hostname, *data.Error)
log.Errorf("fetching %s for node %s failed: %s", metric, query.Hostname, *data.Error)
continue
// return nil, fmt.Errorf("METRICDATA/CCMS > fetching %s for node %s failed: %s", metric, query.Hostname, *data.Error)
}
metricdata, ok := stats[metric]
@ -603,9 +609,8 @@ func (ccms *CCMetricStore) LoadStats(
}
if data.Avg.IsNaN() || data.Min.IsNaN() || data.Max.IsNaN() {
log.Infof("fetching %s for node %s failed: one of avg/min/max is NaN", metric, query.Hostname)
log.Warnf("fetching %s for node %s failed: one of avg/min/max is NaN", metric, query.Hostname)
continue
// return nil, fmt.Errorf("METRICDATA/CCMS > fetching %s for node %s failed: %s", metric, query.Hostname, "avg/min/max is NaN")
}
metricdata[query.Hostname] = schema.MetricStatistics{
@ -618,7 +623,98 @@ func (ccms *CCMetricStore) LoadStats(
return stats, nil
}
// TODO: Support sub-node-scope metrics! For this, the partition of a node needs to be known!
// Used for Job-View Statistics Table
func (ccms *CCMetricStore) LoadScopedStats(
job *schema.Job,
metrics []string,
scopes []schema.MetricScope,
ctx context.Context,
) (schema.ScopedJobStats, error) {
queries, assignedScope, err := ccms.buildQueries(job, metrics, scopes, 0)
if err != nil {
log.Errorf("Error while building queries for jobId %d, Metrics %v, Scopes %v: %s", job.JobID, metrics, scopes, err.Error())
return nil, err
}
req := ApiQueryRequest{
Cluster: job.Cluster,
From: job.StartTime.Unix(),
To: job.StartTime.Add(time.Duration(job.Duration) * time.Second).Unix(),
Queries: queries,
WithStats: true,
WithData: false,
}
resBody, err := ccms.doRequest(ctx, &req)
if err != nil {
log.Errorf("Error while performing request: %s", err.Error())
return nil, err
}
var errors []string
scopedJobStats := make(schema.ScopedJobStats)
for i, row := range resBody.Results {
query := req.Queries[i]
metric := ccms.toLocalName(query.Metric)
scope := assignedScope[i]
if _, ok := scopedJobStats[metric]; !ok {
scopedJobStats[metric] = make(map[schema.MetricScope][]*schema.ScopedStats)
}
if _, ok := scopedJobStats[metric][scope]; !ok {
scopedJobStats[metric][scope] = make([]*schema.ScopedStats, 0)
}
for ndx, res := range row {
if res.Error != nil {
/* Build list for "partial errors", if any */
errors = append(errors, fmt.Sprintf("failed to fetch '%s' from host '%s': %s", query.Metric, query.Hostname, *res.Error))
continue
}
id := (*string)(nil)
if query.Type != nil {
id = new(string)
*id = query.TypeIds[ndx]
}
if res.Avg.IsNaN() || res.Min.IsNaN() || res.Max.IsNaN() {
// "schema.Float()" because regular float64 can not be JSONed when NaN.
res.Avg = schema.Float(0)
res.Min = schema.Float(0)
res.Max = schema.Float(0)
}
scopedJobStats[metric][scope] = append(scopedJobStats[metric][scope], &schema.ScopedStats{
Hostname: query.Hostname,
Id: id,
Data: &schema.MetricStatistics{
Avg: float64(res.Avg),
Min: float64(res.Min),
Max: float64(res.Max),
},
})
}
// Remove empty entries from the map so that later len(scopedJobStats[metric][scope]) checks behave as expected
if len(scopedJobStats[metric][scope]) == 0 {
delete(scopedJobStats[metric], scope)
if len(scopedJobStats[metric]) == 0 {
delete(scopedJobStats, metric)
}
}
}
if len(errors) != 0 {
/* Returns list for "partial errors" */
return scopedJobStats, fmt.Errorf("METRICDATA/CCMS > Errors: %s", strings.Join(errors, ", "))
}
return scopedJobStats, nil
}
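The NaN reset above is necessary because the standard library rejects NaN during JSON encoding; a minimal self-contained demonstration of that behavior:

package main

import (
	"encoding/json"
	"fmt"
	"math"
)

func main() {
	// json.Marshal fails outright on NaN float64 values ...
	_, err := json.Marshal(math.NaN())
	fmt.Println(err) // json: unsupported value: NaN
	// ... which is why the scoped stats above are zeroed before they reach the encoder.
}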
// Used for Systems-View Node-Overview
func (ccms *CCMetricStore) LoadNodeData(
cluster string,
metrics, nodes []string,
@ -652,7 +748,7 @@ func (ccms *CCMetricStore) LoadNodeData(
resBody, err := ccms.doRequest(ctx, &req)
if err != nil {
log.Error(fmt.Sprintf("Error while performing request %#v\n", err))
log.Errorf("Error while performing request: %s", err.Error())
return nil, err
}
@ -710,6 +806,7 @@ func (ccms *CCMetricStore) LoadNodeData(
return data, nil
}
// Used for Systems-View Node-List
func (ccms *CCMetricStore) LoadNodeListData(
cluster, subCluster, nodeFilter string,
metrics []string,
@ -768,7 +865,7 @@ func (ccms *CCMetricStore) LoadNodeListData(
queries, assignedScope, err := ccms.buildNodeQueries(cluster, subCluster, nodes, metrics, scopes, resolution)
if err != nil {
log.Warn("Error while building queries")
log.Errorf("Error while building node queries for Cluster %s, SubCLuster %s, Metrics %v, Scopes %v: %s", cluster, subCluster, metrics, scopes, err.Error())
return nil, totalNodes, hasNextPage, err
}
@ -783,7 +880,7 @@ func (ccms *CCMetricStore) LoadNodeListData(
resBody, err := ccms.doRequest(ctx, &req)
if err != nil {
log.Error(fmt.Sprintf("Error while performing request %#v\n", err))
log.Errorf("Error while performing request: %s", err.Error())
return nil, totalNodes, hasNextPage, err
}
@ -888,7 +985,7 @@ func (ccms *CCMetricStore) buildNodeQueries(
if subCluster != "" {
subClusterTopol, scterr = archive.GetSubCluster(cluster, subCluster)
if scterr != nil {
// TODO: Log
log.Errorf("could not load cluster %s subCluster %s topology: %s", cluster, subCluster, scterr.Error())
return nil, nil, scterr
}
}
@ -898,10 +995,24 @@ func (ccms *CCMetricStore) buildNodeQueries(
mc := archive.GetMetricConfig(cluster, metric)
if mc == nil {
// return nil, fmt.Errorf("METRICDATA/CCMS > metric '%s' is not specified for cluster '%s'", metric, cluster)
log.Infof("metric '%s' is not specified for cluster '%s'", metric, cluster)
log.Warnf("metric '%s' is not specified for cluster '%s'", metric, cluster)
continue
}
// Skip if metric is removed for subcluster
if mc.SubClusters != nil {
isRemoved := false
for _, scConfig := range mc.SubClusters {
if scConfig.Name == subCluster && scConfig.Remove {
isRemoved = true
break
}
}
if isRemoved {
continue
}
}
// Avoid duplicates...
handledScopes := make([]schema.MetricScope, 0, 3)

View File

@ -10,6 +10,8 @@ import (
"encoding/json"
"errors"
"fmt"
"math"
"sort"
"strings"
"time"
@ -64,6 +66,8 @@ func (idb *InfluxDBv2DataRepository) LoadData(
ctx context.Context,
resolution int) (schema.JobData, error) {
log.Infof("InfluxDB 2 Backend: Resolution Scaling not Implemented, will return default timestep. Requested Resolution %d", resolution)
measurementsConds := make([]string, 0, len(metrics))
for _, m := range metrics {
measurementsConds = append(measurementsConds, fmt.Sprintf(`r["_measurement"] == "%s"`, m))
@ -86,7 +90,7 @@ func (idb *InfluxDBv2DataRepository) LoadData(
query := ""
switch scope {
case "node":
// Get Finest Granularity, Group By Measurement and Hostname (== Metric / Node), Calculate Mean for 60s windows
// Get Finest Granularity, Group By Measurement and Hostname (== Metric / Node), Calculate Mean for 60s windows <-- Resolution could be added here?
// log.Info("Scope 'node' requested. ")
query = fmt.Sprintf(`
from(bucket: "%s")
@ -116,6 +120,12 @@ func (idb *InfluxDBv2DataRepository) LoadData(
// idb.bucket,
// idb.formatTime(job.StartTime), idb.formatTime(idb.epochToTime(job.StartTimeUnix + int64(job.Duration) + int64(1) )),
// measurementsCond, hostsCond)
case "hwthread":
log.Info(" Scope 'hwthread' requested, but not yet supported: Will return 'node' scope only. ")
continue
case "accelerator":
log.Info(" Scope 'accelerator' requested, but not yet supported: Will return 'node' scope only. ")
continue
default:
log.Infof("Unknown scope '%s' requested: Will return 'node' scope.", scope)
continue
@ -173,6 +183,11 @@ func (idb *InfluxDBv2DataRepository) LoadData(
}
case "socket":
continue
case "accelerator":
continue
case "hwthread":
// See below @ core
continue
case "core":
continue
// Include Series.Id in hostSeries
@ -301,6 +316,53 @@ func (idb *InfluxDBv2DataRepository) LoadStats(
return stats, nil
}
// Used in Job-View StatsTable
// UNTESTED
func (idb *InfluxDBv2DataRepository) LoadScopedStats(
job *schema.Job,
metrics []string,
scopes []schema.MetricScope,
ctx context.Context) (schema.ScopedJobStats, error) {
// Assumption: idb.loadData() only returns series node-scope - use node scope for statsTable
scopedJobStats := make(schema.ScopedJobStats)
data, err := idb.LoadData(job, metrics, []schema.MetricScope{schema.MetricScopeNode}, ctx, 0 /*resolution here*/)
if err != nil {
log.Warn("Error while loading job for scopedJobStats")
return nil, err
}
for metric, metricData := range data {
for _, scope := range scopes {
if scope != schema.MetricScopeNode {
logOnce.Do(func() {
log.Infof("Note: Scope '%s' requested, but not yet supported: Will return 'node' scope only.", scope)
})
continue
}
if _, ok := scopedJobStats[metric]; !ok {
scopedJobStats[metric] = make(map[schema.MetricScope][]*schema.ScopedStats)
}
if _, ok := scopedJobStats[metric][scope]; !ok {
scopedJobStats[metric][scope] = make([]*schema.ScopedStats, 0)
}
for _, series := range metricData[scope].Series {
scopedJobStats[metric][scope] = append(scopedJobStats[metric][scope], &schema.ScopedStats{
Hostname: series.Hostname,
Data: &series.Statistics,
})
}
}
}
return scopedJobStats, nil
}
// Used in Systems-View @ Node-Overview
// UNTESTED
func (idb *InfluxDBv2DataRepository) LoadNodeData(
cluster string,
metrics, nodes []string,
@ -308,12 +370,123 @@ func (idb *InfluxDBv2DataRepository) LoadNodeData(
from, to time.Time,
ctx context.Context) (map[string]map[string][]*schema.JobMetric, error) {
// TODO : Implement to be used in Analysis- und System/Node-View
log.Infof("LoadNodeData unimplemented for InfluxDBv2DataRepository, Args: cluster %s, metrics %v, nodes %v, scopes %v", cluster, metrics, nodes, scopes)
// Note: scopes[] Array will be ignored, only return node scope
return nil, errors.New("METRICDATA/INFLUXV2 > unimplemented for InfluxDBv2DataRepository")
// CONVERT ARGS TO INFLUX
measurementsConds := make([]string, 0)
for _, m := range metrics {
measurementsConds = append(measurementsConds, fmt.Sprintf(`r["_measurement"] == "%s"`, m))
}
measurementsCond := strings.Join(measurementsConds, " or ")
hostsConds := make([]string, 0)
if nodes == nil {
var allNodes []string
subClusterNodeLists := archive.NodeLists[cluster]
for _, nodeList := range subClusterNodeLists {
allNodes = append(allNodes, nodeList.PrintList()...)
}
for _, node := range allNodes {
nodes = append(nodes, node)
hostsConds = append(hostsConds, fmt.Sprintf(`r["hostname"] == "%s"`, node))
}
} else {
for _, node := range nodes {
hostsConds = append(hostsConds, fmt.Sprintf(`r["hostname"] == "%s"`, node))
}
}
hostsCond := strings.Join(hostsConds, " or ")
// BUILD AND PERFORM QUERY
query := fmt.Sprintf(`
from(bucket: "%s")
|> range(start: %s, stop: %s)
|> filter(fn: (r) => (%s) and (%s) )
|> drop(columns: ["_start", "_stop"])
|> group(columns: ["hostname", "_measurement"])
|> aggregateWindow(every: 60s, fn: mean)
|> drop(columns: ["_time"])`,
idb.bucket,
idb.formatTime(from), idb.formatTime(to),
measurementsCond, hostsCond)
rows, err := idb.queryClient.Query(ctx, query)
if err != nil {
log.Error("Error while performing query")
return nil, err
}
// HANDLE QUERY RETURN
// Collect Float Arrays for Node@Metric -> No Scope Handling!
influxData := make(map[string]map[string][]schema.Float)
for rows.Next() {
row := rows.Record()
host, field := row.ValueByKey("hostname").(string), row.Measurement()
influxHostData, ok := influxData[host]
if !ok {
influxHostData = make(map[string][]schema.Float)
influxData[host] = influxHostData
}
influxFieldData, ok := influxData[host][field]
if !ok {
influxFieldData = make([]schema.Float, 0)
influxData[host][field] = influxFieldData
}
val, ok := row.Value().(float64)
if ok {
influxData[host][field] = append(influxData[host][field], schema.Float(val))
} else {
influxData[host][field] = append(influxData[host][field], schema.Float(0))
}
}
// BUILD FUNCTION RETURN
data := make(map[string]map[string][]*schema.JobMetric)
for node, metricData := range influxData {
nodeData, ok := data[node]
if !ok {
nodeData = make(map[string][]*schema.JobMetric)
data[node] = nodeData
}
for metric, floatArray := range metricData {
avg, min, max := 0.0, math.Inf(1), math.Inf(-1) // Start min/max at +/-Inf so the first sample initializes them
for _, val := range floatArray {
avg += float64(val)
min = math.Min(min, float64(val))
max = math.Max(max, float64(val))
}
stats := schema.MetricStatistics{
Avg: (math.Round((avg/float64(len(floatArray)))*100) / 100),
Min: (math.Round(min*100) / 100),
Max: (math.Round(max*100) / 100),
}
mc := archive.GetMetricConfig(cluster, metric)
nodeData[metric] = append(nodeData[metric], &schema.JobMetric{
Unit: mc.Unit,
Timestep: mc.Timestep,
Series: []schema.Series{
{
Hostname: node,
Statistics: stats,
Data: floatArray,
},
},
})
}
}
return data, nil
}
// Used in Systems-View @ Node-List
// UNTESTED
func (idb *InfluxDBv2DataRepository) LoadNodeListData(
cluster, subCluster, nodeFilter string,
metrics []string,
@ -324,10 +497,79 @@ func (idb *InfluxDBv2DataRepository) LoadNodeListData(
ctx context.Context,
) (map[string]schema.JobData, int, bool, error) {
// Assumption: idb.loadData() only returns series node-scope - use node scope for NodeList
// 0) Init additional vars
totalNodes := 0
hasNextPage := false
// TODO : Implement to be used in NodeList-View
log.Infof("LoadNodeListData unimplemented for InfluxDBv2DataRepository, Args: cluster %s, metrics %v, nodeFilter %v, scopes %v", cluster, metrics, nodeFilter, scopes)
return nil, totalNodes, hasNextPage, errors.New("METRICDATA/INFLUXV2 > unimplemented for InfluxDBv2DataRepository")
// 1) Get list of all nodes
var nodes []string
if subCluster != "" {
scNodes := archive.NodeLists[cluster][subCluster]
nodes = scNodes.PrintList()
} else {
subClusterNodeLists := archive.NodeLists[cluster]
for _, nodeList := range subClusterNodeLists {
nodes = append(nodes, nodeList.PrintList()...)
}
}
// 2) Filter nodes
if nodeFilter != "" {
filteredNodes := []string{}
for _, node := range nodes {
if strings.Contains(node, nodeFilter) {
filteredNodes = append(filteredNodes, node)
}
}
nodes = filteredNodes
}
// 2.1) Count total nodes and sort them (sort order is not preserved after return)
totalNodes = len(nodes)
sort.Strings(nodes)
// 3) Apply paging
if len(nodes) > page.ItemsPerPage {
start := (page.Page - 1) * page.ItemsPerPage
end := start + page.ItemsPerPage
if end > len(nodes) {
end = len(nodes)
}
hasNextPage = end < len(nodes)
nodes = nodes[start:end]
}
// 4) Fetch And Convert Data, use idb.LoadNodeData() for query
rawNodeData, err := idb.LoadNodeData(cluster, metrics, nodes, scopes, from, to, ctx)
if err != nil {
log.Error(fmt.Sprintf("Error while loading influx nodeData for nodeListData %#v\n", err))
return nil, totalNodes, hasNextPage, err
}
data := make(map[string]schema.JobData)
for node, nodeData := range rawNodeData {
// Init Nested Map Data Structures If Not Found
hostData, ok := data[node]
if !ok {
hostData = make(schema.JobData)
data[node] = hostData
}
for metric, nodeMetricData := range nodeData {
metricData, ok := hostData[metric]
if !ok {
metricData = make(map[schema.MetricScope]*schema.JobMetric)
data[node][metric] = metricData
}
data[node][metric][schema.MetricScopeNode] = nodeMetricData[0] // Only Node Scope Returned from loadNodeData
}
}
return data, totalNodes, hasNextPage, nil
}
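The paging step above reappears verbatim in the Prometheus backend further below; as a sketch, the shared pattern factored into a hypothetical helper (out-of-range pages are not guarded, mirroring the code above):

// pageWindow applies 1-based paging to a sorted node list and reports
// whether a further page follows.
func pageWindow(nodes []string, page, itemsPerPage int) ([]string, bool) {
	if len(nodes) <= itemsPerPage {
		return nodes, false
	}
	start := (page - 1) * itemsPerPage
	end := start + itemsPerPage
	if end > len(nodes) {
		end = len(nodes)
	}
	return nodes[start:end], end < len(nodes)
}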

View File

@ -24,9 +24,12 @@ type MetricDataRepository interface {
// Return the JobData for the given job, only with the requested metrics.
LoadData(job *schema.Job, metrics []string, scopes []schema.MetricScope, ctx context.Context, resolution int) (schema.JobData, error)
// Return a map of metrics to a map of nodes to the metric statistics of the job. node scope assumed for now.
// Return a map of metrics to a map of nodes to the metric statistics of the job. node scope only.
LoadStats(job *schema.Job, metrics []string, ctx context.Context) (map[string]map[string]schema.MetricStatistics, error)
// Return a map of metrics to a map of scopes to the scoped metric statistics of the job.
LoadScopedStats(job *schema.Job, metrics []string, scopes []schema.MetricScope, ctx context.Context) (schema.ScopedJobStats, error)
// Return a map of hosts to a map of metrics at the requested scopes (currently only node) for that node.
LoadNodeData(cluster string, metrics, nodes []string, scopes []schema.MetricScope, from, to time.Time, ctx context.Context) (map[string]map[string][]*schema.JobMetric, error)

View File

@ -448,6 +448,51 @@ func (pdb *PrometheusDataRepository) LoadNodeData(
return data, nil
}
// Implemented by NHR@FAU; Used in Job-View StatsTable
func (pdb *PrometheusDataRepository) LoadScopedStats(
job *schema.Job,
metrics []string,
scopes []schema.MetricScope,
ctx context.Context) (schema.ScopedJobStats, error) {
// Assumption: pdb.loadData() only returns series node-scope - use node scope for statsTable
scopedJobStats := make(schema.ScopedJobStats)
data, err := pdb.LoadData(job, metrics, []schema.MetricScope{schema.MetricScopeNode}, ctx, 0 /*resolution here*/)
if err != nil {
log.Warn("Error while loading job for scopedJobStats")
return nil, err
}
for metric, metricData := range data {
for _, scope := range scopes {
if scope != schema.MetricScopeNode {
logOnce.Do(func() {
log.Infof("Note: Scope '%s' requested, but not yet supported: Will return 'node' scope only.", scope)
})
continue
}
if _, ok := scopedJobStats[metric]; !ok {
scopedJobStats[metric] = make(map[schema.MetricScope][]*schema.ScopedStats)
}
if _, ok := scopedJobStats[metric][scope]; !ok {
scopedJobStats[metric][scope] = make([]*schema.ScopedStats, 0)
}
for _, series := range metricData[scope].Series {
scopedJobStats[metric][scope] = append(scopedJobStats[metric][scope], &schema.ScopedStats{
Hostname: series.Hostname,
Data: &series.Statistics,
})
}
}
}
return scopedJobStats, nil
}
// Implemented by NHR@FAU; Used in NodeList-View
func (pdb *PrometheusDataRepository) LoadNodeListData(
cluster, subCluster, nodeFilter string,
metrics []string,
@ -458,10 +503,132 @@ func (pdb *PrometheusDataRepository) LoadNodeListData(
ctx context.Context,
) (map[string]schema.JobData, int, bool, error) {
// Assumption: pdb.loadData() only returns series node-scope - use node scope for NodeList
// 0) Init additional vars
totalNodes := 0
hasNextPage := false
// TODO : Implement to be used in NodeList-View
log.Infof("LoadNodeListData unimplemented for PrometheusDataRepository, Args: cluster %s, metrics %v, nodeFilter %v, scopes %v", cluster, metrics, nodeFilter, scopes)
return nil, totalNodes, hasNextPage, errors.New("METRICDATA/INFLUXV2 > unimplemented for PrometheusDataRepository")
// 1) Get list of all nodes
var nodes []string
if subCluster != "" {
scNodes := archive.NodeLists[cluster][subCluster]
nodes = scNodes.PrintList()
} else {
subClusterNodeLists := archive.NodeLists[cluster]
for _, nodeList := range subClusterNodeLists {
nodes = append(nodes, nodeList.PrintList()...)
}
}
// 2) Filter nodes
if nodeFilter != "" {
filteredNodes := []string{}
for _, node := range nodes {
if strings.Contains(node, nodeFilter) {
filteredNodes = append(filteredNodes, node)
}
}
nodes = filteredNodes
}
// 2.1) Count total nodes and sort them (sort order is not preserved after return)
totalNodes = len(nodes)
sort.Strings(nodes)
// 3) Apply paging
if len(nodes) > page.ItemsPerPage {
start := (page.Page - 1) * page.ItemsPerPage
end := start + page.ItemsPerPage
if end > len(nodes) {
end = len(nodes)
}
hasNextPage = end < len(nodes)
nodes = nodes[start:end]
}
// 4) Fetch Data, based on pdb.LoadNodeData()
t0 := time.Now()
// Map of hosts of jobData
data := make(map[string]schema.JobData)
// query db for each metric
// TODO: scopes always seem to be empty here
if len(scopes) == 0 || !contains(scopes, schema.MetricScopeNode) {
scopes = append(scopes, schema.MetricScopeNode)
}
for _, scope := range scopes {
if scope != schema.MetricScopeNode {
logOnce.Do(func() {
log.Infof("Note: Scope '%s' requested, but not yet supported: Will return 'node' scope only.", scope)
})
continue
}
for _, metric := range metrics {
metricConfig := archive.GetMetricConfig(cluster, metric)
if metricConfig == nil {
log.Warnf("Error in LoadNodeListData: Metric %s for cluster %s not configured", metric, cluster)
return nil, totalNodes, hasNextPage, errors.New("Prometheus config error")
}
query, err := pdb.FormatQuery(metric, scope, nodes, cluster)
if err != nil {
log.Warn("Error while formatting prometheus query")
return nil, totalNodes, hasNextPage, err
}
// ranged query over all nodes
r := promv1.Range{
Start: from,
End: to,
Step: time.Duration(metricConfig.Timestep) * time.Second,
}
result, warnings, err := pdb.queryClient.QueryRange(ctx, query, r)
if err != nil {
log.Errorf("Prometheus query error in LoadNodeData: %v\n", err)
return nil, totalNodes, hasNextPage, errors.New("Prometheus query error")
}
if len(warnings) > 0 {
log.Warnf("Warnings: %v\n", warnings)
}
step := int64(metricConfig.Timestep)
steps := int64(to.Sub(from).Seconds()) / step
// iter rows of host, metric, values
for _, row := range result.(promm.Matrix) {
hostname := strings.TrimSuffix(string(row.Metric["exported_instance"]), pdb.suffix)
hostdata, ok := data[hostname]
if !ok {
hostdata = make(schema.JobData)
data[hostname] = hostdata
}
metricdata, ok := hostdata[metric]
if !ok {
metricdata = make(map[schema.MetricScope]*schema.JobMetric)
data[hostname][metric] = metricdata
}
// output per host, metric and scope
scopeData, ok := metricdata[scope]
if !ok {
scopeData = &schema.JobMetric{
Unit: metricConfig.Unit,
Timestep: metricConfig.Timestep,
Series: []schema.Series{pdb.RowToSeries(from, step, steps, row)},
}
data[hostname][metric][scope] = scopeData
}
}
}
}
t1 := time.Since(t0)
log.Debugf("LoadNodeListData of %v nodes took %s", len(data), t1)
return data, totalNodes, hasNextPage, nil
}

View File

@ -36,7 +36,17 @@ func (tmdr *TestMetricDataRepository) LoadData(
func (tmdr *TestMetricDataRepository) LoadStats(
job *schema.Job,
metrics []string, ctx context.Context) (map[string]map[string]schema.MetricStatistics, error) {
metrics []string,
ctx context.Context) (map[string]map[string]schema.MetricStatistics, error) {
panic("TODO")
}
func (tmdr *TestMetricDataRepository) LoadScopedStats(
job *schema.Job,
metrics []string,
scopes []schema.MetricScope,
ctx context.Context) (schema.ScopedJobStats, error) {
panic("TODO")
}

View File

@ -59,17 +59,15 @@ func Connect(driver string, db string) {
} else {
dbHandle, err = sqlx.Open("sqlite3", opts.URL)
}
if err != nil {
log.Fatal(err)
}
case "mysql":
opts.URL += "?multiStatements=true"
dbHandle, err = sqlx.Open("mysql", opts.URL)
if err != nil {
log.Fatalf("sqlx.Open() error: %v", err)
}
default:
log.Fatalf("unsupported database driver: %s", driver)
log.Abortf("DB Connection: Unsupported database driver '%s'.\n", driver)
}
if err != nil {
log.Abortf("DB Connection: Could not connect to '%s' database with sqlx.Open().\nError: %s\n", driver, err.Error())
}
dbHandle.SetMaxOpenConns(opts.MaxOpenConnections)
@ -80,7 +78,7 @@ func Connect(driver string, db string) {
dbConnInstance = &DBConnection{DB: dbHandle, Driver: driver}
err = checkDBVersion(driver, dbHandle.DB)
if err != nil {
log.Fatal(err)
log.Abortf("DB Connection: Failed DB version check.\nError: %s\n", err.Error())
}
})
}

View File

@ -194,11 +194,13 @@ func (r *JobRepository) FindConcurrentJobs(
queryRunning := query.Where("job.job_state = ?").Where("(job.start_time BETWEEN ? AND ? OR job.start_time < ?)",
"running", startTimeTail, stopTimeTail, startTime)
queryRunning = queryRunning.Where("job.resources LIKE ?", fmt.Sprint("%", hostname, "%"))
// Get At Least One Exact Hostname Match from JSON Resources Array in Database
queryRunning = queryRunning.Where("EXISTS (SELECT 1 FROM json_each(job.resources) WHERE json_extract(value, '$.hostname') = ?)", hostname)
query = query.Where("job.job_state != ?").Where("((job.start_time BETWEEN ? AND ?) OR (job.start_time + job.duration) BETWEEN ? AND ? OR (job.start_time < ?) AND (job.start_time + job.duration) > ?)",
"running", startTimeTail, stopTimeTail, startTimeFront, stopTimeTail, startTime, stopTime)
query = query.Where("job.resources LIKE ?", fmt.Sprint("%", hostname, "%"))
// Get At Least One Exact Hostname Match from JSON Resources Array in Database
query = query.Where("EXISTS (SELECT 1 FROM json_each(job.resources) WHERE json_extract(value, '$.hostname') = ?)", hostname)
rows, err := query.RunWith(r.stmtCache).Query()
if err != nil {

View File

@ -67,7 +67,8 @@ func (r *JobRepository) QueryJobs(
rows, err := query.RunWith(r.stmtCache).Query()
if err != nil {
log.Errorf("Error while running query: %v", err)
queryString, queryVars, _ := query.ToSql()
log.Errorf("Error while running query '%s' %v: %v", queryString, queryVars, err)
return nil, err
}
@ -197,7 +198,7 @@ func BuildWhereClause(filter *model.JobFilter, query sq.SelectBuilder) sq.Select
query = buildIntCondition("job.num_hwthreads", filter.NumHWThreads, query)
}
if filter.Node != nil {
query = buildStringCondition("job.resources", filter.Node, query)
query = buildResourceJsonCondition("hostname", filter.Node, query)
}
if filter.Energy != nil {
query = buildFloatCondition("job.energy", filter.Energy, query)
@ -299,6 +300,28 @@ func buildMetaJsonCondition(jsonField string, cond *model.StringInput, query sq.
return query
}
func buildResourceJsonCondition(jsonField string, cond *model.StringInput, query sq.SelectBuilder) sq.SelectBuilder {
// Verify and Search Only in Valid Jsons
query = query.Where("JSON_VALID(resources)")
// add "AND" Sql query Block for field match
if cond.Eq != nil {
return query.Where("EXISTS (SELECT 1 FROM json_each(job.resources) WHERE json_extract(value, \"$."+jsonField+"\") = ?)", *cond.Eq)
}
if cond.Neq != nil { // Currently Unused
return query.Where("EXISTS (SELECT 1 FROM json_each(job.resources) WHERE json_extract(value, \"$."+jsonField+"\") != ?)", *cond.Neq)
}
if cond.StartsWith != nil { // Currently Unused
return query.Where("EXISTS (SELECT 1 FROM json_each(job.resources) WHERE json_extract(value, \"$."+jsonField+"\")) LIKE ?)", fmt.Sprint(*cond.StartsWith, "%"))
}
if cond.EndsWith != nil { // Currently Unused
return query.Where("EXISTS (SELECT 1 FROM json_each(job.resources) WHERE json_extract(value, \"$."+jsonField+"\") LIKE ?)", fmt.Sprint("%", *cond.EndsWith))
}
if cond.Contains != nil {
return query.Where("EXISTS (SELECT 1 FROM json_each(job.resources) WHERE json_extract(value, \"$."+jsonField+"\") LIKE ?)", fmt.Sprint("%", *cond.Contains, "%"))
}
return query
}
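To make the generated SQL concrete, a standalone sketch of the Eq branch (assuming the Masterminds/squirrel alias sq used in this file; hostname "node01" is a made-up value):

package main

import (
	"fmt"

	sq "github.com/Masterminds/squirrel"
)

func main() {
	// Rebuilds what buildResourceJsonCondition emits for an Eq match on hostname.
	q := sq.Select("job.id").From("job").
		Where("JSON_VALID(resources)").
		Where(`EXISTS (SELECT 1 FROM json_each(job.resources) WHERE json_extract(value, "$.hostname") = ?)`, "node01")
	sql, args, _ := q.ToSql()
	fmt.Println(sql)
	// SELECT job.id FROM job WHERE JSON_VALID(resources) AND
	//   EXISTS (SELECT 1 FROM json_each(job.resources) WHERE json_extract(value, "$.hostname") = ?)
	fmt.Println(args) // [node01]
}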
var (
matchFirstCap = regexp.MustCompile("(.)([A-Z][a-z]+)")
matchAllCap = regexp.MustCompile("([a-z0-9])([A-Z])")

View File

@ -54,7 +54,7 @@ func checkDBVersion(backend string, db *sql.DB) error {
return err
}
default:
log.Fatalf("unsupported database backend: %s", backend)
log.Abortf("Migration: Unsupported database backend '%s'.\n", backend)
}
v, dirty, err := m.Version()
@ -102,7 +102,7 @@ func getMigrateInstance(backend string, db string) (m *migrate.Migrate, err erro
return m, err
}
default:
log.Fatalf("unsupported database backend: %s", backend)
log.Abortf("Migration: Unsupported database backend '%s'.\n", backend)
}
return m, nil

View File

@ -674,57 +674,32 @@ func (r *JobRepository) jobsMetricStatisticsHistogram(
}
}
// log.Debugf("Metric %s, Peak %f, Unit %s, Aggregation %s", metric, peak, unit, aggreg)
// Make bins, see https://jereze.com/code/sql-histogram/
// log.Debugf("Metric %s, Peak %f, Unit %s", metric, peak, unit)
// Make bins, see https://jereze.com/code/sql-histogram/ (Modified here)
start := time.Now()
jm := fmt.Sprintf(`json_extract(footprint, "$.%s")`, (metric + "_" + footprintStat))
crossJoinQuery := sq.Select(
fmt.Sprintf(`max(%s) as max`, jm),
fmt.Sprintf(`min(%s) as min`, jm),
).From("job").Where(
"JSON_VALID(footprint)",
).Where(
fmt.Sprintf(`%s is not null`, jm),
).Where(
fmt.Sprintf(`%s <= %f`, jm, peak),
)
crossJoinQuery, cjqerr := SecurityCheck(ctx, crossJoinQuery)
if cjqerr != nil {
return nil, cjqerr
}
for _, f := range filters {
crossJoinQuery = BuildWhereClause(f, crossJoinQuery)
}
crossJoinQuerySql, crossJoinQueryArgs, sqlerr := crossJoinQuery.ToSql()
if sqlerr != nil {
return nil, sqlerr
}
binQuery := fmt.Sprintf(`CAST( (case when %s = value.max
then value.max*0.999999999 else %s end - value.min) / (value.max -
value.min) * %v as INTEGER )`, jm, jm, *bins)
// Find Jobs' Value Bin Number: Divide Value by Peak, Multiply by RequestedBins, then CAST to INT: Gets Bin-Number of Job
binQuery := fmt.Sprintf(`CAST(
((case when json_extract(footprint, "$.%s") = %f then %f*0.999999999 else json_extract(footprint, "$.%s") end) / %f)
* %v as INTEGER )`,
(metric + "_" + footprintStat), peak, peak, (metric + "_" + footprintStat), peak, *bins)
mainQuery := sq.Select(
fmt.Sprintf(`%s + 1 as bin`, binQuery),
fmt.Sprintf(`count(%s) as count`, jm),
fmt.Sprintf(`CAST(((value.max / %d) * (%v )) as INTEGER ) as min`, *bins, binQuery),
fmt.Sprintf(`CAST(((value.max / %d) * (%v + 1 )) as INTEGER ) as max`, *bins, binQuery),
).From("job").CrossJoin(
fmt.Sprintf(`(%s) as value`, crossJoinQuerySql), crossJoinQueryArgs...,
).Where(fmt.Sprintf(`%s is not null and %s <= %f`, jm, jm, peak))
"count(*) as count",
// For Debug: // fmt.Sprintf(`CAST((%f / %d) as INTEGER ) * %s as min`, peak, *bins, binQuery),
// For Debug: // fmt.Sprintf(`CAST((%f / %d) as INTEGER ) * (%s + 1) as max`, peak, *bins, binQuery),
).From("job").Where(
"JSON_VALID(footprint)",
).Where(fmt.Sprintf(`json_extract(footprint, "$.%s") is not null and json_extract(footprint, "$.%s") <= %f`, (metric + "_" + footprintStat), (metric + "_" + footprintStat), peak))
// Only accessible Jobs...
mainQuery, qerr := SecurityCheck(ctx, mainQuery)
if qerr != nil {
return nil, qerr
}
// Filters...
for _, f := range filters {
mainQuery = BuildWhereClause(f, mainQuery)
}
@ -738,32 +713,34 @@ func (r *JobRepository) jobsMetricStatisticsHistogram(
return nil, err
}
// Setup Array
// Setup Return Array With Bin-Numbers for Match and Min/Max based on Peak
points := make([]*model.MetricHistoPoint, 0)
binStep := int(peak) / *bins
for i := 1; i <= *bins; i++ {
binMax := ((int(peak) / *bins) * i)
binMin := ((int(peak) / *bins) * (i - 1))
point := model.MetricHistoPoint{Bin: &i, Count: 0, Min: &binMin, Max: &binMax}
points = append(points, &point)
binMin := (binStep * (i - 1))
binMax := (binStep * i)
epoint := model.MetricHistoPoint{Bin: &i, Count: 0, Min: &binMin, Max: &binMax}
points = append(points, &epoint)
}
for rows.Next() {
point := model.MetricHistoPoint{}
if err := rows.Scan(&point.Bin, &point.Count, &point.Min, &point.Max); err != nil {
log.Warnf("Error while scanning rows for %s", jm)
return nil, err // Totally bricks cc-backend if returned and if all metrics requested?
for rows.Next() { // Fill Count if Bin-No. Matches (Not every Bin exists in DB!)
rpoint := model.MetricHistoPoint{}
if err := rows.Scan(&rpoint.Bin, &rpoint.Count); err != nil { // Required for Debug: &rpoint.Min, &rpoint.Max
log.Warnf("Error while scanning rows for %s", metric)
return nil, err // FIXME: Totally bricks cc-backend if returned and if all metrics requested?
}
for _, e := range points {
if e.Bin != nil && point.Bin != nil {
if *e.Bin == *point.Bin {
e.Count = point.Count
if point.Min != nil {
e.Min = point.Min
}
if point.Max != nil {
e.Max = point.Max
}
if e.Bin != nil && rpoint.Bin != nil {
if *e.Bin == *rpoint.Bin {
e.Count = rpoint.Count
// Only Required For Debug: Check DB returned Min/Max against Backend Init above
// if rpoint.Min != nil {
// log.Warnf(">>>> Bin %d Min Set For %s to %d (Init'd with: %d)", *e.Bin, metric, *rpoint.Min, *e.Min)
// }
// if rpoint.Max != nil {
// log.Warnf(">>>> Bin %d Max Set For %s to %d (Init'd with: %d)", *e.Bin, metric, *rpoint.Max, *e.Max)
// }
break
}
}
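The binQuery arithmetic above maps a footprint value onto 1-based bins of width peak/bins; the same computation in plain Go, as an illustrative sketch (not repository code):

// binOf mirrors the SQL binQuery: scale value/peak to *bins buckets,
// nudging an exact-peak value down so it lands in the last bin instead of bins+1.
func binOf(value, peak float64, bins int) int {
	if value == peak {
		value = peak * 0.999999999
	}
	return int((value/peak)*float64(bins)) + 1 // 1-based bin number
}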

View File

@ -45,7 +45,7 @@ func (r *JobRepository) AddTag(user *schema.User, job int64, tag int64) ([]*sche
return tags, archive.UpdateTags(j, archiveTags)
}
// Removes a tag from a job
// Removes a tag from a job by tag id
func (r *JobRepository) RemoveTag(user *schema.User, job, tag int64) ([]*schema.Tag, error) {
j, err := r.FindByIdWithUser(user, job)
if err != nil {
@ -76,6 +76,99 @@ func (r *JobRepository) RemoveTag(user *schema.User, job, tag int64) ([]*schema.
return tags, archive.UpdateTags(j, archiveTags)
}
// Removes a tag from a job by tag info
func (r *JobRepository) RemoveJobTagByRequest(user *schema.User, job int64, tagType string, tagName string, tagScope string) ([]*schema.Tag, error) {
// Get Tag ID to delete
tagID, exists := r.TagId(tagType, tagName, tagScope)
if !exists {
log.Warnf("Tag does not exist (name, type, scope): %s, %s, %s", tagName, tagType, tagScope)
return nil, fmt.Errorf("Tag does not exist (name, type, scope): %s, %s, %s", tagName, tagType, tagScope)
}
// Get Job
j, err := r.FindByIdWithUser(user, job)
if err != nil {
log.Warn("Error while finding job by id")
return nil, err
}
// Handle Delete
q := sq.Delete("jobtag").Where("jobtag.job_id = ?", job).Where("jobtag.tag_id = ?", tagID)
if _, err := q.RunWith(r.stmtCache).Exec(); err != nil {
s, _, _ := q.ToSql()
log.Errorf("Error removing tag from table 'jobTag' with %s: %v", s, err)
return nil, err
}
tags, err := r.GetTags(user, &job)
if err != nil {
log.Warn("Error while getting tags for job")
return nil, err
}
archiveTags, err := r.getArchiveTags(&job)
if err != nil {
log.Warn("Error while getting tags for job")
return nil, err
}
return tags, archive.UpdateTags(j, archiveTags)
}
// Removes a tag from db by tag info
func (r *JobRepository) RemoveTagByRequest(tagType string, tagName string, tagScope string) error {
// Get Tag ID to delete
tagID, exists := r.TagId(tagType, tagName, tagScope)
if !exists {
log.Warnf("Tag does not exist (name, type, scope): %s, %s, %s", tagName, tagType, tagScope)
return fmt.Errorf("Tag does not exist (name, type, scope): %s, %s, %s", tagName, tagType, tagScope)
}
// Handle Delete JobTagTable
qJobTag := sq.Delete("jobtag").Where("jobtag.tag_id = ?", tagID)
if _, err := qJobTag.RunWith(r.stmtCache).Exec(); err != nil {
s, _, _ := qJobTag.ToSql()
log.Errorf("Error removing tag from table 'jobTag' with %s: %v", s, err)
return err
}
// Handle Delete TagTable
qTag := sq.Delete("tag").Where("tag.id = ?", tagID)
if _, err := qTag.RunWith(r.stmtCache).Exec(); err != nil {
s, _, _ := qTag.ToSql()
log.Errorf("Error removing tag from table 'tag' with %s: %v", s, err)
return err
}
return nil
}
// Removes a tag from db by tag id
func (r *JobRepository) RemoveTagById(tagID int64) error {
// Handle Delete JobTagTable
qJobTag := sq.Delete("jobtag").Where("jobtag.tag_id = ?", tagID)
if _, err := qJobTag.RunWith(r.stmtCache).Exec(); err != nil {
s, _, _ := qJobTag.ToSql()
log.Errorf("Error removing tag from table 'jobTag' with %s: %v", s, err)
return err
}
// Handle Delete TagTable
qTag := sq.Delete("tag").Where("tag.id = ?", tagID)
if _, err := qTag.RunWith(r.stmtCache).Exec(); err != nil {
s, _, _ := qTag.ToSql()
log.Errorf("Error removing tag from table 'tag' with %s: %v", s, err)
return err
}
return nil
}
// CreateTag creates a new tag with the specified type and name and returns its database id.
func (r *JobRepository) CreateTag(tagType string, tagName string, tagScope string) (tagId int64, err error) {
// Default to "Global" scope if none defined
@ -209,6 +302,16 @@ func (r *JobRepository) TagId(tagType string, tagName string, tagScope string) (
return
}
// TagInfo returns the type, name, and scope stored for the tag with the specified id.
func (r *JobRepository) TagInfo(tagId int64) (tagType string, tagName string, tagScope string, exists bool) {
exists = true
if err := sq.Select("tag.tag_type", "tag.tag_name", "tag.tag_scope").From("tag").Where("tag.id = ?", tagId).
RunWith(r.stmtCache).QueryRow().Scan(&tagType, &tagName, &tagScope); err != nil {
exists = false
}
return
}
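A sketch of how the helpers above might be combined by a caller such as a REST handler (hypothetical code, not part of this diff):

// Resolve a tag id to its metadata before deleting it from both tables.
if tagType, tagName, tagScope, exists := r.TagInfo(tagID); exists {
	log.Infof("removing tag %s/%s (scope '%s')", tagType, tagName, tagScope)
	if err := r.RemoveTagById(tagID); err != nil {
		log.Errorf("tag removal failed: %v", err)
	}
}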
// GetTags returns a list of all scoped tags if job is nil or of the tags that the job with that database ID has.
func (r *JobRepository) GetTags(user *schema.User, job *int64) ([]*schema.Tag, error) {
q := sq.Select("id", "tag_type", "tag_name", "tag_scope").From("tag")

View File

@ -35,7 +35,7 @@ func GetUserCfgRepo() *UserCfgRepo {
lookupConfigStmt, err := db.DB.Preparex(`SELECT confkey, value FROM configuration WHERE configuration.username = ?`)
if err != nil {
log.Fatalf("db.DB.Preparex() error: %v", err)
log.Fatalf("User Config: Call 'db.DB.Preparex()' failed.\nError: %s\n", err.Error())
}
userCfgRepoInstance = &UserCfgRepo{

View File

@ -25,6 +25,9 @@ func setupUserTest(t *testing.T) *UserCfgRepo {
"jwts": {
"max-age": "2m"
},
"apiAllowedIPs": [
"*"
],
"clusters": [
{
"name": "testcluster",

View File

@ -40,7 +40,7 @@ func Start() {
jobRepo = repository.GetJobRepository()
s, err = gocron.NewScheduler()
if err != nil {
log.Fatalf("Error while creating gocron scheduler: %s", err.Error())
log.Abortf("Taskmanager Start: Could not create gocron scheduler.\nError: %s\n", err.Error())
}
if config.Keys.StopJobsExceedingWalltime > 0 {

View File

@ -27,6 +27,8 @@ type ArchiveBackend interface {
LoadJobData(job *schema.Job) (schema.JobData, error)
LoadJobStats(job *schema.Job) (schema.ScopedJobStats, error)
LoadClusterCfg(name string) (*schema.Cluster, error)
StoreJobMeta(jobMeta *schema.JobMeta) error
@ -87,7 +89,7 @@ func Init(rawConfig json.RawMessage, disableArchive bool) error {
var version uint64
version, err = ar.Init(rawConfig)
if err != nil {
log.Error("Error while initializing archiveBackend")
log.Errorf("Error while initializing archiveBackend: %s", err.Error())
return
}
log.Infof("Load archive version %d", version)
@ -110,7 +112,7 @@ func LoadAveragesFromArchive(
) error {
metaFile, err := ar.LoadJobMeta(job)
if err != nil {
log.Warn("Error while loading job metadata from archiveBackend")
log.Errorf("Error while loading job metadata from archiveBackend: %s", err.Error())
return err
}
@ -125,7 +127,7 @@ func LoadAveragesFromArchive(
return nil
}
// Helper to metricdataloader.LoadStatData().
// Helper to metricdataloader.LoadJobStats().
func LoadStatsFromArchive(
job *schema.Job,
metrics []string,
@ -133,7 +135,7 @@ func LoadStatsFromArchive(
data := make(map[string]schema.MetricStatistics, len(metrics))
metaFile, err := ar.LoadJobMeta(job)
if err != nil {
log.Warn("Error while loading job metadata from archiveBackend")
log.Errorf("Error while loading job metadata from archiveBackend: %s", err.Error())
return data, err
}
@ -154,10 +156,26 @@ func LoadStatsFromArchive(
return data, nil
}
// Helper to metricdataloader.LoadScopedJobStats().
func LoadScopedStatsFromArchive(
job *schema.Job,
metrics []string,
scopes []schema.MetricScope,
) (schema.ScopedJobStats, error) {
data, err := ar.LoadJobStats(job)
if err != nil {
log.Errorf("Error while loading job stats from archiveBackend: %s", err.Error())
return nil, err
}
return data, nil
}
func GetStatistics(job *schema.Job) (map[string]schema.JobStatistics, error) {
metaFile, err := ar.LoadJobMeta(job)
if err != nil {
log.Warn("Error while loading job metadata from archiveBackend")
log.Errorf("Error while loading job metadata from archiveBackend: %s", err.Error())
return nil, err
}
@ -173,7 +191,7 @@ func UpdateMetadata(job *schema.Job, metadata map[string]string) error {
jobMeta, err := ar.LoadJobMeta(job)
if err != nil {
log.Warn("Error while loading job metadata from archiveBackend")
log.Errorf("Error while loading job metadata from archiveBackend: %s", err.Error())
return err
}
@ -193,7 +211,7 @@ func UpdateTags(job *schema.Job, tags []*schema.Tag) error {
jobMeta, err := ar.LoadJobMeta(job)
if err != nil {
log.Warn("Error while loading job metadata from archiveBackend")
log.Errorf("Error while loading job metadata from archiveBackend: %s", err.Error())
return err
}

View File

@ -68,8 +68,23 @@ func initClusterConfig() error {
}
for _, sc := range cluster.SubClusters {
newMetric := mc
newMetric.SubClusters = nil
newMetric := &schema.MetricConfig{
Unit: mc.Unit,
Energy: mc.Energy,
Name: mc.Name,
Scope: mc.Scope,
Aggregation: mc.Aggregation,
Peak: mc.Peak,
Caution: mc.Caution,
Alert: mc.Alert,
Timestep: mc.Timestep,
Normal: mc.Normal,
LowerIsBetter: mc.LowerIsBetter,
}
if mc.Footprint != "" {
newMetric.Footprint = mc.Footprint
}
if cfg, ok := scLookup[sc.Name]; ok {
if !cfg.Remove {

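The switch from 'newMetric := mc' to an explicit field-wise copy avoids a shared-pointer pitfall: if mc is a pointer, the old assignment copies only the pointer, and later writes mutate the one shared config. A minimal self-contained illustration (types simplified, values hypothetical):

package main

import "fmt"

type MetricConfig struct {
	Name        string
	Peak        float64
	SubClusters []string
}

func main() {
	base := &MetricConfig{Name: "flops_any", Peak: 100}
	alias := base // old pattern: copies the pointer only
	alias.SubClusters = nil
	alias.Peak = 42
	fmt.Println(base.Peak) // 42 -- the shared struct was mutated

	clone := &MetricConfig{Name: base.Name, Peak: base.Peak} // new pattern: fresh struct
	clone.Peak = 7
	fmt.Println(base.Peak) // still 42 -- base is unaffected
}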
View File

@ -115,6 +115,40 @@ func loadJobData(filename string, isCompressed bool) (schema.JobData, error) {
}
}
func loadJobStats(filename string, isCompressed bool) (schema.ScopedJobStats, error) {
f, err := os.Open(filename)
if err != nil {
log.Errorf("fsBackend LoadJobStats()- %v", err)
return nil, err
}
defer f.Close()
if isCompressed {
r, err := gzip.NewReader(f)
if err != nil {
log.Errorf(" %v", err)
return nil, err
}
defer r.Close()
if config.Keys.Validate {
if err := schema.Validate(schema.Data, r); err != nil {
return nil, fmt.Errorf("validate job data: %v", err)
}
}
return DecodeJobStats(r, filename)
} else {
if config.Keys.Validate {
if err := schema.Validate(schema.Data, bufio.NewReader(f)); err != nil {
return nil, fmt.Errorf("validate job data: %v", err)
}
}
return DecodeJobStats(bufio.NewReader(f), filename)
}
}
func (fsa *FsArchive) Init(rawConfig json.RawMessage) (uint64, error) {
var config FsArchiveConfig
@ -389,6 +423,18 @@ func (fsa *FsArchive) LoadJobData(job *schema.Job) (schema.JobData, error) {
return loadJobData(filename, isCompressed)
}
func (fsa *FsArchive) LoadJobStats(job *schema.Job) (schema.ScopedJobStats, error) {
isCompressed := true
filename := getPath(job, fsa.path, "data.json.gz")
if !util.CheckFileExists(filename) {
filename = getPath(job, fsa.path, "data.json")
isCompressed = false
}
return loadJobStats(filename, isCompressed)
}
func (fsa *FsArchive) LoadJobMeta(job *schema.Job) (*schema.JobMeta, error) {
filename := getPath(job, fsa.path, "meta.json")
return loadJobMeta(filename)

View File

@ -32,6 +32,43 @@ func DecodeJobData(r io.Reader, k string) (schema.JobData, error) {
return data.(schema.JobData), nil
}
func DecodeJobStats(r io.Reader, k string) (schema.ScopedJobStats, error) {
jobData, err := DecodeJobData(r, k)
// Convert schema.JobData to schema.ScopedJobStats
if jobData != nil {
scopedJobStats := make(schema.ScopedJobStats)
for metric, metricData := range jobData {
if _, ok := scopedJobStats[metric]; !ok {
scopedJobStats[metric] = make(map[schema.MetricScope][]*schema.ScopedStats)
}
for scope, jobMetric := range metricData {
if _, ok := scopedJobStats[metric][scope]; !ok {
scopedJobStats[metric][scope] = make([]*schema.ScopedStats, 0)
}
for _, series := range jobMetric.Series {
scopedJobStats[metric][scope] = append(scopedJobStats[metric][scope], &schema.ScopedStats{
Hostname: series.Hostname,
Id: series.Id,
Data: &series.Statistics,
})
}
// Remove empty entries from the map so that later len(scopedJobStats[metric][scope]) checks behave as expected
if len(scopedJobStats[metric][scope]) == 0 {
delete(scopedJobStats[metric], scope)
if len(scopedJobStats[metric]) == 0 {
delete(scopedJobStats, metric)
}
}
}
}
return scopedJobStats, nil
}
return nil, err
}
func DecodeJobMeta(r io.Reader) (*schema.JobMeta, error) {
var d schema.JobMeta
if err := json.NewDecoder(r).Decode(&d); err != nil {

View File

@ -46,12 +46,12 @@ var loglevel string = "info"
/* CONFIG */
func Init(lvl string, logdate bool) {
// Discard I/O for all writers below selected loglevel; <CRITICAL> is always written.
switch lvl {
case "crit":
ErrWriter = io.Discard
fallthrough
case "err", "fatal":
case "err":
WarnWriter = io.Discard
fallthrough
case "warn":
@ -63,8 +63,7 @@ func Init(lvl string, logdate bool) {
// Nothing to do...
break
default:
fmt.Printf("pkg/log: Flag 'loglevel' has invalid value %#v\npkg/log: Will use default loglevel 'debug'\n", lvl)
//SetLogLevel("debug")
fmt.Printf("pkg/log: Flag 'loglevel' has invalid value %#v\npkg/log: Will use default loglevel '%s'\n", lvl, loglevel)
}
if !logdate {
@ -84,109 +83,138 @@ func Init(lvl string, logdate bool) {
loglevel = lvl
}
/* PRINT */
// Private helper
func printStr(v ...interface{}) string {
return fmt.Sprint(v...)
}
// Uses Info() -> If errorpath required at some point:
// Will need own writer with 'Output(2, out)' to correctly render path
func Print(v ...interface{}) {
Info(v...)
}
func Debug(v ...interface{}) {
DebugLog.Output(2, printStr(v...))
}
func Info(v ...interface{}) {
InfoLog.Output(2, printStr(v...))
}
func Warn(v ...interface{}) {
WarnLog.Output(2, printStr(v...))
}
func Error(v ...interface{}) {
ErrLog.Output(2, printStr(v...))
}
// Writes panic stacktrace, but keeps application alive
func Panic(v ...interface{}) {
ErrLog.Output(2, printStr(v...))
panic("Panic triggered ...")
}
func Crit(v ...interface{}) {
CritLog.Output(2, printStr(v...))
}
// Writes critical log, stops application
func Fatal(v ...interface{}) {
CritLog.Output(2, printStr(v...))
os.Exit(1)
}
/* PRINT FORMAT*/
// Private helper
func printfStr(format string, v ...interface{}) string {
return fmt.Sprintf(format, v...)
}
// Uses Infof() -> If errorpath required at some point:
// Will need own writer with 'Output(2, out)' to correctly render path
func Printf(format string, v ...interface{}) {
Infof(format, v...)
}
func Debugf(format string, v ...interface{}) {
DebugLog.Output(2, printfStr(format, v...))
}
func Infof(format string, v ...interface{}) {
InfoLog.Output(2, printfStr(format, v...))
}
func Warnf(format string, v ...interface{}) {
WarnLog.Output(2, printfStr(format, v...))
}
func Errorf(format string, v ...interface{}) {
ErrLog.Output(2, printfStr(format, v...))
}
// Writes panic stacktrace, but keeps application alive
func Panicf(format string, v ...interface{}) {
ErrLog.Output(2, printfStr(format, v...))
panic("Panic triggered ...")
}
func Critf(format string, v ...interface{}) {
CritLog.Output(2, printfStr(format, v...))
}
// Writes crit log, stops application
func Fatalf(format string, v ...interface{}) {
CritLog.Output(2, printfStr(format, v...))
os.Exit(1)
}
/* HELPER */
func Loglevel() string {
return loglevel
}
/* SPECIAL */
/* PRIVATE HELPER */
// func Finfof(w io.Writer, format string, v ...interface{}) {
// if w != io.Discard {
// if logDateTime {
// currentTime := time.Now()
// fmt.Fprintf(InfoWriter, currentTime.String()+InfoPrefix+format+"\n", v...)
// } else {
// fmt.Fprintf(InfoWriter, InfoPrefix+format+"\n", v...)
// }
// }
// }
// Return unformatted string
func printStr(v ...interface{}) string {
return fmt.Sprint(v...)
}
// Return formatted string
func printfStr(format string, v ...interface{}) string {
return fmt.Sprintf(format, v...)
}
/* PRINT */
// Prints to STDOUT without string formatting; application continues.
// Used for special cases not requiring log information like date or location.
func Print(v ...interface{}) {
fmt.Fprintln(os.Stdout, v...)
}
// Prints to STDOUT without string formatting; application exits with error code 0.
// Used for exiting successfully with message after expected outcome, e.g. successful single-call application runs.
func Exit(v ...interface{}) {
fmt.Fprintln(os.Stdout, v...)
os.Exit(0)
}
// Prints to STDOUT without string formatting; application exits with error code 1.
// Used for terminating with message after expected errors, e.g. wrong arguments or during init().
func Abort(v ...interface{}) {
fmt.Fprintln(os.Stdout, v...)
os.Exit(1)
}
// Prints to DEBUG writer without string formatting; application continues.
// Used for logging additional information, primarily for development.
func Debug(v ...interface{}) {
DebugLog.Output(2, printStr(v...))
}
// Prints to INFO writer without string formatting; application continues.
// Used for logging additional information, e.g. notable returns or common fail-cases.
func Info(v ...interface{}) {
InfoLog.Output(2, printStr(v...))
}
// Prints to WARNING writer without string formatting; application continues.
// Used for logging important information, e.g. uncommon edge-cases or administration related information.
func Warn(v ...interface{}) {
WarnLog.Output(2, printStr(v...))
}
// Prints to ERROR writer without string formatting; application continues.
// Used for logging errors, but code still can return default(s) or nil.
func Error(v ...interface{}) {
ErrLog.Output(2, printStr(v...))
}
// Prints to CRITICAL writer without string formatting; application exits with error code 1.
// Used for terminating on unexpected errors with date and code location.
func Fatal(v ...interface{}) {
CritLog.Output(2, printStr(v...))
os.Exit(1)
}
// Prints to PANIC function without string formatting; application exits with panic.
// Used for terminating on unexpected errors with stacktrace.
func Panic(v ...interface{}) {
panic(printStr(v...))
}
/* PRINT FORMAT*/
// Prints to STDOUT with string formatting; application continues.
// Used for special cases not requiring log information like date or location.
func Printf(format string, v ...interface{}) {
fmt.Fprintf(os.Stdout, format, v...)
}
// Prints to STDOUT with string formatting; application exits with error code 0.
// Used for exiting successfully with message after expected outcome, e.g. successful single-call application runs.
func Exitf(format string, v ...interface{}) {
fmt.Fprintf(os.Stdout, format, v...)
os.Exit(0)
}
// Prints to STDOUT with string formatting; application exits with error code 1.
// Used for terminating with message after expected errors, e.g. wrong arguments or during init().
func Abortf(format string, v ...interface{}) {
fmt.Fprintf(os.Stdout, format, v...)
os.Exit(1)
}
// Prints to DEBUG writer with string formatting; application continues.
// Used for logging additional information, primarily for development.
func Debugf(format string, v ...interface{}) {
DebugLog.Output(2, printfStr(format, v...))
}
// Prints to INFO writer with string formatting; application continues.
// Used for logging additional information, e.g. notable returns or common fail-cases.
func Infof(format string, v ...interface{}) {
InfoLog.Output(2, printfStr(format, v...))
}
// Prints to WARNING writer with string formatting; application continues.
// Used for logging important information, e.g. uncommon edge-cases or administration related information.
func Warnf(format string, v ...interface{}) {
WarnLog.Output(2, printfStr(format, v...))
}
// Prints to ERROR writer with string formatting; application continues.
// Used for logging errors, but code still can return default(s) or nil.
func Errorf(format string, v ...interface{}) {
ErrLog.Output(2, printfStr(format, v...))
}
// Prints to CRITICAL writer with string formatting; application exits with error code 1.
// Used for terminating on unexpected errors with date and code location.
func Fatalf(format string, v ...interface{}) {
CritLog.Output(2, printfStr(format, v...))
os.Exit(1)
}
// Prints to PANIC function with string formatting; application exits with panic.
// Used for terminating on unexpected errors with stacktrace.
func Panicf(format string, v ...interface{}) {
panic(printfStr(format, v...))
}
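A short usage sketch contrasting the exit paths defined above (illustrative calls only; messages are made up):

// Expected failure (e.g. bad CLI argument): plain message to stdout, exit code 1.
log.Abortf("Usage error: expected a cluster name, got '%s'\n", arg)

// Unexpected failure: CRITICAL writer with date and code location, exit code 1.
log.Fatalf("metric store returned %d results for %d queries", got, want)

// Successful single-shot run: plain message to stdout, exit code 0.
log.Exitf("archive migration finished: %d jobs processed\n", n)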

View File

@ -100,7 +100,7 @@ type ProgramConfig struct {
// Address where the http (or https) server will listen on (for example: 'localhost:80').
Addr string `json:"addr"`
// Addresses from which secured API endpoints can be reached
// Addresses from which secured admin API endpoints can be reached; the wildcard "*" allows all addresses
ApiAllowedIPs []string `json:"apiAllowedIPs"`
// Drop root permissions once .env was read and the port was taken.

View File

@ -15,6 +15,7 @@ import (
)
type JobData map[string]map[MetricScope]*JobMetric
type ScopedJobStats map[string]map[MetricScope][]*ScopedStats
type JobMetric struct {
StatisticsSeries *StatsSeries `json:"statisticsSeries,omitempty"`
@ -30,6 +31,12 @@ type Series struct {
Statistics MetricStatistics `json:"statistics"`
}
type ScopedStats struct {
Hostname string `json:"hostname"`
Id *string `json:"id,omitempty"`
Data *MetricStatistics `json:"data"`
}
type MetricStatistics struct {
Avg float64 `json:"avg"`
Min float64 `json:"min"`

View File

@ -25,10 +25,18 @@
},
"scope": {
"description": "Native measurement resolution",
"type": "string"
"type": "string",
"enum": [
"node",
"socket",
"memoryDomain",
"core",
"hwthread",
"accelerator"
]
},
"timestep": {
"description": "Frequency of timeseries points",
"description": "Frequency of timeseries points in seconds",
"type": "integer"
},
"aggregation": {
@ -108,15 +116,19 @@
"type": "boolean"
},
"peak": {
"description": "The maximum possible metric value",
"type": "number"
},
"normal": {
"description": "A common metric value level",
"type": "number"
},
"caution": {
"description": "Metric value requires attention",
"type": "number"
},
"alert": {
"description": "Metric value requiring immediate attention",
"type": "number"
},
"remove": {

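An illustrative metric fragment using only the fields shown in these hunks; the numbers are placeholders, not real thresholds:

{
  "scope": "node",
  "timestep": 60,
  "peak": 9216.0,
  "normal": 1024.0,
  "caution": 512.0,
  "alert": 128.0
}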
View File

@ -492,6 +492,7 @@
},
"required": [
"jwts",
"clusters"
"clusters",
"apiAllowedIPs"
]
}

View File

@ -17,6 +17,7 @@
"IPC",
"Hz",
"W",
"J",
"°C",
""
]

View File

@ -85,6 +85,7 @@ func IsValidRole(role string) bool {
return getRoleEnum(role) != RoleError
}
// Check if User has SPECIFIED role AND role is VALID
func (u *User) HasValidRole(role string) (hasRole bool, isValid bool) {
if IsValidRole(role) {
for _, r := range u.Roles {
@ -97,6 +98,7 @@ func (u *User) HasValidRole(role string) (hasRole bool, isValid bool) {
return false, false
}
// Check if User has SPECIFIED role
func (u *User) HasRole(role Role) bool {
for _, r := range u.Roles {
if r == GetRoleString(role) {
@ -106,7 +108,7 @@ func (u *User) HasRole(role Role) bool {
return false
}
// Role-Arrays are short: performance not impacted by nested loop
// Check if User has ANY of the listed roles
func (u *User) HasAnyRole(queryroles []Role) bool {
for _, ur := range u.Roles {
for _, qr := range queryroles {
@ -118,7 +120,7 @@ func (u *User) HasAnyRole(queryroles []Role) bool {
return false
}
// Role-Arrays are short: performance not impacted by nested loop
// Check if User has ALL of the listed roles
func (u *User) HasAllRoles(queryroles []Role) bool {
target := len(queryroles)
matches := 0
@ -138,7 +140,7 @@ func (u *User) HasAllRoles(queryroles []Role) bool {
}
}
// Role-Arrays are short: performance not impacted by nested loop
// Check if User has NONE of the listed roles
func (u *User) HasNotRoles(queryroles []Role) bool {
matches := 0
for _, ur := range u.Roles {

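A short sketch of how these helpers compose in an authorization check; RoleAdmin and RoleSupport are assumed Role constants, and canEditTags is a hypothetical caller:

// Sketch only: grant access if the user holds the (valid) admin role,
// or any of a set of privileged roles.
func canEditTags(u *User) bool {
	if hasRole, isValid := u.HasValidRole("admin"); hasRole && isValid {
		return true
	}
	return u.HasAnyRole([]Role{RoleAdmin, RoleSupport})
}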
View File

@ -14,17 +14,20 @@ func TestValidateConfig(t *testing.T) {
"jwts": {
"max-age": "2m"
},
"clusters": [
{
"name": "testcluster",
"metricDataRepository": {
"kind": "cc-metric-store",
"url": "localhost:8082"},
"filterRanges": {
"numNodes": { "from": 1, "to": 64 },
"duration": { "from": 0, "to": 86400 },
"startTime": { "from": "2022-01-01T00:00:00Z", "to": null }
}}]
"apiAllowedIPs": [
"*"
],
"clusters": [
{
"name": "testcluster",
"metricDataRepository": {
"kind": "cc-metric-store",
"url": "localhost:8082"},
"filterRanges": {
"numNodes": { "from": 1, "to": 64 },
"duration": { "from": 0, "to": 86400 },
"startTime": { "from": "2022-01-01T00:00:00Z", "to": null }
}}]
}`)
if err := Validate(Config, bytes.NewReader(json)); err != nil {
@ -33,7 +36,6 @@ func TestValidateConfig(t *testing.T) {
}
func TestValidateJobMeta(t *testing.T) {
}
func TestValidateCluster(t *testing.T) {

View File

@ -22,8 +22,7 @@ func parseDate(in string) int64 {
if in != "" {
t, err := time.ParseInLocation(shortForm, in, loc)
if err != nil {
fmt.Printf("date parse error %v", err)
os.Exit(0)
log.Abortf("Archive Manager Main: Date parse failed with input: '%s'\nError: %s\n", in, err.Error())
}
return t.Unix()
}

View File

@ -59,9 +59,9 @@
}
},
"node_modules/@babel/runtime": {
"version": "7.26.0",
"resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.26.0.tgz",
"integrity": "sha512-FDSOghenHTiToteC/QRlv2q3DhPZ/oOXTBoirfWNx1Cx3TMVcGWQtMMmQcSvb/JjpNeGzx8Pq/b4fKEJuWm1sw==",
"version": "7.27.0",
"resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.27.0.tgz",
"integrity": "sha512-VtPOkrdPHZsKc/clNqyi9WUA8TINkZ4cGk63UUE3u4pmB2k+ZMQRDuIOagv8UVd6j7k0T3+RRIb7beKTebNbcw==",
"license": "MIT",
"dependencies": {
"regenerator-runtime": "^0.14.0"

View File

@ -62,6 +62,7 @@ export default [
entrypoint('jobs', 'src/jobs.entrypoint.js'),
entrypoint('user', 'src/user.entrypoint.js'),
entrypoint('list', 'src/list.entrypoint.js'),
entrypoint('taglist', 'src/tags.entrypoint.js'),
entrypoint('job', 'src/job.entrypoint.js'),
entrypoint('systems', 'src/systems.entrypoint.js'),
entrypoint('node', 'src/node.entrypoint.js'),

View File

@ -26,6 +26,8 @@
init,
convert2uplot,
binsFromFootprint,
scramble,
scrambleNames,
} from "./generic/utils.js";
import PlotSelection from "./analysis/PlotSelection.svelte";
import Filters from "./generic/Filters.svelte";
@ -395,7 +397,7 @@
quantities={$topQuery.data.topList.map(
(t) => t[sortSelection.key],
)}
entities={$topQuery.data.topList.map((t) => t.id)}
entities={$topQuery.data.topList.map((t) => scrambleNames ? scramble(t.id) : t.id)}
/>
{/if}
{/key}
@ -428,21 +430,21 @@
{#if groupSelection.key == "user"}
<th scope="col" id="topName-{te.id}"
><a href="/monitoring/user/{te.id}?cluster={clusterName}"
>{te.id}</a
>{scrambleNames ? scramble(te.id) : te.id}</a
></th
>
{#if te?.name}
<Tooltip
target={`topName-${te.id}`}
placement="left"
>{te.name}</Tooltip
>{scrambleNames ? scramble(te.name) : te.name}</Tooltip
>
{/if}
{:else}
<th scope="col"
><a
href="/monitoring/jobs/?cluster={clusterName}&project={te.id}&projectMatch=eq"
>{te.id}</a
>{scrambleNames ? scramble(te.id) : te.id}</a
></th
>
{/if}

View File

@ -40,7 +40,7 @@
import JobRoofline from "./job/JobRoofline.svelte";
import EnergySummary from "./job/EnergySummary.svelte";
import PlotGrid from "./generic/PlotGrid.svelte";
import StatsTable from "./job/StatsTable.svelte";
import StatsTab from "./job/StatsTab.svelte";
export let dbid;
export let username;
@ -53,10 +53,8 @@
let isMetricsSelectionOpen = false,
selectedMetrics = [],
selectedScopes = [];
let plots = {},
statsTable
selectedScopes = [],
plots = {};
let availableMetrics = new Set(),
missingMetrics = [],
@ -127,28 +125,16 @@
let job = $initq.data.job;
if (!job) return;
const pendingMetrics = [
...(
(
ccconfig[`job_view_selectedMetrics:${job.cluster}:${job.subCluster}`] ||
ccconfig[`job_view_selectedMetrics:${job.cluster}`]
) ||
$initq.data.globalMetrics
.reduce((names, gm) => {
if (gm.availability.find((av) => av.cluster === job.cluster && av.subClusters.includes(job.subCluster))) {
names.push(gm.name);
}
return names;
}, [])
),
...(
(
ccconfig[`job_view_nodestats_selectedMetrics:${job.cluster}:${job.subCluster}`] ||
ccconfig[`job_view_nodestats_selectedMetrics:${job.cluster}`]
) ||
ccconfig[`job_view_nodestats_selectedMetrics`]
),
];
const pendingMetrics = (
ccconfig[`job_view_selectedMetrics:${job.cluster}:${job.subCluster}`] ||
ccconfig[`job_view_selectedMetrics:${job.cluster}`]
) ||
$initq.data.globalMetrics.reduce((names, gm) => {
if (gm.availability.find((av) => av.cluster === job.cluster && av.subClusters.includes(job.subCluster))) {
names.push(gm.name);
}
return names;
}, [])
// Select default scopes to load: check beforehand whether any metric has accelerator scope by default
const accScopeDefault = [...pendingMetrics].some(function (m) {
@ -231,7 +217,7 @@
<Col xs={12} md={6} xl={3} class="mb-3 mb-xxl-0">
{#if $initq.error}
<Card body color="danger">{$initq.error.message}</Card>
{:else if $initq.data}
{:else if $initq?.data}
<Card class="overflow-auto" style="height: 400px;">
<TabContent> <!-- on:tab={(e) => (status = e.detail)} -->
{#if $initq.data?.job?.metaData?.message}
@ -305,7 +291,7 @@
<Card class="mb-3">
<CardBody>
<Row class="mb-2">
{#if $initq.data}
{#if $initq?.data}
<Col xs="auto">
<Button outline on:click={() => (isMetricsSelectionOpen = true)} color="primary">
Select Metrics (Selected {selectedMetrics.length} of {availableMetrics.size} available)
@ -318,7 +304,7 @@
{#if $jobMetrics.error}
<Row class="mt-2">
<Col>
{#if $initq.data.job.monitoringStatus == 0 || $initq.data.job.monitoringStatus == 2}
{#if $initq?.data && ($initq.data.job?.monitoringStatus == 0 || $initq.data.job?.monitoringStatus == 2)}
<Card body color="warning">Not monitored or archiving failed</Card>
<br />
{/if}
@ -343,7 +329,6 @@
{#if item.data}
<Metric
bind:this={plots[item.metric]}
on:more-loaded={({ detail }) => statsTable.moreLoaded(detail)}
job={$initq.data.job}
metricName={item.metric}
metricUnit={$initq.data.globalMetrics.find((gm) => gm.name == item.metric)?.unit}
@ -352,10 +337,25 @@
scopes={item.data.map((x) => x.scope)}
isShared={$initq.data.job.exclusive != 1}
/>
{:else if item.disabled == true}
<Card color="info">
<CardHeader class="mb-0">
<b>Disabled Metric</b>
</CardHeader>
<CardBody>
<p>Metric <b>{item.metric}</b> is disabled for subcluster <b>{$initq.data.job.subCluster}</b>.</p>
<p class="mb-1">To remove this card, open metric selection and press "Close and Apply".</p>
</CardBody>
</Card>
{:else}
<Card body color="warning" class="mt-2"
>No dataset returned for <code>{item.metric}</code></Card
>
<Card color="warning" class="mt-2">
<CardHeader class="mb-0">
<b>Missing Metric</b>
</CardHeader>
<CardBody>
<p class="mb-1">No dataset returned for <b>{item.metric}</b>.</p>
</CardBody>
</Card>
{/if}
</PlotGrid>
{/if}
@ -365,7 +365,7 @@
<!-- Statistics Table -->
<Row class="mb-3">
<Col>
{#if $initq.data}
{#if $initq?.data}
<Card>
<TabContent>
{#if somethingMissing}
@ -398,22 +398,8 @@
</div>
</TabPane>
{/if}
<TabPane
tabId="stats"
tab="Statistics Table"
class="overflow-x-auto"
active={!somethingMissing}
>
{#if $jobMetrics?.data?.jobMetrics}
{#key $jobMetrics.data.jobMetrics}
<StatsTable
bind:this={statsTable}
job={$initq.data.job}
jobMetrics={$jobMetrics.data.jobMetrics}
/>
{/key}
{/if}
</TabPane>
<!-- Includes <TabPane> Statistics Table with Independent GQL Query -->
<StatsTab job={$initq.data.job} clusters={$initq.data.clusters} tabActive={!somethingMissing}/>
<TabPane tabId="job-script" tab="Job Script">
<div class="pre-wrapper">
{#if $initq.data.job.metaData?.jobScript}
@ -440,7 +426,7 @@
</Col>
</Row>
{#if $initq.data}
{#if $initq?.data}
<MetricSelection
cluster={$initq.data.job.cluster}
subCluster={$initq.data.job.subCluster}

View File

@ -31,6 +31,8 @@
init,
convert2uplot,
transformPerNodeDataForRoofline,
scramble,
scrambleNames,
} from "./generic/utils.js";
import { scaleNumbers } from "./generic/units.js";
import PlotGrid from "./generic/PlotGrid.svelte";
@ -486,7 +488,7 @@
quantities={$topUserQuery.data.topUser.map(
(tu) => tu[topUserSelection.key],
)}
entities={$topUserQuery.data.topUser.map((tu) => tu.id)}
entities={$topUserQuery.data.topUser.map((tu) => scrambleNames ? scramble(tu.id) : tu.id)}
/>
{/if}
{/key}
@ -520,14 +522,14 @@
<th scope="col" id="topName-{tu.id}"
><a
href="/monitoring/user/{tu.id}?cluster={cluster}&state=running"
>{tu.id}</a
>{scrambleNames ? scramble(tu.id) : tu.id}</a
></th
>
{#if tu?.name}
<Tooltip
target={`topName-${tu.id}`}
placement="left"
>{tu.name}</Tooltip
>{scrambleNames ? scramble(tu.name) : tu.name}</Tooltip
>
{/if}
<td>{tu[topUserSelection.key]}</td>
@ -553,7 +555,7 @@
quantities={$topProjectQuery.data.topProjects.map(
(tp) => tp[topProjectSelection.key],
)}
entities={$topProjectQuery.data.topProjects.map((tp) => tp.id)}
entities={$topProjectQuery.data.topProjects.map((tp) => scrambleNames ? scramble(tp.id) : tp.id)}
/>
{/if}
{/key}
@ -586,7 +588,7 @@
<th scope="col"
><a
href="/monitoring/jobs/?cluster={cluster}&state=running&project={tp.id}&projectMatch=eq"
>{tp.id}</a
>{scrambleNames ? scramble(tp.id) : tp.id}</a
></th
>
<td>{tp[topProjectSelection.key]}</td>

View File

@ -0,0 +1,110 @@
<!--
@component Tag List Svelte Component. Displays all tags and allows deletion.
Properties:
- `username String!`: The user's username.
- `isAdmin Bool!`: Whether the user has admin authorization.
- `tagmap Object!`: Map of accessible, app-wide tags, prefiltered in the backend.
-->
<script>
import {
gql,
getContextClient,
mutationStore,
} from "@urql/svelte";
import {
Badge,
InputGroup,
Icon,
Button,
Spinner,
} from "@sveltestrap/sveltestrap";
import {
init,
} from "./generic/utils.js";
export let username;
export let isAdmin;
export let tagmap;
const {} = init();
const client = getContextClient();
let pendingChange = "none";
const removeTagMutation = ({ tagIds }) => {
return mutationStore({
client: client,
query: gql`
mutation ($tagIds: [ID!]!) {
removeTagFromList(tagIds: $tagIds)
}
`,
variables: { tagIds },
});
};
function removeTag(tag, tagType) {
if (confirm("Are you sure you want to completely remove this tag?\n\n" + tagType + ':' + tag.name)) {
pendingChange = tagType;
removeTagMutation({tagIds: [tag.id] }).subscribe(
(res) => {
if (res.fetching === false && !res.error) {
tagmap[tagType] = tagmap[tagType].filter((t) => !res.data.removeTagFromList.includes(t.id));
if (tagmap[tagType].length === 0) {
delete tagmap[tagType]
}
pendingChange = "none";
} else if (res.fetching === false && res.error) {
throw res.error;
}
},
);
}
}
</script>
<div class="container">
<div class="row justify-content-center">
<div class="col-10">
{#each Object.entries(tagmap) as [tagType, tagList]}
<div class="my-3 p-2 bg-secondary rounded text-white"> <!-- text-capitalize -->
Tag Type: <b>{tagType}</b>
{#if pendingChange === tagType}
<Spinner size="sm" secondary />
{/if}
<span style="float: right; padding-bottom: 0.4rem; padding-top: 0.4rem;" class="badge bg-light text-secondary">
{tagList.length} Tag{(tagList.length != 1)?'s':''}
</span>
</div>
<div class="d-inline-flex flex-wrap">
{#each tagList as tag (tag.id)}
<InputGroup class="w-auto flex-nowrap" style="margin-right: 0.5rem; margin-bottom: 0.5rem;">
<Button outline color="secondary" href="/monitoring/jobs/?tag={tag.id}" target="_blank">
<Badge color="light" style="font-size:medium;" border>{tag.name}</Badge> :
<Badge color="primary" pill>{tag.count} Job{(tag.count != 1)?'s':''}</Badge>
{#if tag.scope == "global"}
<Badge style="background-color:#c85fc8 !important;" pill>Global</Badge>
{:else if tag.scope == "admin"}
<Badge style="background-color:#19e5e6 !important;" pill>Admin</Badge>
{:else}
<Badge color="warning" pill>Private</Badge>
{/if}
</Button>
{#if (isAdmin && (tag.scope == "admin" || tag.scope == "global")) || tag.scope == username }
<Button
size="sm"
color="danger"
on:click={() => removeTag(tag, tagType)}
>
<Icon name="x" />
</Button>
{/if}
</InputGroup>
{/each}
</div>
{/each}
</div>
</div>
</div>

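The removal flow reduces to a single GraphQL mutation; a sketch of the request, where (per the subscription handler above) the response is assumed to be the list of removed tag IDs, against which the component then filters its local tagmap:

mutation {
  removeTagFromList(tagIds: ["42"])
}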
View File

@ -53,10 +53,16 @@
{ range: "last30d", rangeLabel: "Last 30 days"}
];
const nodeMatchLabels = {
eq: "",
contains: " Contains",
}
let filters = {
projectMatch: filterPresets.projectMatch || "contains",
userMatch: filterPresets.userMatch || "contains",
jobIdMatch: filterPresets.jobIdMatch || "eq",
nodeMatch: filterPresets.nodeMatch || "eq",
cluster: filterPresets.cluster || null,
partition: filterPresets.partition || null,
@ -106,7 +112,7 @@
let items = [];
if (filters.cluster) items.push({ cluster: { eq: filters.cluster } });
if (filters.node) items.push({ node: { contains: filters.node } });
if (filters.node) items.push({ node: { [filters.nodeMatch]: filters.node } });
if (filters.partition) items.push({ partition: { eq: filters.partition } });
if (filters.states.length != allJobStates.length)
items.push({ state: filters.states });
@ -178,6 +184,8 @@
let opts = [];
if (filters.cluster) opts.push(`cluster=${filters.cluster}`);
if (filters.node) opts.push(`node=${filters.node}`);
if (filters.node && filters.nodeMatch != "eq") // "eq" is default-case
opts.push(`nodeMatch=${filters.nodeMatch}`);
if (filters.partition) opts.push(`partition=${filters.partition}`);
if (filters.states.length != allJobStates.length)
for (let state of filters.states) opts.push(`state=${state}`);
@ -196,7 +204,7 @@
opts.push(`jobId=${singleJobId}`);
}
if (filters.jobIdMatch != "eq")
opts.push(`jobIdMatch=${filters.jobIdMatch}`);
opts.push(`jobIdMatch=${filters.jobIdMatch}`); // "eq" is default-case
for (let tag of filters.tags) opts.push(`tag=${tag}`);
if (filters.duration.from && filters.duration.to)
opts.push(`duration=${filters.duration.from}-${filters.duration.to}`);
@ -218,13 +226,13 @@
} else {
for (let singleUser of filters.user) opts.push(`user=${singleUser}`);
}
if (filters.userMatch != "contains")
if (filters.userMatch != "contains") // "contains" is default-case
opts.push(`userMatch=${filters.userMatch}`);
if (filters.project) opts.push(`project=${filters.project}`);
if (filters.project && filters.projectMatch != "contains") // "contains" is default-case
opts.push(`projectMatch=${filters.projectMatch}`);
if (filters.jobName) opts.push(`jobName=${filters.jobName}`);
if (filters.arrayJobId) opts.push(`arrayJobId=${filters.arrayJobId}`);
if (filters.project && filters.projectMatch != "contains")
opts.push(`projectMatch=${filters.projectMatch}`);
if (filters.stats.length != 0)
for (let stat of filters.stats) {
opts.push(`stat=${stat.field}-${stat.from}-${stat.to}`);
@ -386,7 +394,7 @@
{#if filters.node != null}
<Info icon="hdd-stack" on:click={() => (isResourcesOpen = true)}>
Node: {filters.node}
Node{nodeMatchLabels[filters.nodeMatch]}: {filters.node}
</Info>
{/if}
@ -449,6 +457,7 @@
bind:numHWThreads={filters.numHWThreads}
bind:numAccelerators={filters.numAccelerators}
bind:namedNode={filters.node}
bind:nodeMatch={filters.nodeMatch}
bind:isNodesModified
bind:isHwthreadsModified
bind:isAccsModified

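With this change, a non-default node match mode round-trips through the URL, e.g. (hypothetical values):

/monitoring/jobs/?node=node42&nodeMatch=contains

while the default nodeMatch=eq is left out of the URL entirely.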
View File

@ -24,6 +24,7 @@
ModalBody,
ModalHeader,
ModalFooter,
Input
} from "@sveltestrap/sveltestrap";
import DoubleRangeSlider from "../select/DoubleRangeSlider.svelte";
@ -40,11 +41,18 @@
export let isHwthreadsModified = false;
export let isAccsModified = false;
export let namedNode = null;
export let nodeMatch = "eq"
let pendingNumNodes = numNodes,
pendingNumHWThreads = numHWThreads,
pendingNumAccelerators = numAccelerators,
pendingNamedNode = namedNode;
pendingNamedNode = namedNode,
pendingNodeMatch = nodeMatch;
const nodeMatchLabels = {
eq: "Equal To",
contains: "Contains",
}
const findMaxNumAccels = (clusters) =>
clusters.reduce(
@ -145,7 +153,17 @@
<ModalHeader>Select number of utilized Resources</ModalHeader>
<ModalBody>
<h6>Named Node</h6>
<input type="text" class="form-control" bind:value={pendingNamedNode} />
<div class="d-flex">
<Input type="text" class="w-75" bind:value={pendingNamedNode} />
<div class="mx-1"></div>
<Input type="select" class="w-25" bind:value={pendingNodeMatch}>
{#each Object.entries(nodeMatchLabels) as [nodeMatchKey, nodeMatchLabel]}
<option value={nodeMatchKey}>
{nodeMatchLabel}
</option>
{/each}
</Input>
</div>
<h6 style="margin-top: 1rem;">Number of Nodes</h6>
<DoubleRangeSlider
on:change={({ detail }) => {
@ -215,11 +233,13 @@
to: pendingNumAccelerators.to,
};
namedNode = pendingNamedNode;
nodeMatch = pendingNodeMatch;
dispatch("set-filter", {
numNodes,
numHWThreads,
numAccelerators,
namedNode,
nodeMatch
});
}}
>
@ -233,6 +253,7 @@
pendingNumHWThreads = { from: null, to: null };
pendingNumAccelerators = { from: null, to: null };
pendingNamedNode = null;
pendingNodeMatch = "eq"; // reset to the default match mode
numNodes = { from: pendingNumNodes.from, to: pendingNumNodes.to };
numHWThreads = {
from: pendingNumHWThreads.from,
@ -246,11 +267,13 @@
isHwthreadsModified = false;
isAccsModified = false;
namedNode = pendingNamedNode;
nodeMatch = pendingNodeMatch;
dispatch("set-filter", {
numNodes,
numHWThreads,
numAccelerators,
namedNode,
nodeMatch
});
}}>Reset</Button
>

View File

@ -14,7 +14,6 @@
<script>
import {
getContext,
createEventDispatcher
} from "svelte";
import {
queryStore,
@ -56,7 +55,6 @@
let pendingZoomState = null;
let thresholdState = null;
const dispatch = createEventDispatcher();
const statsPattern = /(.*)-stat$/;
const unit = (metricUnit?.prefix ? metricUnit.prefix : "") + (metricUnit?.base ? metricUnit.base : "");
const client = getContextClient();
@ -150,11 +148,6 @@
// On additional scope request
if (selectedScope == "load-all") {
// Push scope to statsTable (Needs to be in this case, else newly selected 'Metric.svelte' renders cause statsTable race condition)
const statsTableData = $metricData.data.singleUpdate.filter((x) => x.scope !== "node")
if (statsTableData.length > 0) {
dispatch("more-loaded", statsTableData);
}
// Set selected scope to min of returned scopes
selectedScope = minScope(scopes)
nodeOnly = (selectedScope == "node") // "node" still only scope after load-all

View File

@ -0,0 +1,145 @@
<!--
@component Job-View subcomponent; wraps the StatsTable in a TabPane and contains the GQL query for scoped stats data
Properties:
- `job Object`: The job object
- `clusters Object`: The clusters object
- `tabActive bool`: Whether the StatsTable tab is active on creation
-->
<script>
import {
queryStore,
gql,
getContextClient
} from "@urql/svelte";
import { getContext } from "svelte";
import {
Card,
Button,
Row,
Col,
TabPane,
Spinner,
Icon
} from "@sveltestrap/sveltestrap";
import MetricSelection from "../generic/select/MetricSelection.svelte";
import StatsTable from "./statstab/StatsTable.svelte";
export let job;
export let clusters;
export let tabActive;
let loadScopes = false;
let selectedScopes = [];
let selectedMetrics = [];
let availableMetrics = new Set(); // For Info Only, filled by MetricSelection Component
let isMetricSelectionOpen = false;
const client = getContextClient();
const query = gql`
query ($dbid: ID!, $selectedMetrics: [String!]!, $selectedScopes: [MetricScope!]!) {
scopedJobStats(id: $dbid, metrics: $selectedMetrics, scopes: $selectedScopes) {
name
scope
stats {
hostname
id
data {
min
avg
max
}
}
}
}
`;
$: scopedStats = queryStore({
client: client,
query: query,
variables: { dbid: job.id, selectedMetrics, selectedScopes },
});
$: if (loadScopes) {
selectedScopes = ["node", "socket", "core", "hwthread", "accelerator"];
}
// Handle job query on init; not executed again after initial load
getContext("on-init")(() => {
if (!job) return;
const pendingMetrics = (
getContext("cc-config")[`job_view_nodestats_selectedMetrics:${job.cluster}:${job.subCluster}`] ||
getContext("cc-config")[`job_view_nodestats_selectedMetrics:${job.cluster}`]
) || getContext("cc-config")["job_view_nodestats_selectedMetrics"];
// Select default scopes to load: check beforehand whether any metric has accelerator scope by default
const accScopeDefault = [...pendingMetrics].some(function (m) {
const cluster = clusters.find((c) => c.name == job.cluster);
const subCluster = cluster.subClusters.find((sc) => sc.name == job.subCluster);
return subCluster.metricConfig.find((smc) => smc.name == m)?.scope === "accelerator";
});
const pendingScopes = ["node"]
if (job.numNodes === 1) {
pendingScopes.push("socket")
pendingScopes.push("core")
pendingScopes.push("hwthread")
if (accScopeDefault) { pendingScopes.push("accelerator") }
}
selectedMetrics = [...pendingMetrics];
selectedScopes = [...pendingScopes];
});
</script>
<TabPane tabId="stats" tab="Statistics Table" class="overflow-x-auto" active={tabActive}>
<Row>
<Col class="m-2">
<Button outline on:click={() => (isMetricSelectionOpen = true)} class="px-2" color="primary" style="margin-right:0.5rem">
Select Metrics (Selected {selectedMetrics.length} of {availableMetrics.size} available)
</Button>
{#if job.numNodes > 1}
<Button class="px-2 ml-auto" color="success" outline on:click={() => (loadScopes = !loadScopes)} disabled={loadScopes}>
{#if !loadScopes}
<Icon name="plus-square-fill" style="margin-right:0.25rem"/> Add More Scopes
{:else}
<Icon name="check-square-fill" style="margin-right:0.25rem"/> OK: Scopes Added
{/if}
</Button>
{/if}
</Col>
</Row>
<hr class="mb-1 mt-1"/>
<!-- ROW1: Status-->
{#if $scopedStats.fetching}
<Row>
<Col class="m-3" style="text-align: center;">
<Spinner secondary/>
</Col>
</Row>
{:else if $scopedStats.error}
<Row>
<Col class="m-2">
<Card body color="danger">{$scopedStats.error.message}</Card>
</Col>
</Row>
{:else}
<StatsTable
hosts={job.resources.map((r) => r.hostname).sort()}
data={$scopedStats?.data?.scopedJobStats}
{selectedMetrics}
/>
{/if}
</TabPane>
<MetricSelection
cluster={job.cluster}
subCluster={job.subCluster}
configName="job_view_nodestats_selectedMetrics"
bind:allMetrics={availableMetrics}
bind:metrics={selectedMetrics}
bind:isOpen={isMetricSelectionOpen}
/>

View File

@ -1,178 +0,0 @@
<!--
@component Job-View subcomponent; display table of metric data statistics with selectable scopes
Properties:
- `job Object`: The job object
- `jobMetrics [Object]`: The jobs metricdata
Exported:
- `moreLoaded`: Adds additional scopes requested from Metric.svelte in Job-View
-->
<script>
import { getContext } from "svelte";
import {
Button,
Table,
Input,
InputGroup,
InputGroupText,
Icon,
Row,
Col
} from "@sveltestrap/sveltestrap";
import { maxScope } from "../generic/utils.js";
import StatsTableEntry from "./StatsTableEntry.svelte";
import MetricSelection from "../generic/select/MetricSelection.svelte";
export let job;
export let jobMetrics;
const sortedJobMetrics = [...new Set(jobMetrics.map((m) => m.name))].sort()
const scopesForMetric = (metric) =>
jobMetrics.filter((jm) => jm.name == metric).map((jm) => jm.scope);
let hosts = job.resources.map((r) => r.hostname).sort(),
selectedScopes = {},
sorting = {},
isMetricSelectionOpen = false,
availableMetrics = new Set(),
selectedMetrics = (
getContext("cc-config")[`job_view_nodestats_selectedMetrics:${job.cluster}:${job.subCluster}`] ||
getContext("cc-config")[`job_view_nodestats_selectedMetrics:${job.cluster}`]
) || getContext("cc-config")["job_view_nodestats_selectedMetrics"];
for (let metric of sortedJobMetrics) {
// Not Exclusive or Multi-Node: get maxScope directly (mostly: node)
// -> Else: Load smallest available granularity as default as per availability
const availableScopes = scopesForMetric(metric);
if (job.exclusive != 1 || job.numNodes == 1) {
if (availableScopes.includes("accelerator")) {
selectedScopes[metric] = "accelerator";
} else if (availableScopes.includes("core")) {
selectedScopes[metric] = "core";
} else if (availableScopes.includes("socket")) {
selectedScopes[metric] = "socket";
} else {
selectedScopes[metric] = "node";
}
} else {
selectedScopes[metric] = maxScope(availableScopes);
}
sorting[metric] = {
min: { dir: "up", active: false },
avg: { dir: "up", active: false },
max: { dir: "up", active: false },
};
}
function sortBy(metric, stat) {
let s = sorting[metric][stat];
if (s.active) {
s.dir = s.dir == "up" ? "down" : "up";
} else {
for (let metric in sorting)
for (let stat in sorting[metric]) sorting[metric][stat].active = false;
s.active = true;
}
let series = jobMetrics.find(
(jm) => jm.name == metric && jm.scope == "node",
)?.metric.series;
sorting = { ...sorting };
hosts = hosts.sort((h1, h2) => {
let s1 = series.find((s) => s.hostname == h1)?.statistics;
let s2 = series.find((s) => s.hostname == h2)?.statistics;
if (s1 == null || s2 == null) return -1;
return s.dir != "up" ? s1[stat] - s2[stat] : s2[stat] - s1[stat];
});
}
export function moreLoaded(moreJobMetrics) {
moreJobMetrics.forEach(function (newMetric) {
if (!jobMetrics.some((m) => m.scope == newMetric.scope)) {
jobMetrics = [...jobMetrics, newMetric]
}
});
};
</script>
<Row>
<Col class="m-2">
<Button outline on:click={() => (isMetricSelectionOpen = true)} class="w-auto px-2" color="primary">
Select Metrics (Selected {selectedMetrics.length} of {availableMetrics.size} available)
</Button>
</Col>
</Row>
<hr class="mb-1 mt-1"/>
<Table class="mb-0">
<thead>
<!-- Header Row 1: Selectors -->
<tr>
<th/>
{#each selectedMetrics as metric}
<!-- To Match Row-2 Header Field Count-->
<th colspan={selectedScopes[metric] == "node" ? 3 : 4}>
<InputGroup>
<InputGroupText>
{metric}
</InputGroupText>
<Input type="select" bind:value={selectedScopes[metric]}>
{#each scopesForMetric(metric, jobMetrics) as scope}
<option value={scope}>{scope}</option>
{/each}
</Input>
</InputGroup>
</th>
{/each}
</tr>
<!-- Header Row 2: Fields -->
<tr>
<th>Node</th>
{#each selectedMetrics as metric}
{#if selectedScopes[metric] != "node"}
<th>Id</th>
{/if}
{#each ["min", "avg", "max"] as stat}
<th on:click={() => sortBy(metric, stat)}>
{stat}
{#if selectedScopes[metric] == "node"}
<Icon
name="caret-{sorting[metric][stat].dir}{sorting[metric][stat]
.active
? '-fill'
: ''}"
/>
{/if}
</th>
{/each}
{/each}
</tr>
</thead>
<tbody>
{#each hosts as host (host)}
<tr>
<th scope="col">{host}</th>
{#each selectedMetrics as metric (metric)}
<StatsTableEntry
{host}
{metric}
scope={selectedScopes[metric]}
{jobMetrics}
/>
{/each}
</tr>
{/each}
</tbody>
</Table>
<MetricSelection
cluster={job.cluster}
subCluster={job.subCluster}
configName="job_view_nodestats_selectedMetrics"
bind:allMetrics={availableMetrics}
bind:metrics={selectedMetrics}
bind:isOpen={isMetricSelectionOpen}
/>

View File

@ -40,14 +40,14 @@
const client = getContextClient();
const polarQuery = gql`
query ($dbid: ID!, $selectedMetrics: [String!]!) {
jobMetricStats(id: $dbid, metrics: $selectedMetrics) {
jobStats(id: $dbid, metrics: $selectedMetrics) {
name
stats {
min
avg
max
}
min
avg
max
}
}
}
`;
@ -66,7 +66,7 @@
{:else}
<Polar
{polarMetrics}
polarData={$polarData.data.jobMetricStats}
polarData={$polarData.data.jobStats}
/>
{/if}
</CardBody>

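The reworked jobStats query thus returns a flat list of per-metric statistics; a response sketch with illustrative values:

{
  "jobStats": [
    { "name": "cpu_load", "min": 0.8, "avg": 15.2, "max": 31.7 }
  ]
}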
View File

@ -0,0 +1,139 @@
<!--
@component Job-View subcomponent; displays table of metric data statistics with selectable scopes
Properties:
- `data Object`: The scoped job stats data (from the parent's scopedJobStats query)
- `selectedMetrics [String]`: The selected metrics
- `hosts [String]`: The list of hostnames of this job
-->
<script>
import {
Table,
Input,
InputGroup,
InputGroupText,
Icon,
} from "@sveltestrap/sveltestrap";
import StatsTableEntry from "./StatsTableEntry.svelte";
export let data = [];
export let selectedMetrics = [];
export let hosts = [];
let sorting = {};
let availableScopes = {};
let selectedScopes = {};
const scopesForMetric = (metric) =>
data?.filter((jm) => jm.name == metric)?.map((jm) => jm.scope) || [];
const setScopeForMetric = (metric, scope) =>
selectedScopes[metric] = scope
$: if (data && selectedMetrics) {
for (let metric of selectedMetrics) {
availableScopes[metric] = scopesForMetric(metric);
// Set initial selection via helper instead of assigning selectedScopes directly: avoids re-triggering this reactive block
if (availableScopes[metric].includes("accelerator")) {
setScopeForMetric(metric, "accelerator");
} else if (availableScopes[metric].includes("core")) {
setScopeForMetric(metric, "core");
} else if (availableScopes[metric].includes("socket")) {
setScopeForMetric(metric, "socket");
} else {
setScopeForMetric(metric, "node");
}
sorting[metric] = {
min: { dir: "up", active: false },
avg: { dir: "up", active: false },
max: { dir: "up", active: false },
};
}
}
function sortBy(metric, stat) {
let s = sorting[metric][stat];
if (s.active) {
s.dir = s.dir == "up" ? "down" : "up";
} else {
for (let metric in sorting)
for (let stat in sorting[metric]) sorting[metric][stat].active = false;
s.active = true;
}
let stats = data.find(
(d) => d.name == metric && d.scope == "node",
)?.stats || [];
sorting = { ...sorting };
hosts = hosts.sort((h1, h2) => {
let s1 = stats.find((s) => s.hostname == h1)?.data;
let s2 = stats.find((s) => s.hostname == h2)?.data;
if (s1 == null || s2 == null) return -1;
return s.dir != "up" ? s1[stat] - s2[stat] : s2[stat] - s1[stat];
});
}
</script>
<Table class="mb-0">
<thead>
<!-- Header Row 1: Selectors -->
<tr>
<th/>
{#each selectedMetrics as metric}
<!-- To Match Row-2 Header Field Count-->
<th colspan={selectedScopes[metric] == "node" ? 3 : 4}>
<InputGroup>
<InputGroupText>
{metric}
</InputGroupText>
<Input type="select" bind:value={selectedScopes[metric]} disabled={availableScopes[metric].length === 1}>
{#each (availableScopes[metric] || []) as scope}
<option value={scope}>{scope}</option>
{/each}
</Input>
</InputGroup>
</th>
{/each}
</tr>
<!-- Header Row 2: Fields -->
<tr>
<th>Node</th>
{#each selectedMetrics as metric}
{#if selectedScopes[metric] != "node"}
<th>Id</th>
{/if}
{#each ["min", "avg", "max"] as stat}
<th on:click={() => sortBy(metric, stat)}>
{stat}
{#if selectedScopes[metric] == "node"}
<Icon
name="caret-{sorting[metric][stat].dir}{sorting[metric][stat]
.active
? '-fill'
: ''}"
/>
{/if}
</th>
{/each}
{/each}
</tr>
</thead>
<tbody>
{#each hosts as host (host)}
<tr>
<th scope="col">{host}</th>
{#each selectedMetrics as metric (metric)}
<StatsTableEntry
{data}
{host}
{metric}
scope={selectedScopes[metric]}
/>
{/each}
</tr>
{/each}
</tbody>
</Table>

View File

@ -1,11 +1,11 @@
<!--
@component Job-View subcomponent; Single Statistics entry component fpr statstable
@component Job-View subcomponent; Single Statistics entry component for statstable
Properties:
- `host String`: The hostname (== node)
- `metric String`: The metric name
- `scope String`: The selected scope
- `jobMetrics [Object]`: The jobs metricdata
- `data [Object]`: The job's stats data
-->
<script>
@ -14,59 +14,61 @@
export let host;
export let metric;
export let scope;
export let jobMetrics;
export let data;
function compareNumbers(a, b) {
return a.id - b.id;
}
function sortByField(field) {
let s = sorting[field];
if (s.active) {
s.dir = s.dir == "up" ? "down" : "up";
} else {
for (let field in sorting) sorting[field].active = false;
s.active = true;
}
sorting = { ...sorting };
series = series.sort((a, b) => {
if (a == null || b == null) return -1;
if (field === "id") {
return s.dir != "up" ? a[field] - b[field] : b[field] - a[field];
} else {
return s.dir != "up"
? a.statistics[field] - b.statistics[field]
: b.statistics[field] - a.statistics[field];
}
});
}
let sorting = {
let entrySorting = {
id: { dir: "down", active: true },
min: { dir: "up", active: false },
avg: { dir: "up", active: false },
max: { dir: "up", active: false },
};
$: series = jobMetrics
.find((jm) => jm.name == metric && jm.scope == scope)
?.metric.series.filter((s) => s.hostname == host && s.statistics != null)
?.sort(compareNumbers);
function compareNumbers(a, b) {
return a.id - b.id;
}
function sortByField(field) {
let s = entrySorting[field];
if (s.active) {
s.dir = s.dir == "up" ? "down" : "up";
} else {
for (let field in entrySorting) entrySorting[field].active = false;
s.active = true;
}
entrySorting = { ...entrySorting };
stats = stats.sort((a, b) => {
if (a == null || b == null) return -1;
if (field === "id") {
return s.dir != "up" ?
a[field].localeCompare(b[field], undefined, {numeric: true, sensitivity: 'base'}) :
b[field].localeCompare(a[field], undefined, {numeric: true, sensitivity: 'base'})
} else {
return s.dir != "up"
? a.data[field] - b.data[field]
: b.data[field] - a.data[field];
}
});
}
$: stats = data
?.find((d) => d.name == metric && d.scope == scope)
?.stats.filter((s) => s.hostname == host && s.data != null)
?.sort(compareNumbers) || [];
</script>
{#if series == null || series.length == 0}
{#if stats == null || stats.length == 0}
<td colspan={scope == "node" ? 3 : 4}><i>No data</i></td>
{:else if series.length == 1 && scope == "node"}
{:else if stats.length == 1 && scope == "node"}
<td>
{series[0].statistics.min}
{stats[0].data.min}
</td>
<td>
{series[0].statistics.avg}
{stats[0].data.avg}
</td>
<td>
{series[0].statistics.max}
{stats[0].data.max}
</td>
{:else}
<td colspan="4">
@ -76,19 +78,19 @@
<th on:click={() => sortByField(field)}>
Sort
<Icon
name="caret-{sorting[field].dir}{sorting[field].active
name="caret-{entrySorting[field].dir}{entrySorting[field].active
? '-fill'
: ''}"
/>
</th>
{/each}
</tr>
{#each series as s, i}
{#each stats as s, i}
<tr>
<th>{s.id ?? i}</th>
<td>{s.statistics.min}</td>
<td>{s.statistics.avg}</td>
<td>{s.statistics.max}</td>
<td>{s.data.min}</td>
<td>{s.data.avg}</td>
<td>{s.data.max}</td>
</tr>
{/each}
</table>

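The switch from numeric subtraction to localeCompare with numeric: true matters because the ids are strings here; a plain lexicographic sort would place "10" before "2". A quick sketch:

// Numeric-aware string comparison as used in sortByField above:
["10", "2", "1"].sort((a, b) =>
  a.localeCompare(b, undefined, { numeric: true, sensitivity: 'base' })
);
// -> ["1", "2", "10"]  (a plain .sort() would give ["1", "10", "2"])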
View File

@ -205,7 +205,7 @@
</Col>
</Row>
{:else}
{#each nodes as nodeData}
{#each nodes as nodeData (nodeData.host)}
<NodeListRow {nodeData} {cluster} {selectedMetrics}/>
{:else}
<tr>

View File

@ -17,6 +17,9 @@
Input,
InputGroup,
InputGroupText, } from "@sveltestrap/sveltestrap";
import {
scramble,
scrambleNames, } from "../../generic/utils.js";
export let cluster;
export let subCluster
@ -32,8 +35,8 @@
let userList;
let projectList;
$: if (nodeJobsData) {
userList = Array.from(new Set(nodeJobsData.jobs.items.map((j) => j.user))).sort((a, b) => a.localeCompare(b));
projectList = Array.from(new Set(nodeJobsData.jobs.items.map((j) => j.project))).sort((a, b) => a.localeCompare(b));
userList = Array.from(new Set(nodeJobsData.jobs.items.map((j) => scrambleNames ? scramble(j.user) : j.user))).sort((a, b) => a.localeCompare(b));
projectList = Array.from(new Set(nodeJobsData.jobs.items.map((j) => scrambleNames ? scramble(j.project) : j.project))).sort((a, b) => a.localeCompare(b));
}
</script>

View File

@ -14,7 +14,7 @@
getContextClient,
} from "@urql/svelte";
import { Card, CardBody, Spinner } from "@sveltestrap/sveltestrap";
import { maxScope, checkMetricDisabled } from "../../generic/utils.js";
import { maxScope, checkMetricDisabled, scramble, scrambleNames } from "../../generic/utils.js";
import MetricPlot from "../../generic/plots/MetricPlot.svelte";
import NodeInfo from "./NodeInfo.svelte";
@ -110,9 +110,12 @@
extendedLegendData = {}
for (const accId of accSet) {
const matchJob = $nodeJobsData.data.jobs.items.find((i) => i.resources.find((r) => r.accelerators.includes(accId)))
const matchUser = matchJob?.user ? matchJob.user : null
extendedLegendData[accId] = {
user: matchJob?.user ? matchJob?.user : '-',
job: matchJob?.jobId ? matchJob?.jobId : '-',
user: (scrambleNames && matchUser)
? scramble(matchUser)
: (matchUser ? matchUser : '-'),
job: matchJob?.jobId ? matchJob.jobId : '-',
}
}
// Theoretically extendable for hwthreadIDs

View File

@ -0,0 +1,16 @@
import {} from './header.entrypoint.js'
import Tags from './Tags.root.svelte'
new Tags({
target: document.getElementById('svelte-app'),
props: {
username: username,
isAdmin: isAdmin,
tagmap: tagmap,
},
context: new Map([
['cc-config', clusterCockpitConfig]
])
})

View File

@ -1,37 +1,15 @@
{{define "content"}}
<div class="container">
<div class="row justify-content-center">
<div class="col-10">
{{ range $tagType, $tagList := .Infos.tagmap }}
<div class="my-3 p-2 bg-secondary rounded text-white"> <!-- text-capitalize -->
Tag Type: <b>{{ $tagType }}</b>
<span style="float: right; padding-bottom: 0.4rem; padding-top: 0.4rem;" class="badge bg-light text-secondary">
{{len $tagList}} Tag{{if ne (len $tagList) 1}}s{{end}}
</span>
</div>
{{ range $tagList }}
{{if eq .scope "global"}}
<a class="btn btn-outline-secondary" href="/monitoring/jobs/?tag={{ .id }}" role="button">
{{ .name }}
<span class="badge bg-primary mr-1">{{ .count }} Job{{if ne .count 1}}s{{end}}</span>
<span style="background-color:#c85fc8;" class="badge text-dark">Global</span>
</a>
{{else if eq .scope "admin"}}
<a class="btn btn-outline-secondary" href="/monitoring/jobs/?tag={{ .id }}" role="button">
{{ .name }}
<span class="badge bg-primary mr-1">{{ .count }} Job{{if ne .count 1}}s{{end}}</span>
<span style="background-color:#19e5e6;" class="badge text-dark">Admin</span>
</a>
{{else}}
<a class="btn btn-outline-secondary" href="/monitoring/jobs/?tag={{ .id }}" role="button">
{{ .name }}
<span class="badge bg-primary mr-1">{{ .count }} Job{{if ne .count 1}}s{{end}}</span>
<span class="badge bg-warning text-dark">Private</span>
</a>
{{end}}
{{end}}
{{end}}
</div>
</div>
</div>
<div id="svelte-app"></div>
{{end}}
{{define "stylesheets"}}
<link rel='stylesheet' href='/build/taglist.css'>
{{end}}
{{define "javascript"}}
<script>
const username = {{ .User.Username }};
const isAdmin = {{ .User.HasRole .Roles.admin }};
const tagmap = {{ .Infos.tagmap }};
const clusterCockpitConfig = {{ .Config }};
</script>
<script src='/build/taglist.js'></script>
{{end}}

View File

@ -26,8 +26,7 @@ var frontendFiles embed.FS
func ServeFiles() http.Handler {
publicFiles, err := fs.Sub(frontendFiles, "frontend/public")
if err != nil {
log.Fatalf("WEB/WEB > cannot find frontend public files")
panic(err)
log.Abortf("Serve Files: Could not find 'frontend/public' file directory.\nError: %s\n", err.Error())
}
return http.FileServer(http.FS(publicFiles))
}
@ -75,8 +74,7 @@ func init() {
templates[strings.TrimPrefix(path, "templates/")] = template.Must(template.Must(base.Clone()).ParseFS(templateFiles, path))
return nil
}); err != nil {
log.Fatalf("WEB/WEB > cannot find frontend template files")
panic(err)
log.Abortf("Web init(): Could not find frontend template files.\nError: %s\n", err.Error())
}
_ = base