diff --git a/.gitignore b/.gitignore index c98c504..6340afc 100644 --- a/.gitignore +++ b/.gitignore @@ -1,5 +1,4 @@ -# executable: -cc-backend +/cc-backend /var/job-archive /var/*.db @@ -8,3 +7,5 @@ cc-backend /.env /config.json +/web/frontend/public/build +/web/frontend/node_modules diff --git a/.gitmodules b/.gitmodules deleted file mode 100644 index 4a8b121..0000000 --- a/.gitmodules +++ /dev/null @@ -1,5 +0,0 @@ -[submodule "frontend"] - path = frontend - url = git@github.com:ClusterCockpit/cc-frontend.git - branch = main - update = merge diff --git a/README.md b/README.md index 75b5f26..b6a9670 100644 --- a/README.md +++ b/README.md @@ -3,21 +3,27 @@ [![Build](https://github.com/ClusterCockpit/cc-backend/actions/workflows/test.yml/badge.svg)](https://github.com/ClusterCockpit/cc-backend/actions/workflows/test.yml) This is a Golang backend implementation for a REST and GraphQL API according to the [ClusterCockpit specifications](https://github.com/ClusterCockpit/cc-specifications). -It also includes a web interface for ClusterCockpit based on the components implemented in -[cc-frontend](https://github.com/ClusterCockpit/cc-frontend), which is included as a git submodule. +It also includes a web interface for ClusterCockpit. This implementation replaces the previous PHP Symfony based ClusterCockpit web-interface. +[Here](https://github.com/ClusterCockpit/ClusterCockpit/wiki/Why-we-switched-from-PHP-Symfony-to-a-Golang-based-solution) is a discussion of the reasons why we switched from PHP Symfony to a Golang based solution. ## Overview This is a golang web backend for the ClusterCockpit job-specific performance monitoring framework. It provides a REST API for integrating ClusterCockpit with a HPC cluster batch system and external analysis scripts. Data exchange between the web frontend and backend is based on a GraphQL API. 
-The web frontend is also served by the backend using [Svelte](https://svelte.dev/) components implemented in [cc-frontend](https://github.com/ClusterCockpit/cc-frontend). +The web frontend is also served by the backend using [Svelte](https://svelte.dev/) components. Layout and styling are based on [Bootstrap 5](https://getbootstrap.com/) using [Bootstrap Icons](https://icons.getbootstrap.com/). -The backend uses [SQLite 3](https://sqlite.org/) as relational SQL database by default. It can optionally use a MySQL/MariaDB database server. -Finished batch jobs are stored in a so called job archive following [this specification](https://github.com/ClusterCockpit/cc-specifications/tree/master/job-archive). +The backend uses [SQLite 3](https://sqlite.org/) as relational SQL database by default. +It can optionally use a MySQL/MariaDB database server. +Finished batch jobs are stored in a file-based job archive following [this specification](https://github.com/ClusterCockpit/cc-specifications/tree/master/job-archive). The backend supports authentication using local accounts or an external LDAP directory. -Authorization for APIs is implemented using [JWT](https://jwt.io/) tokens created with public/private key encryption. +Authorization for APIs is implemented using [JWT](https://jwt.io/) tokens created with public/private key encryption. + +You find more detailed information here: +* `./configs/README.md`: Infos about configuration and setup of cc-backend. +* `./init/README.md`: Infos on how to set up cc-backend as a systemd service on Linux. +* `./tools/README.md`: Infos on the JWT authorization token workflows in ClusterCockpit. ## Demo Setup @@ -25,30 +31,28 @@ We provide a shell script that downloads demo data and automatically builds and You need `wget`, `go`, and `yarn` in your path to start the demo. The demo will download 32MB of data (223MB on disk). 
```sh -# The frontend is a submodule, so use `--recursive` -git clone --recursive git@github.com:ClusterCockpit/cc-backend.git +git clone git@github.com:ClusterCockpit/cc-backend.git ./startDemo.sh ``` -You can access the web interface at http://localhost:8080. Credentials for login: `demo:AdminDev`. Please note that some views do not work without a metric backend (e.g., the Systems view). +You can access the web interface at http://localhost:8080. +Credentials for login: `demo:AdminDev`. +Please note that some views do not work without a metric backend (e.g., the Systems and Status view). ## Howto Build and Run ```sh -# The frontend is a submodule, so use `--recursive` -git clone --recursive git@github.com:ClusterCockpit/cc-backend.git +git clone git@github.com:ClusterCockpit/cc-backend.git # Prepare frontend -cd ./cc-backend/frontend +cd ./cc-backend/web/frontend yarn install yarn build cd .. go get -go build +go build cmd/cc-backend -# The job-archive directory must be organised the same way as -# as for the regular ClusterCockpit. ln -s ./var/job-archive # Create empty job.db (Will be initialized as SQLite3 database) @@ -56,6 +60,7 @@ touch ./var/job.db # EDIT THE .env FILE BEFORE YOU DEPLOY (Change the secrets)! # If authentication is disabled, it can be empty. +cp configs/env-template.txt .env vim ./.env # This will first initialize the job.db database by traversing all @@ -71,57 +76,40 @@ vim ./.env ``` ### Run as systemd daemon -In order to run this program as a daemon, look at [utils/systemd/README.md](./utils/systemd/README.md) where a systemd unit file and more explanation is provided. +In order to run this program as a daemon, cc-backend ships with an [example systemd setup](./init/README.md). ## Configuration and Setup -cc-backend can be used as a local web-interface for an existing job archive or -as a general web-interface server for a live ClusterCockpit Monitoring -framework. 
+cc-backend can be used as a local web-interface for an existing job archive or as a general web-interface server for a live ClusterCockpit Monitoring framework. -Create your job-archive according to [this specification](https://github.com/ClusterCockpit/cc-specifications). At least -one cluster with a valid `cluster.json` file is required. Having no jobs in the -job-archive at all is fine. You may use the sample job-archive available for -download [in cc-docker/develop](https://github.com/ClusterCockpit/cc-docker/tree/develop). +Create your job-archive according to [this specification](https://github.com/ClusterCockpit/cc-specifications/tree/master/job-archive). +At least one cluster with a valid `cluster.json` file is required. +Having no jobs in the job-archive at all is fine. ### Configuration A config file in the JSON format can be provided using `--config` to override the defaults. -Look at the beginning of `server.go` for the defaults and consequently the format of the configuration file. +You find documentation of all supported configuration and command line options [here](./configs/README.md). ### Update GraphQL schema -This project uses [gqlgen](https://github.com/99designs/gqlgen) for the GraphQL -API. The schema can be found in `./graph/schema.graphqls`. After changing it, -you need to run `go run github.com/99designs/gqlgen` which will update -`graph/model`. In case new resolvers are needed, they will be inserted into -`graph/schema.resolvers.go`, where you will need to implement them. +This project uses [gqlgen](https://github.com/99designs/gqlgen) for the GraphQL API. +The schema can be found in `./api/schema.graphqls`. +After changing it, you need to run `go run github.com/99designs/gqlgen` which will update `./internal/graph/model`. +In case new resolvers are needed, they will be inserted into `./internal/graph/schema.resolvers.go`, where you will need to implement them. ## Project Structure -- `api/` contains the REST API. 
The routes defined there should be called whenever a job starts/stops. The API is documented in the OpenAPI 3.0 format in [./api/openapi.yaml](./api/openapi.yaml). -- `auth/` is where the (optional) authentication middleware can be found, which adds the currently authenticated user to the request context. The `user` table is created and managed here as well. - - `auth/ldap.go` contains everything to do with automatically syncing and authenticating users form an LDAP server. -- `config` handles the `cluster.json` files and the user-specific configurations (changeable via GraphQL) for the Web-UI such as the selected metrics etc. -- `frontend` is a submodule, this is where the Svelte based frontend resides. -- `graph/generated` should *not* be touched. -- `graph/model` contains all types defined in the GraphQL schema not manually defined in `schema/`. Manually defined types have to be listed in `gqlgen.yml`. -- `graph/schema.graphqls` contains the GraphQL schema. Whenever you change it, you should call `go run github.com/99designs/gqlgen`. -- `graph/` contains the resolvers and handlers for the GraphQL API. Function signatures in `graph/schema.resolvers.go` are automatically generated. -- `metricdata/` handles getting and archiving the metrics associated with a job. - - `metricdata/metricdata.go` defines the interface `MetricDataRepository` and provides functions to the GraphQL and REST API for accessing a jobs metrics which automatically take care of selecting the source for the metrics (the archive or one of the metric data repositories). - - `metricdata/archive.go` provides functions for fetching metrics from the job-archive and archiving a job to the job-archive. 
- - `metricdata/cc-metric-store.go` contains an implementation of the `MetricDataRepository` interface which can fetch data from an [cc-metric-store](https://github.com/ClusterCockpit/cc-metric-store) - - `metricdata/influxdb-v2` contains an implementation of the `MetricDataRepository` interface which can fetch data from an InfluxDBv2 database. It is currently disabled and out of date and can not be used as of writing. -- `repository/` all SQL related stuff. -- `repository/init.go` initializes the `job` (and `tag` and `jobtag`) table if the `--init-db` flag is provided. Not only is the table created in the correct schema, but the job-archive is traversed as well. -- `schema/` contains type definitions used all over this project extracted in this package as Go disallows cyclic dependencies between packages. - - `schema/float.go` contains a custom `float64` type which overwrites JSON and GraphQL Marshaling/Unmarshalling. This is needed because a regular optional `Float` in GraphQL will map to `*float64` types in Go. Wrapping every single metric value in an allocation would be a lot of overhead. - - `schema/job.go` provides the types representing a job and its resources. Those can be used as type for a `meta.json` file and/or a row in the `job` table. -- `templates/` is mostly full of HTML templates and a small helper go module. -- `utils/systemd` describes how to deploy/install this as a systemd service -- `test/` rudimentery tests. -- `utils/` -- `.env` *must* be changed before you deploy this. It contains a Base64 encoded [Ed25519](https://en.wikipedia.org/wiki/EdDSA) key-pair, the secret used for sessions and the password to the LDAP server if LDAP authentication is enabled. +- `api/` contains the API schema files for the REST and GraphQL APIs. The REST API is documented in the OpenAPI 3.0 format in [./api/openapi.yaml](./api/openapi.yaml). +- `cmd/cc-backend` contains `main.go` for the main application. 
+- `configs/` contains documentation about configuration and command line options and required environment variables. An example configuration file is provided. +- `init/` contains an example systemd setup for production use. +- `internal/` contains library source code that is not intended to be used by others. +- `pkg/` contains go packages that can also be used by other projects. +- `test/` Test apps and test data. +- `tools/` contains supporting tools for cc-backend. At the moment this is a small application to generate a compatible JWT keypair including a README about JWT setup in ClusterCockpit. +- `web/` Server side templates and frontend related files: + - `templates` Server side go templates + - `frontend` Svelte components and static assets for frontend UI - `gqlgen.yml` configures the behaviour and generation of [gqlgen](https://github.com/99designs/gqlgen). +- `startDemo.sh` is a shell script that sets up demo data, and builds and starts cc-backend. 
diff --git a/graph/schema.graphqls b/api/schema.graphqls similarity index 100% rename from graph/schema.graphqls rename to api/schema.graphqls diff --git a/server.go b/cmd/cc-backend/main.go similarity index 88% rename from server.go rename to cmd/cc-backend/main.go index 5138a1b..f635961 100644 --- a/server.go +++ b/cmd/cc-backend/main.go @@ -22,26 +22,25 @@ import ( "github.com/99designs/gqlgen/graphql/handler" "github.com/99designs/gqlgen/graphql/playground" - "github.com/ClusterCockpit/cc-backend/api" - "github.com/ClusterCockpit/cc-backend/auth" - "github.com/ClusterCockpit/cc-backend/config" - "github.com/ClusterCockpit/cc-backend/graph" - "github.com/ClusterCockpit/cc-backend/graph/generated" - "github.com/ClusterCockpit/cc-backend/log" - "github.com/ClusterCockpit/cc-backend/metricdata" - "github.com/ClusterCockpit/cc-backend/repository" - "github.com/ClusterCockpit/cc-backend/templates" + "github.com/ClusterCockpit/cc-backend/internal/api" + "github.com/ClusterCockpit/cc-backend/internal/auth" + "github.com/ClusterCockpit/cc-backend/internal/config" + "github.com/ClusterCockpit/cc-backend/internal/graph" + "github.com/ClusterCockpit/cc-backend/internal/graph/generated" + "github.com/ClusterCockpit/cc-backend/internal/metricdata" + "github.com/ClusterCockpit/cc-backend/internal/repository" + "github.com/ClusterCockpit/cc-backend/internal/routerConfig" + "github.com/ClusterCockpit/cc-backend/internal/runtimeEnv" + "github.com/ClusterCockpit/cc-backend/internal/templates" + "github.com/ClusterCockpit/cc-backend/pkg/log" "github.com/google/gops/agent" "github.com/gorilla/handlers" "github.com/gorilla/mux" - "github.com/jmoiron/sqlx" _ "github.com/go-sql-driver/mysql" _ "github.com/mattn/go-sqlite3" ) -var jobRepo *repository.JobRepository - // Format of the configuration (file). See below for the defaults. type ProgramConfig struct { // Address where the http (or https) server will listen on (for example: 'localhost:80'). 
@@ -70,7 +69,7 @@ type ProgramConfig struct { // do not write to the job-archive. DisableArchive bool `json:"disable-archive"` - // For LDAP Authentication and user syncronisation. + // For LDAP Authentication and user synchronisation. LdapConfig *auth.LdapConfig `json:"ldap"` // Specifies for how long a session or JWT shall be valid @@ -94,14 +93,14 @@ type ProgramConfig struct { // Where to store MachineState files MachineStateDir string `json:"machine-state-dir"` - // If not zero, automatically mark jobs as stopped running X seconds longer than theire walltime. + // If not zero, automatically mark jobs as stopped running X seconds longer than their walltime. StopJobsExceedingWalltime int `json:"stop-jobs-exceeding-walltime"` } var programConfig ProgramConfig = ProgramConfig{ Addr: ":8080", DisableAuthentication: false, - StaticFiles: "./frontend/public", + StaticFiles: "./web/frontend/public", DBDriver: "sqlite3", DB: "./var/job.db", JobArchive: "./var/job-archive", @@ -119,7 +118,7 @@ var programConfig ProgramConfig = ProgramConfig{ "plot_general_colorscheme": []string{"#00bfff", "#0000ff", "#ff00ff", "#ff0000", "#ff8000", "#ffff00", "#80ff00"}, "plot_general_lineWidth": 3, "plot_list_hideShortRunningJobs": 5 * 60, - "plot_list_jobsPerPage": 10, + "plot_list_jobsPerPage": 50, "plot_list_selectedMetrics": []string{"cpu_load", "ipc", "mem_used", "flops_any", "mem_bw"}, "plot_view_plotsPerRow": 3, "plot_view_showPolarplot": true, @@ -127,7 +126,7 @@ var programConfig ProgramConfig = ProgramConfig{ "plot_view_showStatTable": true, "system_view_selectedMetric": "cpu_load", }, - StopJobsExceedingWalltime: -1, + StopJobsExceedingWalltime: 0, } func main() { @@ -152,7 +151,7 @@ func main() { } } - if err := loadEnv("./.env"); err != nil && !os.IsNotExist(err) { + if err := runtimeEnv.LoadEnv("./.env"); err != nil && !os.IsNotExist(err) { log.Fatalf("parsing './.env' file failed: %s", err.Error()) } @@ -178,28 +177,8 @@ func main() { } var err error - var db *sqlx.DB - 
if programConfig.DBDriver == "sqlite3" { - db, err = sqlx.Open("sqlite3", fmt.Sprintf("%s?_foreign_keys=on", programConfig.DB)) - if err != nil { - log.Fatal(err) - } - - // sqlite does not multithread. Having more than one connection open would just mean - // waiting for locks. - db.SetMaxOpenConns(1) - } else if programConfig.DBDriver == "mysql" { - db, err = sqlx.Open("mysql", fmt.Sprintf("%s?multiStatements=true", programConfig.DB)) - if err != nil { - log.Fatal(err) - } - - db.SetConnMaxLifetime(time.Minute * 3) - db.SetMaxOpenConns(10) - db.SetMaxIdleConns(10) - } else { - log.Fatalf("unsupported database driver: %s", programConfig.DBDriver) - } + repository.Connect(programConfig.DBDriver, programConfig.DB) + db := repository.GetConnection() // Initialize sub-modules and handle all command line flags. // The order here is important! For example, the metricdata package @@ -215,7 +194,7 @@ func main() { authentication.JwtMaxAge = d } - if err := authentication.Init(db, programConfig.LdapConfig); err != nil { + if err := authentication.Init(db.DB, programConfig.LdapConfig); err != nil { log.Fatal(err) } @@ -257,7 +236,7 @@ func main() { log.Fatal("arguments --add-user and --del-user can only be used if authentication is enabled") } - if err := config.Init(db, !programConfig.DisableAuthentication, programConfig.UiDefaults, programConfig.JobArchive); err != nil { + if err := config.Init(db.DB, !programConfig.DisableAuthentication, programConfig.UiDefaults, programConfig.JobArchive); err != nil { log.Fatal(err) } @@ -266,15 +245,12 @@ func main() { } if flagReinitDB { - if err := repository.InitDB(db, programConfig.JobArchive); err != nil { + if err := repository.InitDB(db.DB, programConfig.JobArchive); err != nil { log.Fatal(err) } } - jobRepo = &repository.JobRepository{DB: db} - if err := jobRepo.Init(); err != nil { - log.Fatal(err) - } + jobRepo := repository.GetRepository() if flagImportJob != "" { if err := jobRepo.HandleImportFlag(flagImportJob); err != nil 
{ @@ -288,7 +264,7 @@ func main() { // Setup the http.Handler/Router used by the server - resolver := &graph.Resolver{DB: db, Repo: jobRepo} + resolver := &graph.Resolver{DB: db.DB, Repo: jobRepo} graphQLEndpoint := handler.NewDefaultServer(generated.NewExecutableSchema(generated.Config{Resolvers: resolver})) if os.Getenv("DEBUG") != "1" { // Having this handler means that a error message is returned via GraphQL instead of the connection simply beeing closed. @@ -394,7 +370,7 @@ func main() { }) // Mount all /monitoring/... and /api/... routes. - setupRoutes(secured, routes) + routerConfig.SetupRoutes(secured) api.MountRoutes(secured) r.PathPrefix("/").Handler(http.FileServer(http.Dir(programConfig.StaticFiles))) @@ -461,7 +437,7 @@ func main() { // Because this program will want to bind to a privileged port (like 80), the listener must // be established first, then the user can be changed, and after that, // the actuall http server can be started. - if err := dropPrivileges(); err != nil { + if err := runtimeEnv.DropPrivileges(programConfig.Group, programConfig.User); err != nil { log.Fatalf("error while changing user: %s", err.Error()) } @@ -479,7 +455,7 @@ func main() { go func() { defer wg.Done() <-sigs - systemdNotifiy(false, "shutting down") + runtimeEnv.SystemdNotifiy(false, "shutting down") // First shut down the server gracefully (waiting for all ongoing requests) server.Shutdown(context.Background()) @@ -503,7 +479,7 @@ func main() { if os.Getenv("GOGC") == "" { debug.SetGCPercent(25) } - systemdNotifiy(true, "running") + runtimeEnv.SystemdNotifiy(true, "running") wg.Wait() log.Print("Gracefull shutdown completed!") } diff --git a/configs/README.md b/configs/README.md new file mode 100644 index 0000000..633ec54 --- /dev/null +++ b/configs/README.md @@ -0,0 +1,56 @@ +## Intro + +cc-backend can be used without a configuration file. In this case the default +options documented below are used. 
To overwrite the defaults specify a JSON +config file location using the command line option `--config`. +All security relevant configuration, e.g., keys and passwords, is set using environment variables. It is supported to specify these by means of an `.env` file located in the project root. + +## Configuration Options +* `addr`: Type string. Address where the http (or https) server will listen on (for example: 'localhost:80'). Default `:8080`. +* `user`: Type string. Drop root permissions once .env was read and the port was taken. Only applicable if using privileged port. +* `group`: Type string. Drop root permissions once .env was read and the port was taken. Only applicable if using privileged port. +* `disable-authentication`: Type bool. Disable authentication (for everything: API, Web-UI, ...). Default `false`. +* `static-files`: Type string. Folder where static assets can be found, those will be served directly. Default `./web/frontend/public`. +* `db-driver`: Type string. 'sqlite3' or 'mysql' (mysql will work for mariadb as well). Default `sqlite3`. +* `db`: Type string. For sqlite3 a filename, for mysql a DSN in this format: https://github.com/go-sql-driver/mysql#dsn-data-source-name (Without query parameters!). Default: `./var/job.db`. +* `job-archive`: Type string. Path to the job-archive. Default: `./var/job-archive`. +* `disable-archive`: Type bool. Keep all metric data in the metric data repositories, do not write to the job-archive. Default `false`. +* `session-max-age`: Type string. Specifies for how long a session shall be valid as a string parsable by time.ParseDuration(). If 0 or empty, the session/token does not expire! Default `168h`. +* `jwt-max-age`: Type string. Specifies for how long a JWT token shall be valid as a string parsable by time.ParseDuration(). If 0 or empty, the session/token does not expire! Default `0`. +* `https-cert-file` and `https-key-file`: Type string. 
If both those options are not empty, use HTTPS using those certificates. +* `redirect-http-to`: Type string. If not the empty string and `addr` does not end in ":80", redirect every request incoming at port 80 to that url. +* `machine-state-dir`: Type string. Where to store MachineState files. TODO: Explain in more detail! +* `stop-jobs-exceeding-walltime`: Type int. If not zero, automatically mark jobs as stopped running X seconds longer than their walltime. Only applies if walltime is set for job. Default `0`. +* `ldap`: Type object. For LDAP Authentication and user synchronisation. Default `nil`. + - `url`: Type string. URL of LDAP directory server. + - `user_base`: Type string. Base DN of user tree root. + - `search_dn`: Type string. DN for authenticating LDAP admin account with general read rights. + - `user_bind`: Type string. Expression used to authenticate users via LDAP bind. Must contain `uid={username}`. + - `user_filter`: Type string. Filter to extract users for syncing. + - `sync_interval`: Type string. Interval used for syncing local user table with LDAP directory. Parsed using time.ParseDuration. + - `sync_del_old_users`: Type bool. Delete obsolete users in database. +* `ui-defaults`: Type object. Default configuration for ui views. If overwritten, all options must be provided! Most options can be overwritten by the user via the web interface. + - `analysis_view_histogramMetrics`: Type string array. Metrics to show as job count histograms in analysis view. Default `["flops_any", "mem_bw", "mem_used"]`. + - `analysis_view_scatterPlotMetrics`: Type array of string array. Initial scatter plot configuration in analysis view. Default `[["flops_any", "mem_bw"], ["flops_any", "cpu_load"], ["cpu_load", "mem_bw"]]`. + - `job_view_nodestats_selectedMetrics`: Type string array. Initial metrics shown in node statistics table of single job view. Default `["flops_any", "mem_bw", "mem_used"]`. + - `job_view_polarPlotMetrics`: Type string array. 
Metrics shown in polar plot of single job view. Default `["flops_any", "mem_bw", "mem_used", "net_bw", "file_bw"]`. + - `job_view_selectedMetrics`: Type string array. ??. Default `["flops_any", "mem_bw", "mem_used"]`. + - `plot_general_colorBackground`: Type bool. Color plot background according to job average threshold limits. Default `true`. + - `plot_general_colorscheme`: Type string array. Initial color scheme. Default `"#00bfff", "#0000ff", "#ff00ff", "#ff0000", "#ff8000", "#ffff00", "#80ff00"`. + - `plot_general_lineWidth`: Type int. Initial linewidth. Default `3`. + - `plot_list_hideShortRunningJobs`: Type int. Do not show running jobs shorter than X seconds. Default `300`. + - `plot_list_jobsPerPage`: Type int. Jobs shown per page in job lists. Default `50`. + - `plot_list_selectedMetrics`: Type string array. Initial metric plots shown in jobs lists. Default `"cpu_load", "ipc", "mem_used", "flops_any", "mem_bw"`. + - `plot_view_plotsPerRow`: Type int. Number of plots per row in single job view. Default `3`. + - `plot_view_showPolarplot`: Type bool. Option to toggle polar plot in single job view. Default `true`. + - `plot_view_showRoofline`: Type bool. Option to toggle roofline plot in single job view. Default `true`. + - `plot_view_showStatTable`: Type bool. Option to toggle the node statistic table in single job view. Default `true`. + - `system_view_selectedMetric`: Type string. Initial metric shown in system view. Default `cpu_load`. + +## Environment Variables + +An example env file is found in this directory. Copy it to `.env` in the project root and adapt it for your needs. + +* `JWT_PUBLIC_KEY` and `JWT_PRIVATE_KEY`: Base64 encoded Ed25519 keys used for JSON Web Token (JWT) authentication. TODO: Details! You can generate your own keypair using `go run utils/gen-keypair.go`. +* `SESSION_KEY`: Some random bytes used as secret for cookie-based sessions. +* `LDAP_ADMIN_PASSWORD`: The LDAP admin user password (optional). 
diff --git a/configs/config.json b/configs/config.json new file mode 100644 index 0000000..be90151 --- /dev/null +++ b/configs/config.json @@ -0,0 +1,14 @@ +{ + "addr": "0.0.0.0:443", + "ldap": { + "url": "ldaps://hpcldap.rrze.uni-erlangen.de", + "user_base": "ou=people,ou=hpc,dc=rrze,dc=uni-erlangen,dc=de", + "search_dn": "cn=hpcmonitoring,ou=roadm,ou=profile,ou=hpc,dc=rrze,dc=uni-erlangen,dc=de", + "user_bind": "uid={username},ou=people,ou=hpc,dc=rrze,dc=uni-erlangen,dc=de", + "user_filter": "(&(objectclass=posixAccount)(uid=*))" + }, + "https-cert-file": "/etc/letsencrypt/live/monitoring.nhr.fau.de/fullchain.pem", + "https-key-file": "/etc/letsencrypt/live/monitoring.nhr.fau.de/privkey.pem", + "user": "clustercockpit", + "group": "clustercockpit" +} diff --git a/.env b/configs/env-template.txt similarity index 100% rename from .env rename to configs/env-template.txt diff --git a/frontend b/frontend deleted file mode 160000 index 94ef11a..0000000 --- a/frontend +++ /dev/null @@ -1 +0,0 @@ -Subproject commit 94ef11aa9fc3c194f1df497e3e06c60a7125883d diff --git a/go.mod b/go.mod index c149ea3..6e963db 100644 --- a/go.mod +++ b/go.mod @@ -12,7 +12,6 @@ require ( github.com/gorilla/handlers v1.5.1 github.com/gorilla/mux v1.8.0 github.com/gorilla/sessions v1.2.1 - github.com/iamlouk/lrucache v0.2.1 github.com/influxdata/influxdb-client-go/v2 v2.8.1 github.com/jmoiron/sqlx v1.3.4 github.com/mattn/go-sqlite3 v1.14.12 diff --git a/gqlgen.yml b/gqlgen.yml index f02ce61..830edd2 100644 --- a/gqlgen.yml +++ b/gqlgen.yml @@ -1,10 +1,10 @@ # Where are all the schema files located? globs are supported eg src/**/*.graphqls schema: - - graph/*.graphqls + - api/*.graphqls # Where should the generated server code go? exec: - filename: graph/generated/generated.go + filename: internal/graph/generated/generated.go package: generated # Uncomment to enable federation @@ -14,7 +14,7 @@ exec: # Where should any generated models go? 
model: - filename: graph/model/models_gen.go + filename: internal/graph/model/models_gen.go package: model # Where should the resolver implementations go? @@ -75,5 +75,3 @@ models: Series: { model: "github.com/ClusterCockpit/cc-backend/schema.Series" } MetricStatistics: { model: "github.com/ClusterCockpit/cc-backend/schema.MetricStatistics" } StatsSeries: { model: "github.com/ClusterCockpit/cc-backend/schema.StatsSeries" } - - diff --git a/init/README.md b/init/README.md new file mode 100644 index 0000000..e867798 --- /dev/null +++ b/init/README.md @@ -0,0 +1,38 @@ +# How to run this as a systemd daemon + +The files in this directory assume that you install ClusterCockpit to `/opt/monitoring`. +Of course you can choose any other location, but make sure to replace all paths that begin with `/opt/monitoring` in the `clustercockpit.service` file! + +If you have not installed [yarn](https://yarnpkg.com/getting-started/install) and [go](https://go.dev/doc/install) already, do that (Golang is available in most package managers). +It is recommended and easy to install the most recent stable version of Golang as every version also improves the Golang standard library. + +The `config.json` can have the optional fields *user* and *group*. +If provided, the application will call [setuid](https://man7.org/linux/man-pages/man2/setuid.2.html) and [setgid](https://man7.org/linux/man-pages/man2/setgid.2.html) after having read the config file and having bound to a TCP port (so that it can take a privileged port), but before it starts accepting any connections. +This is good for security, but means that the directories `web/frontend/public`, `var/` and `web/templates/` must be readable by that user and `var/` writable as well (all paths relative to the repo's root). +The `.env` and `config.json` files might contain secrets and should not be readable by that user. +If those files are changed, the server has to be restarted. 
+ +```sh +# 1.: Clone this repository to /opt/monitoring +git clone git@github.com:ClusterCockpit/cc-backend.git /opt/monitoring + +# 2.: Install all dependencies and build everything +cd /opt/monitoring +go get && go build cmd/cc-backend && (cd ./web/frontend && yarn install && yarn build) + +# 3.: Modify the `./config.json` and env-template.txt files from the configs directory to your liking and put them in the repo root +cp ./configs/config.json ./config.json +cp ./configs/env-template.txt ./.env +vim ./config.json # do your thing... +vim ./.env # do your thing... + +# 4.: Add the systemd service unit file (in case /opt/ is mounted on another file system it may be better to copy the file to /etc) +sudo ln -s /opt/monitoring/init/clustercockpit.service /etc/systemd/system/clustercockpit.service + +# 5.: Enable and start the server +sudo systemctl enable clustercockpit.service # optional (if done, (re-)starts automatically) +sudo systemctl start clustercockpit.service + +# Check what's going on: +sudo journalctl -u clustercockpit.service +``` diff --git a/init/clustercockpit.service b/init/clustercockpit.service new file mode 100644 index 0000000..53fc429 --- /dev/null +++ b/init/clustercockpit.service @@ -0,0 +1,18 @@ +[Unit] +Description=ClusterCockpit Web Server (Go edition) +Documentation=https://github.com/ClusterCockpit/cc-backend +Wants=network-online.target +After=network-online.target +After=mariadb.service mysql.service + +[Service] +WorkingDirectory=/opt/monitoring/cc-backend +Type=notify +NotifyAccess=all +Restart=on-failure +RestartSec=30 +TimeoutStopSec=100 +ExecStart=/opt/monitoring/cc-backend/cc-backend --config ./config.json + +[Install] +WantedBy=multi-user.target diff --git a/api/rest.go b/internal/api/rest.go similarity index 97% rename from api/rest.go rename to internal/api/rest.go index 83a71f3..b561b18 100644 --- a/api/rest.go +++ b/internal/api/rest.go @@ -16,14 +16,14 @@ import ( "sync" "time" - "github.com/ClusterCockpit/cc-backend/auth" - 
"github.com/ClusterCockpit/cc-backend/config" - "github.com/ClusterCockpit/cc-backend/graph" - "github.com/ClusterCockpit/cc-backend/graph/model" - "github.com/ClusterCockpit/cc-backend/log" - "github.com/ClusterCockpit/cc-backend/metricdata" - "github.com/ClusterCockpit/cc-backend/repository" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/internal/auth" + "github.com/ClusterCockpit/cc-backend/internal/config" + "github.com/ClusterCockpit/cc-backend/internal/graph" + "github.com/ClusterCockpit/cc-backend/internal/graph/model" + "github.com/ClusterCockpit/cc-backend/internal/metricdata" + "github.com/ClusterCockpit/cc-backend/internal/repository" + "github.com/ClusterCockpit/cc-backend/pkg/log" + "github.com/ClusterCockpit/cc-backend/pkg/schema" "github.com/gorilla/mux" ) diff --git a/auth/auth.go b/internal/auth/auth.go similarity index 99% rename from auth/auth.go rename to internal/auth/auth.go index 0a99976..3fa6f1d 100644 --- a/auth/auth.go +++ b/internal/auth/auth.go @@ -14,8 +14,8 @@ import ( "strings" "time" - "github.com/ClusterCockpit/cc-backend/graph/model" - "github.com/ClusterCockpit/cc-backend/log" + "github.com/ClusterCockpit/cc-backend/internal/graph/model" + "github.com/ClusterCockpit/cc-backend/pkg/log" sq "github.com/Masterminds/squirrel" "github.com/golang-jwt/jwt/v4" "github.com/gorilla/sessions" diff --git a/auth/ldap.go b/internal/auth/ldap.go similarity index 98% rename from auth/ldap.go rename to internal/auth/ldap.go index 4c5e0d5..8a4f40b 100644 --- a/auth/ldap.go +++ b/internal/auth/ldap.go @@ -6,8 +6,7 @@ import ( "strings" "time" - "github.com/ClusterCockpit/cc-backend/log" - + "github.com/ClusterCockpit/cc-backend/pkg/log" "github.com/go-ldap/ldap/v3" ) diff --git a/config/config.go b/internal/config/config.go similarity index 96% rename from config/config.go rename to internal/config/config.go index 76adeb0..19d2ec6 100644 --- a/config/config.go +++ b/internal/config/config.go @@ -11,10 +11,10 
@@ import ( "sync" "time" - "github.com/ClusterCockpit/cc-backend/auth" - "github.com/ClusterCockpit/cc-backend/graph/model" - "github.com/ClusterCockpit/cc-backend/schema" - "github.com/iamlouk/lrucache" + "github.com/ClusterCockpit/cc-backend/internal/auth" + "github.com/ClusterCockpit/cc-backend/internal/graph/model" + "github.com/ClusterCockpit/cc-backend/pkg/lrucache" + "github.com/ClusterCockpit/cc-backend/pkg/schema" "github.com/jmoiron/sqlx" ) diff --git a/config/nodelist.go b/internal/config/nodelist.go similarity index 98% rename from config/nodelist.go rename to internal/config/nodelist.go index fb823df..715f55a 100644 --- a/config/nodelist.go +++ b/internal/config/nodelist.go @@ -5,7 +5,7 @@ import ( "strconv" "strings" - "github.com/ClusterCockpit/cc-backend/log" + "github.com/ClusterCockpit/cc-backend/pkg/log" ) type NodeList [][]interface { diff --git a/config/nodelist_test.go b/internal/config/nodelist_test.go similarity index 100% rename from config/nodelist_test.go rename to internal/config/nodelist_test.go diff --git a/graph/generated/generated.go b/internal/graph/generated/generated.go similarity index 99% rename from graph/generated/generated.go rename to internal/graph/generated/generated.go index e1e5db4..3c62d5d 100644 --- a/graph/generated/generated.go +++ b/internal/graph/generated/generated.go @@ -13,8 +13,8 @@ import ( "github.com/99designs/gqlgen/graphql" "github.com/99designs/gqlgen/graphql/introspection" - "github.com/ClusterCockpit/cc-backend/graph/model" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/internal/graph/model" + "github.com/ClusterCockpit/cc-backend/pkg/schema" gqlparser "github.com/vektah/gqlparser/v2" "github.com/vektah/gqlparser/v2/ast" ) diff --git a/graph/model/models.go b/internal/graph/model/models.go similarity index 100% rename from graph/model/models.go rename to internal/graph/model/models.go diff --git a/graph/model/models_gen.go b/internal/graph/model/models_gen.go 
similarity index 99% rename from graph/model/models_gen.go rename to internal/graph/model/models_gen.go index ca8186e..91263ef 100644 --- a/graph/model/models_gen.go +++ b/internal/graph/model/models_gen.go @@ -8,7 +8,7 @@ import ( "strconv" "time" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/pkg/schema" ) type Accelerator struct { diff --git a/graph/resolver.go b/internal/graph/resolver.go similarity index 81% rename from graph/resolver.go rename to internal/graph/resolver.go index ce08e33..dd7bc3b 100644 --- a/graph/resolver.go +++ b/internal/graph/resolver.go @@ -1,7 +1,7 @@ package graph import ( - "github.com/ClusterCockpit/cc-backend/repository" + "github.com/ClusterCockpit/cc-backend/internal/repository" "github.com/jmoiron/sqlx" ) diff --git a/graph/schema.resolvers.go b/internal/graph/schema.resolvers.go similarity index 95% rename from graph/schema.resolvers.go rename to internal/graph/schema.resolvers.go index 46b4d7f..d8e94b9 100644 --- a/graph/schema.resolvers.go +++ b/internal/graph/schema.resolvers.go @@ -10,12 +10,12 @@ import ( "strconv" "time" - "github.com/ClusterCockpit/cc-backend/auth" - "github.com/ClusterCockpit/cc-backend/config" - "github.com/ClusterCockpit/cc-backend/graph/generated" - "github.com/ClusterCockpit/cc-backend/graph/model" - "github.com/ClusterCockpit/cc-backend/metricdata" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/internal/auth" + "github.com/ClusterCockpit/cc-backend/internal/config" + "github.com/ClusterCockpit/cc-backend/internal/graph/generated" + "github.com/ClusterCockpit/cc-backend/internal/graph/model" + "github.com/ClusterCockpit/cc-backend/internal/metricdata" + "github.com/ClusterCockpit/cc-backend/pkg/schema" ) func (r *clusterResolver) Partitions(ctx context.Context, obj *model.Cluster) ([]string, error) { diff --git a/graph/stats.go b/internal/graph/stats.go similarity index 96% rename from graph/stats.go rename to 
internal/graph/stats.go index 52c8443..c3d90c9 100644 --- a/graph/stats.go +++ b/internal/graph/stats.go @@ -9,11 +9,11 @@ import ( "time" "github.com/99designs/gqlgen/graphql" - "github.com/ClusterCockpit/cc-backend/config" - "github.com/ClusterCockpit/cc-backend/graph/model" - "github.com/ClusterCockpit/cc-backend/metricdata" - "github.com/ClusterCockpit/cc-backend/repository" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/internal/config" + "github.com/ClusterCockpit/cc-backend/internal/graph/model" + "github.com/ClusterCockpit/cc-backend/internal/metricdata" + "github.com/ClusterCockpit/cc-backend/internal/repository" + "github.com/ClusterCockpit/cc-backend/pkg/schema" sq "github.com/Masterminds/squirrel" ) diff --git a/metricdata/archive.go b/internal/metricdata/archive.go similarity index 98% rename from metricdata/archive.go rename to internal/metricdata/archive.go index 80271f0..80b5298 100644 --- a/metricdata/archive.go +++ b/internal/metricdata/archive.go @@ -13,8 +13,8 @@ import ( "strconv" "time" - "github.com/ClusterCockpit/cc-backend/config" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/internal/config" + "github.com/ClusterCockpit/cc-backend/pkg/schema" ) // For a given job, return the path of the `data.json`/`meta.json` file. 
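The renamed `archive.go` keeps the helper described in the comment above, which maps a job to its `data.json`/`meta.json` files. The helper body is not part of this hunk; the sketch below illustrates the bucketing scheme from the job-archive specification (jobs grouped by `jobID/1000` and zero-padded `jobID%1000`, then start time). Function and parameter names here are illustrative, not the actual cc-backend API.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// archivePath sketches the job-archive directory layout:
// <base>/<cluster>/<jobID/1000>/<jobID%1000, zero-padded>/<startTime>/<file>.
// This is a hypothetical stand-in for the helper in archive.go.
func archivePath(base, cluster string, jobID, startTime int64, file string) string {
	lvl1 := fmt.Sprintf("%d", jobID/1000)
	lvl2 := fmt.Sprintf("%03d", jobID%1000) // pad to three digits, e.g. 7 -> "007"
	return filepath.Join(base, cluster, lvl1, lvl2, fmt.Sprintf("%d", startTime), file)
}

func main() {
	fmt.Println(archivePath("/data/job-archive", "emmy", 1403244, 1608923076, "meta.json"))
}
```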
diff --git a/metricdata/cc-metric-store.go b/internal/metricdata/cc-metric-store.go similarity index 99% rename from metricdata/cc-metric-store.go rename to internal/metricdata/cc-metric-store.go index e26b72f..8deab14 100644 --- a/metricdata/cc-metric-store.go +++ b/internal/metricdata/cc-metric-store.go @@ -11,8 +11,8 @@ import ( "strings" "time" - "github.com/ClusterCockpit/cc-backend/config" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/internal/config" + "github.com/ClusterCockpit/cc-backend/pkg/schema" ) type CCMetricStoreConfig struct { diff --git a/internal/metricdata/influxdb-v2.go b/internal/metricdata/influxdb-v2.go new file mode 100644 index 0000000..6a47bbd --- /dev/null +++ b/internal/metricdata/influxdb-v2.go @@ -0,0 +1,308 @@ +package metricdata + +import ( + "context" + "crypto/tls" + "encoding/json" + "errors" + "fmt" + "log" + "strings" + "time" + + "github.com/ClusterCockpit/cc-backend/internal/config" + "github.com/ClusterCockpit/cc-backend/pkg/schema" + influxdb2 "github.com/influxdata/influxdb-client-go/v2" + influxdb2Api "github.com/influxdata/influxdb-client-go/v2/api" +) + +type InfluxDBv2DataRepositoryConfig struct { + Url string `json:"url"` + Token string `json:"token"` + Bucket string `json:"bucket"` + Org string `json:"org"` + SkipTls bool `json:"skiptls"` +} + +type InfluxDBv2DataRepository struct { + client influxdb2.Client + queryClient influxdb2Api.QueryAPI + bucket, measurement string +} + +func (idb *InfluxDBv2DataRepository) Init(rawConfig json.RawMessage) error { + var config InfluxDBv2DataRepositoryConfig + if err := json.Unmarshal(rawConfig, &config); err != nil { + return err + } + + idb.client = influxdb2.NewClientWithOptions(config.Url, config.Token, influxdb2.DefaultOptions().SetTLSConfig(&tls.Config{InsecureSkipVerify: config.SkipTls})) + idb.queryClient = idb.client.QueryAPI(config.Org) + idb.bucket = config.Bucket + + return nil +} + +func (idb *InfluxDBv2DataRepository) 
formatTime(t time.Time) string { + return t.Format(time.RFC3339) // Like “2006-01-02T15:04:05Z07:00” +} + +func (idb *InfluxDBv2DataRepository) epochToTime(epoch int64) time.Time { + return time.Unix(epoch, 0) +} + +func (idb *InfluxDBv2DataRepository) LoadData(job *schema.Job, metrics []string, scopes []schema.MetricScope, ctx context.Context) (schema.JobData, error) { + + measurementsConds := make([]string, 0, len(metrics)) + for _, m := range metrics { + measurementsConds = append(measurementsConds, fmt.Sprintf(`r["_measurement"] == "%s"`, m)) + } + measurementsCond := strings.Join(measurementsConds, " or ") + + hostsConds := make([]string, 0, len(job.Resources)) + for _, h := range job.Resources { + if h.HWThreads != nil || h.Accelerators != nil { + // TODO + return nil, errors.New("the InfluxDB metric data repository does not yet support HWThreads or Accelerators") + } + hostsConds = append(hostsConds, fmt.Sprintf(`r["hostname"] == "%s"`, h.Hostname)) + } + hostsCond := strings.Join(hostsConds, " or ") + + jobData := make(schema.JobData) // Empty Schema: map[FIELD]map[SCOPE]<*JobMetric>METRIC + // Requested Scopes + for _, scope := range scopes { + query := "" + switch scope { + case "node": + // Get Finest Granularity, Group By Measurement and Hostname (== Metric / Node), Calculate Mean for 60s windows + // log.Println("Note: Scope 'node' requested. ") + query = fmt.Sprintf(` + from(bucket: "%s") + |> range(start: %s, stop: %s) + |> filter(fn: (r) => (%s) and (%s) ) + |> drop(columns: ["_start", "_stop"]) + |> group(columns: ["hostname", "_measurement"]) + |> aggregateWindow(every: 60s, fn: mean) + |> drop(columns: ["_time"])`, + idb.bucket, + idb.formatTime(job.StartTime), idb.formatTime(idb.epochToTime(job.StartTimeUnix+int64(job.Duration)+int64(1))), + measurementsCond, hostsCond) + case "socket": + log.Println("Note: Scope 'socket' requested, but not yet supported: Will return 'node' scope only. 
") + continue + case "core": + log.Println("Note: Scope 'core' requested, but not yet supported: Will return 'node' scope only. ") + continue + // Get Finest Granularity only, Set NULL to 0.0 + // query = fmt.Sprintf(` + // from(bucket: "%s") + // |> range(start: %s, stop: %s) + // |> filter(fn: (r) => %s ) + // |> filter(fn: (r) => %s ) + // |> drop(columns: ["_start", "_stop", "cluster"]) + // |> map(fn: (r) => (if exists r._value then {r with _value: r._value} else {r with _value: 0.0}))`, + // idb.bucket, + // idb.formatTime(job.StartTime), idb.formatTime(idb.epochToTime(job.StartTimeUnix + int64(job.Duration) + int64(1) )), + // measurementsCond, hostsCond) + default: + log.Println("Note: Unknown Scope requested: Will return 'node' scope. ") + continue + // return nil, errors.New("the InfluxDB metric data repository does not yet support other scopes than 'node'") + } + + rows, err := idb.queryClient.Query(ctx, query) + if err != nil { + return nil, err + } + + // Init Metrics: Only Node level now -> TODO: Matching /check on scope level ... + for _, metric := range metrics { + jobMetric, ok := jobData[metric] + if !ok { + mc := config.GetMetricConfig(job.Cluster, metric) + jobMetric = map[schema.MetricScope]*schema.JobMetric{ + scope: { // uses scope var from above! 
+ Unit: mc.Unit, + Scope: scope, + Timestep: mc.Timestep, + Series: make([]schema.Series, 0, len(job.Resources)), + StatisticsSeries: nil, // Should be: &schema.StatsSeries{}, + }, + } + } + jobData[metric] = jobMetric + } + + // Process Result: Time-Data + field, host, hostSeries := "", "", schema.Series{} + // typeId := 0 + switch scope { + case "node": + for rows.Next() { + row := rows.Record() + if host == "" || host != row.ValueByKey("hostname").(string) || rows.TableChanged() { + if host != "" { + // Append Series before reset + jobData[field][scope].Series = append(jobData[field][scope].Series, hostSeries) + } + field, host = row.Measurement(), row.ValueByKey("hostname").(string) + hostSeries = schema.Series{ + Hostname: host, + Statistics: nil, + Data: make([]schema.Float, 0), + } + } + val, ok := row.Value().(float64) + if ok { + hostSeries.Data = append(hostSeries.Data, schema.Float(val)) + } else { + hostSeries.Data = append(hostSeries.Data, schema.Float(0)) + } + } + case "socket": + continue + case "core": + continue + // Include Series.Id in hostSeries + // for rows.Next() { + // row := rows.Record() + // if ( host == "" || host != row.ValueByKey("hostname").(string) || typeId != row.ValueByKey("type-id").(int) || rows.TableChanged() ) { + // if ( host != "" ) { + // // Append Series before reset + // jobData[field][scope].Series = append(jobData[field][scope].Series, hostSeries) + // } + // field, host, typeId = row.Measurement(), row.ValueByKey("hostname").(string), row.ValueByKey("type-id").(int) + // hostSeries = schema.Series{ + // Hostname: host, + // Id: &typeId, + // Statistics: nil, + // Data: make([]schema.Float, 0), + // } + // } + // val := row.Value().(float64) + // hostSeries.Data = append(hostSeries.Data, schema.Float(val)) + // } + default: + continue + // return nil, errors.New("the InfluxDB metric data repository does not yet support other scopes than 'node, core'") + } + // Append last Series + jobData[field][scope].Series = 
append(jobData[field][scope].Series, hostSeries) + } + + // Get Stats + stats, err := idb.LoadStats(job, metrics, ctx) + if err != nil { + return nil, err + } + + for _, scope := range scopes { + if scope == "node" { // No 'socket/core' support yet + for metric, nodes := range stats { + // log.Println(fmt.Sprintf("<< Add Stats for : Field %s >>", metric)) + for node, stats := range nodes { + // log.Println(fmt.Sprintf("<< Add Stats for : Host %s : Min %.2f, Max %.2f, Avg %.2f >>", node, stats.Min, stats.Max, stats.Avg )) + for index := range jobData[metric][scope].Series { + // log.Println(fmt.Sprintf("<< Try to add Stats to Series in Position %d >>", index)) + if jobData[metric][scope].Series[index].Hostname == node { + // log.Println(fmt.Sprintf("<< Match for Series in Position %d : Host %s >>", index, jobData[metric][scope].Series[index].Hostname)) + jobData[metric][scope].Series[index].Statistics = &schema.MetricStatistics{Avg: stats.Avg, Min: stats.Min, Max: stats.Max} + // log.Println(fmt.Sprintf("<< Result Inner: Min %.2f, Max %.2f, Avg %.2f >>", jobData[metric][scope].Series[index].Statistics.Min, jobData[metric][scope].Series[index].Statistics.Max, jobData[metric][scope].Series[index].Statistics.Avg)) + } + } + } + } + } + } + + // DEBUG: + // for _, scope := range scopes { + // for _, met := range metrics { + // for _, series := range jobData[met][scope].Series { + // log.Println(fmt.Sprintf("<< Result: %d data points for metric %s on %s with scope %s, Stats: Min %.2f, Max %.2f, Avg %.2f >>", + // len(series.Data), met, series.Hostname, scope, + // series.Statistics.Min, series.Statistics.Max, series.Statistics.Avg)) + // } + // } + // } + + return jobData, nil +} + +func (idb *InfluxDBv2DataRepository) LoadStats(job *schema.Job, metrics []string, ctx context.Context) (map[string]map[string]schema.MetricStatistics, error) { + + stats := map[string]map[string]schema.MetricStatistics{} + + hostsConds := make([]string, 0, len(job.Resources)) + for _, h := 
range job.Resources { + if h.HWThreads != nil || h.Accelerators != nil { + // TODO + return nil, errors.New("the InfluxDB metric data repository does not yet support HWThreads or Accelerators") + } + hostsConds = append(hostsConds, fmt.Sprintf(`r["hostname"] == "%s"`, h.Hostname)) + } + hostsCond := strings.Join(hostsConds, " or ") + + // lenMet := len(metrics) + + for _, metric := range metrics { + // log.Println(fmt.Sprintf("<< You are here: %s (Index %d of %d metrics)", metric, index, lenMet)) + + query := fmt.Sprintf(` + data = from(bucket: "%s") + |> range(start: %s, stop: %s) + |> filter(fn: (r) => r._measurement == "%s" and r._field == "value" and (%s)) + union(tables: [data |> mean(column: "_value") |> set(key: "_field", value: "avg"), + data |> min(column: "_value") |> set(key: "_field", value: "min"), + data |> max(column: "_value") |> set(key: "_field", value: "max")]) + |> pivot(rowKey: ["hostname"], columnKey: ["_field"], valueColumn: "_value") + |> group()`, + idb.bucket, + idb.formatTime(job.StartTime), idb.formatTime(idb.epochToTime(job.StartTimeUnix+int64(job.Duration)+int64(1))), + metric, hostsCond) + + rows, err := idb.queryClient.Query(ctx, query) + if err != nil { + return nil, err + } + + nodes := map[string]schema.MetricStatistics{} + for rows.Next() { + row := rows.Record() + host := row.ValueByKey("hostname").(string) + + avg, avgok := row.ValueByKey("avg").(float64) + if !avgok { + // log.Println(fmt.Sprintf(">> Assertion error for metric %s, statistic AVG. Expected 'float64', got %v", metric, avg)) + avg = 0.0 + } + min, minok := row.ValueByKey("min").(float64) + if !minok { + // log.Println(fmt.Sprintf(">> Assertion error for metric %s, statistic MIN. Expected 'float64', got %v", metric, min)) + min = 0.0 + } + max, maxok := row.ValueByKey("max").(float64) + if !maxok { + // log.Println(fmt.Sprintf(">> Assertion error for metric %s, statistic MAX. 
Expected 'float64', got %v", metric, max)) + max = 0.0 + } + + nodes[host] = schema.MetricStatistics{ + Avg: avg, + Min: min, + Max: max, + } + } + stats[metric] = nodes + } + + return stats, nil +} + +func (idb *InfluxDBv2DataRepository) LoadNodeData(cluster string, metrics, nodes []string, scopes []schema.MetricScope, from, to time.Time, ctx context.Context) (map[string]map[string][]*schema.JobMetric, error) { + // TODO: Implement to be used in the Analysis and System/Node views + log.Printf("LoadNodeData unimplemented for InfluxDBv2DataRepository, Args: cluster %s, metrics %v, nodes %v, scopes %v", cluster, metrics, nodes, scopes) + + return nil, errors.New("unimplemented for InfluxDBv2DataRepository") +} diff --git a/metricdata/metricdata.go b/internal/metricdata/metricdata.go similarity index 97% rename from metricdata/metricdata.go rename to internal/metricdata/metricdata.go index 24b44bc..d23015f 100644 --- a/metricdata/metricdata.go +++ b/internal/metricdata/metricdata.go @@ -6,10 +6,10 @@ import ( "fmt" "time" - "github.com/ClusterCockpit/cc-backend/config" - "github.com/ClusterCockpit/cc-backend/log" - "github.com/ClusterCockpit/cc-backend/schema" - "github.com/iamlouk/lrucache" + "github.com/ClusterCockpit/cc-backend/internal/config" + "github.com/ClusterCockpit/cc-backend/pkg/log" + "github.com/ClusterCockpit/cc-backend/pkg/lrucache" + "github.com/ClusterCockpit/cc-backend/pkg/schema" ) type MetricDataRepository interface { diff --git a/metricdata/utils.go b/internal/metricdata/utils.go similarity index 95% rename from metricdata/utils.go rename to internal/metricdata/utils.go index 7a92c4d..a6c550b 100644 --- a/metricdata/utils.go +++ b/internal/metricdata/utils.go @@ -5,7 +5,7 @@ import ( "encoding/json" "time" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/pkg/schema" ) var TestLoadDataCallback func(job *schema.Job, metrics []string, scopes []schema.MetricScope, ctx context.Context) (schema.JobData, 
error) = func(job *schema.Job, metrics []string, scopes []schema.MetricScope, ctx context.Context) (schema.JobData, error) { diff --git a/internal/repository/dbConnection.go b/internal/repository/dbConnection.go new file mode 100644 index 0000000..92ed703 --- /dev/null +++ b/internal/repository/dbConnection.go @@ -0,0 +1,58 @@ +package repository + +import ( + "fmt" + "log" + "sync" + "time" + + "github.com/jmoiron/sqlx" +) + +var ( + dbConnOnce sync.Once + dbConnInstance *DBConnection +) + +type DBConnection struct { + DB *sqlx.DB +} + +func Connect(driver string, db string) { + var err error + var dbHandle *sqlx.DB + + dbConnOnce.Do(func() { + if driver == "sqlite3" { + dbHandle, err = sqlx.Open("sqlite3", fmt.Sprintf("%s?_foreign_keys=on", db)) + if err != nil { + log.Fatal(err) + } + + // sqlite does not multithread. Having more than one connection open would just mean + // waiting for locks. + dbHandle.SetMaxOpenConns(1) + } else if driver == "mysql" { + dbHandle, err = sqlx.Open("mysql", fmt.Sprintf("%s?multiStatements=true", db)) + if err != nil { + log.Fatal(err) + } + + dbHandle.SetConnMaxLifetime(time.Minute * 3) + dbHandle.SetMaxOpenConns(10) + dbHandle.SetMaxIdleConns(10) + } else { + log.Fatalf("unsupported database driver: %s", driver) + } + + dbConnInstance = &DBConnection{DB: dbHandle} + }) +} + +func GetConnection() *DBConnection { + if dbConnInstance == nil { + log.Fatalf("Database connection not initialized!") + } + + return dbConnInstance +} diff --git a/repository/import.go b/internal/repository/import.go similarity index 95% rename from repository/import.go rename to internal/repository/import.go index a18c189..69a5c4f 100644 --- a/repository/import.go +++ b/internal/repository/import.go @@ -9,10 +9,10 @@ import ( "strings" "time" - "github.com/ClusterCockpit/cc-backend/config" - "github.com/ClusterCockpit/cc-backend/log" - "github.com/ClusterCockpit/cc-backend/metricdata" - "github.com/ClusterCockpit/cc-backend/schema" + 
"github.com/ClusterCockpit/cc-backend/internal/config" + "github.com/ClusterCockpit/cc-backend/internal/metricdata" + "github.com/ClusterCockpit/cc-backend/pkg/log" + "github.com/ClusterCockpit/cc-backend/pkg/schema" ) const NamedJobInsert string = `INSERT INTO job ( diff --git a/repository/init.go b/internal/repository/init.go similarity index 98% rename from repository/init.go rename to internal/repository/init.go index 44b8bd6..a6b84a4 100644 --- a/repository/init.go +++ b/internal/repository/init.go @@ -8,8 +8,8 @@ import ( "path/filepath" "time" - "github.com/ClusterCockpit/cc-backend/log" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/pkg/log" + "github.com/ClusterCockpit/cc-backend/pkg/schema" "github.com/jmoiron/sqlx" ) diff --git a/repository/job.go b/internal/repository/job.go similarity index 95% rename from repository/job.go rename to internal/repository/job.go index c7d65cf..fd75e37 100644 --- a/repository/job.go +++ b/internal/repository/job.go @@ -7,17 +7,23 @@ import ( "errors" "fmt" "strconv" + "sync" "time" - "github.com/ClusterCockpit/cc-backend/auth" - "github.com/ClusterCockpit/cc-backend/graph/model" - "github.com/ClusterCockpit/cc-backend/log" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/internal/auth" + "github.com/ClusterCockpit/cc-backend/internal/graph/model" + "github.com/ClusterCockpit/cc-backend/pkg/log" + "github.com/ClusterCockpit/cc-backend/pkg/lrucache" + "github.com/ClusterCockpit/cc-backend/pkg/schema" sq "github.com/Masterminds/squirrel" - "github.com/iamlouk/lrucache" "github.com/jmoiron/sqlx" ) +var ( + jobRepoOnce sync.Once + jobRepoInstance *JobRepository +) + type JobRepository struct { DB *sqlx.DB @@ -25,10 +31,18 @@ type JobRepository struct { cache *lrucache.Cache } -func (r *JobRepository) Init() error { - r.stmtCache = sq.NewStmtCache(r.DB) - r.cache = lrucache.New(1024 * 1024) - return nil +func GetRepository() *JobRepository { + 
jobRepoOnce.Do(func() { + db := GetConnection() + + jobRepoInstance = &JobRepository{ + DB: db.DB, + stmtCache: sq.NewStmtCache(db.DB), + cache: lrucache.New(1024 * 1024), + } + }) + + return jobRepoInstance } var jobColumns []string = []string{ diff --git a/repository/job_test.go b/internal/repository/job_test.go similarity index 84% rename from repository/job_test.go rename to internal/repository/job_test.go index 5cf54bb..3f82d6b 100644 --- a/repository/job_test.go +++ b/internal/repository/job_test.go @@ -11,22 +11,11 @@ import ( var db *sqlx.DB func init() { - var err error - db, err = sqlx.Open("sqlite3", "../test/test.db") - if err != nil { - fmt.Println(err) - } + Connect("sqlite3", "../../test/test.db") } func setup(t *testing.T) *JobRepository { - r := &JobRepository{ - DB: db, - } - if err := r.Init(); err != nil { - t.Fatal(err) - } - - return r + return GetRepository() } func TestFind(t *testing.T) { diff --git a/repository/query.go b/internal/repository/query.go similarity index 96% rename from repository/query.go rename to internal/repository/query.go index 63c98aa..ae5b60b 100644 --- a/repository/query.go +++ b/internal/repository/query.go @@ -8,10 +8,10 @@ import ( "strings" "time" - "github.com/ClusterCockpit/cc-backend/auth" - "github.com/ClusterCockpit/cc-backend/graph/model" - "github.com/ClusterCockpit/cc-backend/log" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/internal/auth" + "github.com/ClusterCockpit/cc-backend/internal/graph/model" + "github.com/ClusterCockpit/cc-backend/pkg/log" + "github.com/ClusterCockpit/cc-backend/pkg/schema" sq "github.com/Masterminds/squirrel" ) diff --git a/repository/tags.go b/internal/repository/tags.go similarity index 97% rename from repository/tags.go rename to internal/repository/tags.go index 8e83bf1..411a5fc 100644 --- a/repository/tags.go +++ b/internal/repository/tags.go @@ -1,8 +1,8 @@ package repository import ( - 
"github.com/ClusterCockpit/cc-backend/metricdata" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/internal/metricdata" + "github.com/ClusterCockpit/cc-backend/pkg/schema" sq "github.com/Masterminds/squirrel" ) diff --git a/routes.go b/internal/routerConfig/routes.go similarity index 92% rename from routes.go rename to internal/routerConfig/routes.go index 9885b94..cb888eb 100644 --- a/routes.go +++ b/internal/routerConfig/routes.go @@ -1,4 +1,4 @@ -package main +package routerConfig import ( "fmt" @@ -8,13 +8,14 @@ import ( "strings" "time" - "github.com/ClusterCockpit/cc-backend/auth" - "github.com/ClusterCockpit/cc-backend/config" - "github.com/ClusterCockpit/cc-backend/graph" - "github.com/ClusterCockpit/cc-backend/graph/model" - "github.com/ClusterCockpit/cc-backend/log" - "github.com/ClusterCockpit/cc-backend/schema" - "github.com/ClusterCockpit/cc-backend/templates" + "github.com/ClusterCockpit/cc-backend/internal/auth" + "github.com/ClusterCockpit/cc-backend/internal/config" + "github.com/ClusterCockpit/cc-backend/internal/graph" + "github.com/ClusterCockpit/cc-backend/internal/graph/model" + "github.com/ClusterCockpit/cc-backend/internal/repository" + "github.com/ClusterCockpit/cc-backend/internal/templates" + "github.com/ClusterCockpit/cc-backend/pkg/log" + "github.com/ClusterCockpit/cc-backend/pkg/schema" "github.com/gorilla/mux" ) @@ -50,6 +51,7 @@ func setupHomeRoute(i InfoType, r *http.Request) InfoType { TotalJobs int RecentShortJobs int } + jobRepo := repository.GetRepository() runningJobs, err := jobRepo.CountGroupedJobs(r.Context(), model.AggregateCluster, []*model.JobFilter{{ State: []schema.JobState{schema.JobStateRunning}, @@ -93,6 +95,7 @@ func setupJobRoute(i InfoType, r *http.Request) InfoType { } func setupUserRoute(i InfoType, r *http.Request) InfoType { + jobRepo := repository.GetRepository() username := mux.Vars(r)["id"] i["id"] = username i["username"] = username @@ -135,6 +138,7 @@ func 
setupAnalysisRoute(i InfoType, r *http.Request) InfoType { func setupTaglistRoute(i InfoType, r *http.Request) InfoType { var username *string = nil + jobRepo := repository.GetRepository() if user := auth.GetUser(r.Context()); user != nil && !user.HasRole(auth.RoleAdmin) { username = &user.Username } @@ -245,7 +249,7 @@ func buildFilterPresets(query url.Values) map[string]interface{} { return filterPresets } -func setupRoutes(router *mux.Router, routes []Route) { +func SetupRoutes(router *mux.Router) { for _, route := range routes { route := route router.HandleFunc(route.Route, func(rw http.ResponseWriter, r *http.Request) { diff --git a/runtimeSetup.go b/internal/runtimeEnv/setup.go similarity index 89% rename from runtimeSetup.go rename to internal/runtimeEnv/setup.go index f43e569..aa6aef3 100644 --- a/runtimeSetup.go +++ b/internal/runtimeEnv/setup.go @@ -1,4 +1,4 @@ -package main +package runtimeEnv import ( "bufio" @@ -15,7 +15,7 @@ import ( // Very simple and limited .env file reader. // All variable definitions found are directly // added to the processes environment. -func loadEnv(file string) error { +func LoadEnv(file string) error { f, err := os.Open(file) if err != nil { return err @@ -81,9 +81,9 @@ func loadEnv(file string) error { // specified in the config.json. The go runtime // takes care of all threads (and not only the calling one) // executing the underlying systemcall. 
-func dropPrivileges() error { - if programConfig.Group != "" { - g, err := user.LookupGroup(programConfig.Group) +func DropPrivileges(username string, group string) error { + if group != "" { + g, err := user.LookupGroup(group) if err != nil { return err } @@ -94,8 +94,8 @@ func dropPrivileges() error { } } - if programConfig.User != "" { - u, err := user.Lookup(programConfig.User) + if username != "" { + u, err := user.Lookup(username) if err != nil { return err } @@ -111,7 +111,7 @@ func dropPrivileges() error { // If started via systemd, inform systemd that we are running: // https://www.freedesktop.org/software/systemd/man/sd_notify.html -func systemdNotifiy(ready bool, status string) { +func SystemdNotifiy(ready bool, status string) { if os.Getenv("NOTIFY_SOCKET") == "" { // Not started using systemd return diff --git a/templates/templates.go b/internal/templates/templates.go similarity index 94% rename from templates/templates.go rename to internal/templates/templates.go index 0d0b956..31653b0 100644 --- a/templates/templates.go +++ b/internal/templates/templates.go @@ -5,8 +5,8 @@ import ( "net/http" "os" - "github.com/ClusterCockpit/cc-backend/config" - "github.com/ClusterCockpit/cc-backend/log" + "github.com/ClusterCockpit/cc-backend/internal/config" + "github.com/ClusterCockpit/cc-backend/pkg/log" ) var templatesDir string @@ -36,7 +36,7 @@ func init() { if ebp != "" { bp = ebp } - templatesDir = bp + "templates/" + templatesDir = bp + "web/templates/" base := template.Must(template.ParseFiles(templatesDir + "base.tmpl")) files := []string{ "home.tmpl", "404.tmpl", "login.tmpl", diff --git a/metricdata/influxdb-v2.go b/metricdata/influxdb-v2.go deleted file mode 100644 index 11a8235..0000000 --- a/metricdata/influxdb-v2.go +++ /dev/null @@ -1,308 +0,0 @@ -package metricdata - -import ( - "context" - "errors" - "fmt" - "log" - "strings" - "time" - "crypto/tls" - "encoding/json" - - "github.com/ClusterCockpit/cc-backend/config" - 
"github.com/ClusterCockpit/cc-backend/schema" - influxdb2 "github.com/influxdata/influxdb-client-go/v2" - influxdb2Api "github.com/influxdata/influxdb-client-go/v2/api" -) - -type InfluxDBv2DataRepositoryConfig struct { - Url string `json:"url"` - Token string `json:"token"` - Bucket string `json:"bucket"` - Org string `json:"org"` - SkipTls bool `json:"skiptls"` -} - -type InfluxDBv2DataRepository struct { - client influxdb2.Client - queryClient influxdb2Api.QueryAPI - bucket, measurement string -} - -func (idb *InfluxDBv2DataRepository) Init(rawConfig json.RawMessage) error { - var config InfluxDBv2DataRepositoryConfig - if err := json.Unmarshal(rawConfig, &config); err != nil { - return err - } - - idb.client = influxdb2.NewClientWithOptions(config.Url, config.Token, influxdb2.DefaultOptions().SetTLSConfig(&tls.Config {InsecureSkipVerify: config.SkipTls,} )) - idb.queryClient = idb.client.QueryAPI(config.Org) - idb.bucket = config.Bucket - - return nil -} - -func (idb *InfluxDBv2DataRepository) formatTime(t time.Time) string { - return t.Format(time.RFC3339) // Like “2006-01-02T15:04:05Z07:00” -} - -func (idb *InfluxDBv2DataRepository) epochToTime(epoch int64) time.Time { - return time.Unix(epoch, 0) -} - -func (idb *InfluxDBv2DataRepository) LoadData(job *schema.Job, metrics []string, scopes []schema.MetricScope, ctx context.Context) (schema.JobData, error) { - - measurementsConds := make([]string, 0, len(metrics)) - for _, m := range metrics { - measurementsConds = append(measurementsConds, fmt.Sprintf(`r["_measurement"] == "%s"`, m)) - } - measurementsCond := strings.Join(measurementsConds, " or ") - - hostsConds := make([]string, 0, len(job.Resources)) - for _, h := range job.Resources { - if h.HWThreads != nil || h.Accelerators != nil { - // TODO - return nil, errors.New("the InfluxDB metric data repository does not yet support HWThreads or Accelerators") - } - hostsConds = append(hostsConds, fmt.Sprintf(`r["hostname"] == "%s"`, h.Hostname)) - } - hostsCond 
:= strings.Join(hostsConds, " or ") - - jobData := make(schema.JobData) // Empty Schema: map[FIELD]map[SCOPE]<*JobMetric>METRIC - // Requested Scopes - for _, scope := range scopes { - query := "" - switch scope { - case "node": - // Get Finest Granularity, Groupy By Measurement and Hostname (== Metric / Node), Calculate Mean for 60s windows - // log.Println("Note: Scope 'node' requested. ") - query = fmt.Sprintf(` - from(bucket: "%s") - |> range(start: %s, stop: %s) - |> filter(fn: (r) => (%s) and (%s) ) - |> drop(columns: ["_start", "_stop"]) - |> group(columns: ["hostname", "_measurement"]) - |> aggregateWindow(every: 60s, fn: mean) - |> drop(columns: ["_time"])`, - idb.bucket, - idb.formatTime(job.StartTime), idb.formatTime(idb.epochToTime(job.StartTimeUnix + int64(job.Duration) + int64(1) )), - measurementsCond, hostsCond) - case "socket": - log.Println("Note: Scope 'socket' requested, but not yet supported: Will return 'node' scope only. ") - continue - case "core": - log.Println("Note: Scope 'core' requested, but not yet supported: Will return 'node' scope only. ") - continue - // Get Finest Granularity only, Set NULL to 0.0 - // query = fmt.Sprintf(` - // from(bucket: "%s") - // |> range(start: %s, stop: %s) - // |> filter(fn: (r) => %s ) - // |> filter(fn: (r) => %s ) - // |> drop(columns: ["_start", "_stop", "cluster"]) - // |> map(fn: (r) => (if exists r._value then {r with _value: r._value} else {r with _value: 0.0}))`, - // idb.bucket, - // idb.formatTime(job.StartTime), idb.formatTime(idb.epochToTime(job.StartTimeUnix + int64(job.Duration) + int64(1) )), - // measurementsCond, hostsCond) - default: - log.Println("Note: Unknown Scope requested: Will return 'node' scope. 
") - continue - // return nil, errors.New("the InfluxDB metric data repository does not yet support other scopes than 'node'") - } - - rows, err := idb.queryClient.Query(ctx, query) - if err != nil { - return nil, err - } - - // Init Metrics: Only Node level now -> TODO: Matching /check on scope level ... - for _, metric := range metrics { - jobMetric, ok := jobData[metric] - if !ok { - mc := config.GetMetricConfig(job.Cluster, metric) - jobMetric = map[schema.MetricScope]*schema.JobMetric{ - scope: { // uses scope var from above! - Unit: mc.Unit, - Scope: scope, - Timestep: mc.Timestep, - Series: make([]schema.Series, 0, len(job.Resources)), - StatisticsSeries: nil, // Should be: &schema.StatsSeries{}, - }, - } - } - jobData[metric] = jobMetric - } - - // Process Result: Time-Data - field, host, hostSeries := "", "", schema.Series{} - // typeId := 0 - switch scope { - case "node": - for rows.Next() { - row := rows.Record() - if ( host == "" || host != row.ValueByKey("hostname").(string) || rows.TableChanged() ) { - if ( host != "" ) { - // Append Series before reset - jobData[field][scope].Series = append(jobData[field][scope].Series, hostSeries) - } - field, host = row.Measurement(), row.ValueByKey("hostname").(string) - hostSeries = schema.Series{ - Hostname: host, - Statistics: nil, - Data: make([]schema.Float, 0), - } - } - val, ok := row.Value().(float64) - if ok { - hostSeries.Data = append(hostSeries.Data, schema.Float(val)) - } else { - hostSeries.Data = append(hostSeries.Data, schema.Float(0)) - } - } - case "socket": - continue - case "core": - continue - // Include Series.Id in hostSeries - // for rows.Next() { - // row := rows.Record() - // if ( host == "" || host != row.ValueByKey("hostname").(string) || typeId != row.ValueByKey("type-id").(int) || rows.TableChanged() ) { - // if ( host != "" ) { - // // Append Series before reset - // jobData[field][scope].Series = append(jobData[field][scope].Series, hostSeries) - // } - // field, host, typeId = 
row.Measurement(), row.ValueByKey("hostname").(string), row.ValueByKey("type-id").(int) - // hostSeries = schema.Series{ - // Hostname: host, - // Id: &typeId, - // Statistics: nil, - // Data: make([]schema.Float, 0), - // } - // } - // val := row.Value().(float64) - // hostSeries.Data = append(hostSeries.Data, schema.Float(val)) - // } - default: - continue - // return nil, errors.New("the InfluxDB metric data repository does not yet support other scopes than 'node, core'") - } - // Append last Series - jobData[field][scope].Series = append(jobData[field][scope].Series, hostSeries) - } - - // Get Stats - stats, err := idb.LoadStats(job, metrics, ctx) - if err != nil { - return nil, err - } - - for _, scope := range scopes { - if scope == "node" { // No 'socket/core' support yet - for metric, nodes := range stats { - // log.Println(fmt.Sprintf("<< Add Stats for : Field %s >>", metric)) - for node, stats := range nodes { - // log.Println(fmt.Sprintf("<< Add Stats for : Host %s : Min %.2f, Max %.2f, Avg %.2f >>", node, stats.Min, stats.Max, stats.Avg )) - for index, _ := range jobData[metric][scope].Series { - // log.Println(fmt.Sprintf("<< Try to add Stats to Series in Position %d >>", index)) - if jobData[metric][scope].Series[index].Hostname == node { - // log.Println(fmt.Sprintf("<< Match for Series in Position %d : Host %s >>", index, jobData[metric][scope].Series[index].Hostname)) - jobData[metric][scope].Series[index].Statistics = &schema.MetricStatistics{Avg: stats.Avg, Min: stats.Min, Max: stats.Max} - // log.Println(fmt.Sprintf("<< Result Inner: Min %.2f, Max %.2f, Avg %.2f >>", jobData[metric][scope].Series[index].Statistics.Min, jobData[metric][scope].Series[index].Statistics.Max, jobData[metric][scope].Series[index].Statistics.Avg)) - } - } - } - } - } - } - - // DEBUG: - // for _, scope := range scopes { - // for _, met := range metrics { - // for _, series := range jobData[met][scope].Series { - // log.Println(fmt.Sprintf("<< Result: %d data points for 
metric %s on %s with scope %s, Stats: Min %.2f, Max %.2f, Avg %.2f >>", - // len(series.Data), met, series.Hostname, scope, - // series.Statistics.Min, series.Statistics.Max, series.Statistics.Avg)) - // } - // } - // } - - return jobData, nil -} - -func (idb *InfluxDBv2DataRepository) LoadStats(job *schema.Job, metrics []string, ctx context.Context) (map[string]map[string]schema.MetricStatistics, error) { - - stats := map[string]map[string]schema.MetricStatistics{} - - hostsConds := make([]string, 0, len(job.Resources)) - for _, h := range job.Resources { - if h.HWThreads != nil || h.Accelerators != nil { - // TODO - return nil, errors.New("the InfluxDB metric data repository does not yet support HWThreads or Accelerators") - } - hostsConds = append(hostsConds, fmt.Sprintf(`r["hostname"] == "%s"`, h.Hostname)) - } - hostsCond := strings.Join(hostsConds, " or ") - - // lenMet := len(metrics) - - for _, metric := range metrics { - // log.Println(fmt.Sprintf("<< You are here: %s (Index %d of %d metrics)", metric, index, lenMet)) - - query := fmt.Sprintf(` - data = from(bucket: "%s") - |> range(start: %s, stop: %s) - |> filter(fn: (r) => r._measurement == "%s" and r._field == "value" and (%s)) - union(tables: [data |> mean(column: "_value") |> set(key: "_field", value: "avg"), - data |> min(column: "_value") |> set(key: "_field", value: "min"), - data |> max(column: "_value") |> set(key: "_field", value: "max")]) - |> pivot(rowKey: ["hostname"], columnKey: ["_field"], valueColumn: "_value") - |> group()`, - idb.bucket, - idb.formatTime(job.StartTime), idb.formatTime(idb.epochToTime(job.StartTimeUnix + int64(job.Duration) + int64(1) )), - metric, hostsCond) - - rows, err := idb.queryClient.Query(ctx, query) - if err != nil { - return nil, err - } - - nodes := map[string]schema.MetricStatistics{} - for rows.Next() { - row := rows.Record() - host := row.ValueByKey("hostname").(string) - - avg, avgok := row.ValueByKey("avg").(float64) - if !avgok { - // 
log.Println(fmt.Sprintf(">> Assertion error for metric %s, statistic AVG. Expected 'float64', got %v", metric, avg)) - avg = 0.0 - } - min, minok := row.ValueByKey("min").(float64) - if !minok { - // log.Println(fmt.Sprintf(">> Assertion error for metric %s, statistic MIN. Expected 'float64', got %v", metric, min)) - min = 0.0 - } - max, maxok := row.ValueByKey("max").(float64) - if !maxok { - // log.Println(fmt.Sprintf(">> Assertion error for metric %s, statistic MAX. Expected 'float64', got %v", metric, max)) - max = 0.0 - } - - nodes[host] = schema.MetricStatistics{ - Avg: avg, - Min: min, - Max: max, - } - } - stats[metric] = nodes - } - - return stats, nil -} - -func (idb *InfluxDBv2DataRepository) LoadNodeData(cluster string, metrics, nodes []string, scopes []schema.MetricScope, from, to time.Time, ctx context.Context) (map[string]map[string][]*schema.JobMetric, error) { - // TODO : Implement to be used in Analysis- und System/Node-View - log.Println(fmt.Sprintf("LoadNodeData unimplemented for InfluxDBv2DataRepository, Args: cluster %s, metrics %v, nodes %v, scopes %v", cluster, metrics, nodes, scopes)) - - return nil, errors.New("unimplemented for InfluxDBv2DataRepository") -} diff --git a/log/log.go b/pkg/log/log.go similarity index 100% rename from log/log.go rename to pkg/log/log.go diff --git a/pkg/lrucache/README.md b/pkg/lrucache/README.md new file mode 100644 index 0000000..8cd2751 --- /dev/null +++ b/pkg/lrucache/README.md @@ -0,0 +1,121 @@ +# In-Memory LRU Cache for Golang Applications + +[![](https://pkg.go.dev/badge/github.com/iamlouk/lrucache?utm_source=godoc)](https://pkg.go.dev/github.com/iamlouk/lrucache) + +This library can be embedded into your existing go applications +and play the role *Memcached* or *Redis* might play for others. +It is inspired by [PHP Symfony's Cache Components](https://symfony.com/doc/current/components/cache/adapters/array_cache_adapter.html), +having a similar API. 
+This library cannot be used for persistence,
+is not properly tested yet, and is a bit special in a few ways described
+below (especially with regard to the memory usage/`size`).
+
+In addition to the interface described below, an `http.Handler` that can be used as middleware is provided as well.
+
+- Advantages:
+  - Anything (`interface{}`) can be stored as value
+  - As it lives in the application itself, no serialization or de-serialization is needed
+  - As it lives in the application itself, no memory moving/networking is needed
+  - The computation of a new value for a key does __not__ block the full cache (only the key)
+- Disadvantages:
+  - You have to provide a size estimate for every value
+  - __This size estimate should not change (i.e. values should not mutate)__
+  - The cache can only be accessed by one application
+
+## Example
+
+```go
+// Go look at the godocs and ./cache_test.go for more documentation and examples
+
+maxMemory := 1000
+cache := lrucache.New(maxMemory)
+
+bar := cache.Get("foo", func() (value interface{}, ttl time.Duration, size int) {
+	return "bar", 10 * time.Second, len("bar")
+}).(string)
+
+// bar == "bar"
+
+bar = cache.Get("foo", func() (value interface{}, ttl time.Duration, size int) {
+	panic("will not be called")
+}).(string)
+```
+
+## Why does `cache.Get` take a function as argument?
+
+*Using the mechanism described below is optional, the second argument to `Get` can be `nil` and there is a `Put` function as well.*
+
+Because this library is meant to be used by multi-threaded applications and the following would
+result in the same data being fetched twice if both goroutines run in parallel:
+
+```go
+// This code shows what could happen with other cache libraries
+c := lrucache.New(MAX_CACHE_ENTRIES)
+
+for i := 0; i < 2; i++ {
+	go func() {
+		// This code will run twice in different goroutines,
+		// it could overlap.
+		// As `fetchData` probably does some
+		// I/O and takes a long time, the probability of both
+		// goroutines calling `fetchData` is very high!
+		url := "http://example.com/foo"
+		contents := c.Get(url)
+		if contents == nil {
+			contents = fetchData(url)
+			c.Set(url, contents)
+		}
+
+		handleData(contents.([]byte))
+	}()
+}
+```
+
+Here, if one wanted to make sure that only one of the two goroutines fetches the data,
+the programmer would need to build their own synchronization. That would suck!
+
+```go
+c := lrucache.New(MAX_CACHE_SIZE)
+
+for i := 0; i < 2; i++ {
+	go func() {
+		url := "http://example.com/foo"
+		contents := c.Get(url, func() (interface{}, time.Duration, int) {
+			// This closure will only be called once!
+			// If another goroutine calls `c.Get` while this closure
+			// is still being executed, it will wait.
+			buf := fetchData(url)
+			return buf, 100 * time.Second, len(buf)
+		})
+
+		handleData(contents.([]byte))
+	}()
+}
+```
+
+This is much better as fewer resources are wasted and synchronization is handled by
+the library. If it gets called, the call to the closure happens synchronously. While
+it is being executed, all other cache keys can still be accessed without having to wait
+for the execution to be done.
+
+## How `Get` works
+
+The closure passed to `Get` will be called if the value asked for is not cached or
+expired. It should return the following values:
+
+- The value corresponding to that key and to be stored in the cache
+- The time to live for that value (how long until it expires and needs to be recomputed)
+- A size estimate
+
+When `maxMemory` is reached, cache entries need to be evicted. Theoretically,
+it would be possible to use reflection on every value placed in the cache
+to get its exact size in bytes. This would be very expensive and slow though.
+Also, size can change.
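To make the size estimates discussed here concrete, the following is a small, self-contained sketch of the kind of helper a caller might use to compute the `size` argument. The helper and its per-type unit choices are illustrative assumptions, not part of this library:

```go
package main

import "fmt"

// sizeEstimate returns a rough, stable size for a value.
// It deliberately ignores struct padding and pointer overhead;
// the cache only needs a consistent unit, not exact byte counts.
func sizeEstimate(v interface{}) int {
	switch x := v.(type) {
	case string:
		return len(x)
	case []byte:
		return len(x)
	case []int64:
		return len(x) * 8 // element count times element size
	case int, int64, float64:
		return 8
	default:
		return 1 // fall back to "one entry costs one unit"
	}
}

func main() {
	fmt.Println(sizeEstimate("bar"))            // 3
	fmt.Println(sizeEstimate([]int64{1, 2, 3})) // 24
}
```

Any consistent unit works; the cache only compares the running total against `maxMemory`.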
+Instead of this library calculating the size in bytes, you, the user,
+have to provide a size for every value in whatever unit you like (as long as it is the same unit everywhere).
+
+Suggestions on what to use as size: `len(str)` for strings, `len(slice) * size_of_slice_type`, etc. It is possible
+to use `1` as size for every entry; in that case at most `maxMemory` entries will be in the cache at the same time.
+
+## Effects on GC
+
+Because of the way a garbage collector decides when to run ([explained in the runtime package](https://pkg.go.dev/runtime)), having large amounts of data sitting in your cache might increase the memory consumption of your process by two times the maximum size of the cache. You can decrease the *target percentage* to reduce the effect, but then you might have negative performance effects when your cache is not filled.
diff --git a/pkg/lrucache/cache.go b/pkg/lrucache/cache.go
new file mode 100644
index 0000000..aedfd5c
--- /dev/null
+++ b/pkg/lrucache/cache.go
@@ -0,0 +1,288 @@
+package lrucache
+
+import (
+	"sync"
+	"time"
+)
+
+// ComputeValue is the type of the closure that must be passed to `Get` to
+// compute the value in case it is not cached.
+//
+// The returned values are the computed value to be stored in the cache,
+// the duration until this value will expire, and a size estimate.
+type ComputeValue func() (value interface{}, ttl time.Duration, size int)
+
+type cacheEntry struct {
+	key   string
+	value interface{}
+
+	expiration            time.Time
+	size                  int
+	waitingForComputation int
+
+	next, prev *cacheEntry
+}
+
+type Cache struct {
+	mutex                 sync.Mutex
+	cond                  *sync.Cond
+	maxmemory, usedmemory int
+	entries               map[string]*cacheEntry
+	head, tail            *cacheEntry
+}
+
+// New returns a new instance of an LRU in-memory cache.
+// Read [the README](./README.md) for more information
+// on what is going on with `maxmemory`.
+func New(maxmemory int) *Cache {
+	cache := &Cache{
+		maxmemory: maxmemory,
+		entries:   map[string]*cacheEntry{},
+	}
+	cache.cond = sync.NewCond(&cache.mutex)
+	return cache
+}
+
+// Get returns the cached value for key `key` or calls `computeValue` and
+// stores its return value in the cache. If called, the closure will be
+// called synchronously and __shall not call methods on the same cache__
+// or a deadlock might occur. If `computeValue` is nil, the cache is checked
+// and, if no entry was found, nil is returned. If another goroutine is currently
+// computing that value, the result is waited for.
+func (c *Cache) Get(key string, computeValue ComputeValue) interface{} {
+	now := time.Now()
+
+	c.mutex.Lock()
+	if entry, ok := c.entries[key]; ok {
+		// The expiration not being set is what shows us that
+		// the computation of that value is still ongoing.
+		for entry.expiration.IsZero() {
+			entry.waitingForComputation += 1
+			c.cond.Wait()
+			entry.waitingForComputation -= 1
+		}
+
+		if now.After(entry.expiration) {
+			if !c.evictEntry(entry) {
+				if entry.expiration.IsZero() {
+					panic("cache entry that should have been waited for could not be evicted.")
+				}
+				c.mutex.Unlock()
+				return entry.value
+			}
+		} else {
+			if entry != c.head {
+				c.unlinkEntry(entry)
+				c.insertFront(entry)
+			}
+			c.mutex.Unlock()
+			return entry.value
+		}
+	}
+
+	if computeValue == nil {
+		c.mutex.Unlock()
+		return nil
+	}
+
+	entry := &cacheEntry{
+		key:                   key,
+		waitingForComputation: 1,
+	}
+
+	c.entries[key] = entry
+
+	hasPanicked := true
+	defer func() {
+		if hasPanicked {
+			c.mutex.Lock()
+			delete(c.entries, key)
+			entry.expiration = now
+			entry.waitingForComputation -= 1
+		}
+		c.mutex.Unlock()
+	}()
+
+	c.mutex.Unlock()
+	value, ttl, size := computeValue()
+	c.mutex.Lock()
+	hasPanicked = false
+
+	entry.value = value
+	entry.expiration = now.Add(ttl)
+	entry.size = size
+	entry.waitingForComputation -= 1
+
+	// Only broadcast if other goroutines are actually waiting
+	// for a result.
+	if entry.waitingForComputation > 0 {
+		// TODO: Have more than one condition variable so that there are
+		// fewer unnecessary wakeups.
+		c.cond.Broadcast()
+	}
+
+	c.usedmemory += size
+	c.insertFront(entry)
+
+	// Evict only entries with a size of more than zero.
+	// This is the only loop in the implementation outside of the `Keys`
+	// method.
+	evictionCandidate := c.tail
+	for c.usedmemory > c.maxmemory && evictionCandidate != nil {
+		nextCandidate := evictionCandidate.prev
+		if (evictionCandidate.size > 0 || now.After(evictionCandidate.expiration)) &&
+			evictionCandidate.waitingForComputation == 0 {
+			c.evictEntry(evictionCandidate)
+		}
+		evictionCandidate = nextCandidate
+	}
+
+	return value
+}
+
+// Put adds a new value to the cache. If another goroutine is calling `Get` and
+// computing the value, this function waits for the computation to be done
+// before it overwrites the value.
+func (c *Cache) Put(key string, value interface{}, size int, ttl time.Duration) {
+	now := time.Now()
+	c.mutex.Lock()
+	defer c.mutex.Unlock()
+
+	if entry, ok := c.entries[key]; ok {
+		for entry.expiration.IsZero() {
+			entry.waitingForComputation += 1
+			c.cond.Wait()
+			entry.waitingForComputation -= 1
+		}
+
+		c.usedmemory -= entry.size
+		entry.expiration = now.Add(ttl)
+		entry.size = size
+		entry.value = value
+		c.usedmemory += entry.size
+
+		c.unlinkEntry(entry)
+		c.insertFront(entry)
+		return
+	}
+
+	entry := &cacheEntry{
+		key:        key,
+		value:      value,
+		size:       size,
+		expiration: now.Add(ttl),
+	}
+	// Account for the new entry's size so that the bookkeeping
+	// check in `Keys` stays consistent.
+	c.usedmemory += size
+	c.entries[key] = entry
+	c.insertFront(entry)
+}
+
+// Del removes the value at key `key` from the cache.
+// It returns true if the key was in the cache and false
+// otherwise. It is possible that true is returned even
+// though the value already expired.
+// It is possible that false is returned even though the value
+// will show up in the cache if this function is called on a key
+// while that key is being computed.
+func (c *Cache) Del(key string) bool { + c.mutex.Lock() + defer c.mutex.Unlock() + + if entry, ok := c.entries[key]; ok { + return c.evictEntry(entry) + } + return false +} + +// Call f for every entry in the cache. Some sanity checks +// and eviction of expired keys are done as well. +// The cache is fully locked for the complete duration of this call! +func (c *Cache) Keys(f func(key string, val interface{})) { + c.mutex.Lock() + defer c.mutex.Unlock() + + now := time.Now() + + size := 0 + for key, e := range c.entries { + if key != e.key { + panic("key mismatch") + } + + if now.After(e.expiration) { + if c.evictEntry(e) { + continue + } + } + + if e.prev != nil { + if e.prev.next != e { + panic("list corrupted") + } + } + + if e.next != nil { + if e.next.prev != e { + panic("list corrupted") + } + } + + size += e.size + f(key, e.value) + } + + if size != c.usedmemory { + panic("size calculations failed") + } + + if c.head != nil { + if c.tail == nil || c.head.prev != nil { + panic("head/tail corrupted") + } + } + + if c.tail != nil { + if c.head == nil || c.tail.next != nil { + panic("head/tail corrupted") + } + } +} + +func (c *Cache) insertFront(e *cacheEntry) { + e.next = c.head + c.head = e + + e.prev = nil + if e.next != nil { + e.next.prev = e + } + + if c.tail == nil { + c.tail = e + } +} + +func (c *Cache) unlinkEntry(e *cacheEntry) { + if e == c.head { + c.head = e.next + } + if e.prev != nil { + e.prev.next = e.next + } + if e.next != nil { + e.next.prev = e.prev + } + if e == c.tail { + c.tail = e.prev + } +} + +func (c *Cache) evictEntry(e *cacheEntry) bool { + if e.waitingForComputation != 0 { + // panic("cannot evict this entry as other goroutines need the value") + return false + } + + c.unlinkEntry(e) + c.usedmemory -= e.size + delete(c.entries, e.key) + return true +} diff --git a/pkg/lrucache/cache_test.go b/pkg/lrucache/cache_test.go new file mode 100644 index 0000000..bfab653 --- /dev/null +++ b/pkg/lrucache/cache_test.go @@ -0,0 +1,219 @@ 
+package lrucache + +import ( + "sync" + "sync/atomic" + "testing" + "time" +) + +func TestBasics(t *testing.T) { + cache := New(123) + + value1 := cache.Get("foo", func() (interface{}, time.Duration, int) { + return "bar", 1 * time.Second, 0 + }) + + if value1.(string) != "bar" { + t.Error("cache returned wrong value") + } + + value2 := cache.Get("foo", func() (interface{}, time.Duration, int) { + t.Error("value should be cached") + return "", 0, 0 + }) + + if value2.(string) != "bar" { + t.Error("cache returned wrong value") + } + + existed := cache.Del("foo") + if !existed { + t.Error("delete did not work as expected") + } + + value3 := cache.Get("foo", func() (interface{}, time.Duration, int) { + return "baz", 1 * time.Second, 0 + }) + + if value3.(string) != "baz" { + t.Error("cache returned wrong value") + } + + cache.Keys(func(key string, value interface{}) { + if key != "foo" || value.(string) != "baz" { + t.Error("cache corrupted") + } + }) +} + +func TestExpiration(t *testing.T) { + cache := New(123) + + failIfCalled := func() (interface{}, time.Duration, int) { + t.Error("Value should be cached!") + return "", 0, 0 + } + + val1 := cache.Get("foo", func() (interface{}, time.Duration, int) { + return "bar", 5 * time.Millisecond, 0 + }) + val2 := cache.Get("bar", func() (interface{}, time.Duration, int) { + return "foo", 20 * time.Millisecond, 0 + }) + + val3 := cache.Get("foo", failIfCalled).(string) + val4 := cache.Get("bar", failIfCalled).(string) + + if val1 != val3 || val3 != "bar" || val2 != val4 || val4 != "foo" { + t.Error("Wrong values returned") + } + + time.Sleep(10 * time.Millisecond) + + val5 := cache.Get("foo", func() (interface{}, time.Duration, int) { + return "baz", 0, 0 + }) + val6 := cache.Get("bar", failIfCalled) + + if val5.(string) != "baz" || val6.(string) != "foo" { + t.Error("unexpected values") + } + + cache.Keys(func(key string, val interface{}) { + if key != "bar" || val.(string) != "foo" { + t.Error("wrong value expired") + } + 
}) + + time.Sleep(15 * time.Millisecond) + cache.Keys(func(key string, val interface{}) { + t.Error("cache should be empty now") + }) +} + +func TestEviction(t *testing.T) { + c := New(100) + failIfCalled := func() (interface{}, time.Duration, int) { + t.Error("Value should be cached!") + return "", 0, 0 + } + + v1 := c.Get("foo", func() (interface{}, time.Duration, int) { + return "bar", 1 * time.Second, 1000 + }) + + v2 := c.Get("foo", func() (interface{}, time.Duration, int) { + return "baz", 1 * time.Second, 1000 + }) + + if v1.(string) != "bar" || v2.(string) != "baz" { + t.Error("wrong values returned") + } + + c.Keys(func(key string, val interface{}) { + t.Error("cache should be empty now") + }) + + _ = c.Get("A", func() (interface{}, time.Duration, int) { + return "a", 1 * time.Second, 50 + }) + + _ = c.Get("B", func() (interface{}, time.Duration, int) { + return "b", 1 * time.Second, 50 + }) + + _ = c.Get("A", failIfCalled) + _ = c.Get("B", failIfCalled) + _ = c.Get("C", func() (interface{}, time.Duration, int) { + return "c", 1 * time.Second, 50 + }) + + _ = c.Get("B", failIfCalled) + _ = c.Get("C", failIfCalled) + + v4 := c.Get("A", func() (interface{}, time.Duration, int) { + return "evicted", 1 * time.Second, 25 + }) + + if v4.(string) != "evicted" { + t.Error("value should have been evicted") + } + + c.Keys(func(key string, val interface{}) { + if key != "A" && key != "C" { + t.Errorf("'%s' was not expected", key) + } + }) +} + +// I know that this is a shity test, +// time is relative and unreliable. 
+func TestConcurrency(t *testing.T) {
+	c := New(100)
+	var wg sync.WaitGroup
+
+	numActions := 20000
+	numThreads := 4
+	wg.Add(numThreads)
+
+	var concurrentModifications int32 = 0
+
+	for i := 0; i < numThreads; i++ {
+		go func() {
+			for j := 0; j < numActions; j++ {
+				_ = c.Get("key", func() (interface{}, time.Duration, int) {
+					m := atomic.AddInt32(&concurrentModifications, 1)
+					if m != 1 {
+						t.Error("only one goroutine at a time should calculate a value for the same key")
+					}
+
+					time.Sleep(1 * time.Millisecond)
+					atomic.AddInt32(&concurrentModifications, -1)
+					return "value", 3 * time.Millisecond, 1
+				})
+			}
+
+			wg.Done()
+		}()
+	}
+
+	wg.Wait()
+
+	c.Keys(func(key string, val interface{}) {})
+}
+
+func TestPanic(t *testing.T) {
+	c := New(100)
+
+	c.Put("bar", "baz", 3, 1*time.Minute)
+
+	testpanic := func() {
+		defer func() {
+			if r := recover(); r != nil {
+				if r.(string) != "oops" {
+					t.Fatal("unexpected panic value")
+				}
+			}
+		}()
+
+		_ = c.Get("foo", func() (value interface{}, ttl time.Duration, size int) {
+			panic("oops")
+		})
+
+		t.Fatal("should have panicked!")
+	}
+
+	testpanic()
+
+	v := c.Get("bar", func() (value interface{}, ttl time.Duration, size int) {
+		t.Fatal("should not be called!")
+		return nil, 0, 0
+	})
+
+	if v.(string) != "baz" {
+		t.Fatal("unexpected value")
+	}
+
+	testpanic()
+}
diff --git a/pkg/lrucache/handler.go b/pkg/lrucache/handler.go
new file mode 100644
index 0000000..e83ba10
--- /dev/null
+++ b/pkg/lrucache/handler.go
@@ -0,0 +1,120 @@
+package lrucache
+
+import (
+	"bytes"
+	"net/http"
+	"strconv"
+	"time"
+)
+
+// HttpHandler can be used as HTTP middleware in order to cache requests,
+// for example static assets. By default, the request's raw URI is used as key and nothing else.
+// Results with a status code other than 200 are cached with a TTL of zero seconds,
+// so they are basically re-fetched as soon as the current fetch is done and a new request
+// for that URI comes in.
+type HttpHandler struct {
+	cache      *Cache
+	fetcher    http.Handler
+	defaultTTL time.Duration
+
+	// CacheKey allows overriding the way the cache key is extracted
+	// from the http request. The default is to use the RequestURI.
+	CacheKey func(*http.Request) string
+}
+
+var _ http.Handler = (*HttpHandler)(nil)
+
+type cachedResponseWriter struct {
+	w          http.ResponseWriter
+	statusCode int
+	buf        bytes.Buffer
+}
+
+type cachedResponse struct {
+	headers    http.Header
+	statusCode int
+	data       []byte
+	fetched    time.Time
+}
+
+var _ http.ResponseWriter = (*cachedResponseWriter)(nil)
+
+func (crw *cachedResponseWriter) Header() http.Header {
+	return crw.w.Header()
+}
+
+func (crw *cachedResponseWriter) Write(bytes []byte) (int, error) {
+	return crw.buf.Write(bytes)
+}
+
+func (crw *cachedResponseWriter) WriteHeader(statusCode int) {
+	crw.statusCode = statusCode
+}
+
+// NewHttpHandler returns a new caching HttpHandler. If no entry in the cache is found or it is too old, `fetcher` is called with
+// a modified http.ResponseWriter and the response is stored in the cache. If `fetcher` sets the "Expires" header,
+// the TTL is set accordingly (otherwise, the default TTL passed as argument here is used).
+// `maxmemory` should be in the unit bytes.
+func NewHttpHandler(maxmemory int, ttl time.Duration, fetcher http.Handler) *HttpHandler {
+	return &HttpHandler{
+		cache:      New(maxmemory),
+		defaultTTL: ttl,
+		fetcher:    fetcher,
+		CacheKey: func(r *http.Request) string {
+			return r.RequestURI
+		},
+	}
+}
+
+// NewMiddleware provides gorilla/mux style middleware:
+func NewMiddleware(maxmemory int, ttl time.Duration) func(http.Handler) http.Handler {
+	return func(next http.Handler) http.Handler {
+		return NewHttpHandler(maxmemory, ttl, next)
+	}
+}
+
+// ServeHTTP tries to serve a response to r from the cache, or calls the fetcher and stores the response in the cache for the next time.
+func (h *HttpHandler) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
+	if r.Method != http.MethodGet {
+		// Only GET requests are cached; everything else goes straight to
+		// the wrapped handler (calling h.ServeHTTP here would recurse forever).
+		h.fetcher.ServeHTTP(rw, r)
+		return
+	}
+
+	cr := h.cache.Get(h.CacheKey(r), func() (interface{}, time.Duration, int) {
+		crw := &cachedResponseWriter{
+			w:          rw,
+			statusCode: 200,
+			buf:        bytes.Buffer{},
+		}
+
+		h.fetcher.ServeHTTP(crw, r)
+
+		cr := &cachedResponse{
+			headers:    rw.Header().Clone(),
+			statusCode: crw.statusCode,
+			data:       crw.buf.Bytes(),
+			fetched:    time.Now(),
+		}
+		cr.headers.Set("Content-Length", strconv.Itoa(len(cr.data)))
+
+		ttl := h.defaultTTL
+		if cr.statusCode != http.StatusOK {
+			ttl = 0
+		} else if cr.headers.Get("Expires") != "" {
+			if expires, err := http.ParseTime(cr.headers.Get("Expires")); err == nil {
+				ttl = time.Until(expires)
+			}
+		}
+
+		return cr, ttl, len(cr.data)
+	}).(*cachedResponse)
+
+	for key, val := range cr.headers {
+		rw.Header()[key] = val
+	}
+
+	// Set the Age header on the actual response, not only on the cached copy.
+	rw.Header().Set("Age", strconv.Itoa(int(time.Since(cr.fetched).Seconds())))
+
+	rw.WriteHeader(cr.statusCode)
+	rw.Write(cr.data)
+}
diff --git a/pkg/lrucache/handler_test.go b/pkg/lrucache/handler_test.go
new file mode 100644
index 0000000..cb05f31
--- /dev/null
+++ b/pkg/lrucache/handler_test.go
@@ -0,0 +1,71 @@
+package lrucache
+
+import (
+	"bytes"
+	"net/http"
+	"net/http/httptest"
+	"testing"
+	"time"
+)
+
+func TestHandlerBasics(t *testing.T) {
+	r := httptest.NewRequest(http.MethodGet, "/test1", nil)
+	rw := httptest.NewRecorder()
+	shouldBeCalled := true
+
+	handler := NewHttpHandler(1000, time.Second, http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
+		rw.Write([]byte("Hello World!"))
+
+		if !shouldBeCalled {
+			t.Fatal("fetcher expected to be called")
+		}
+	}))
+
+	handler.ServeHTTP(rw, r)
+
+	if rw.Code != 200 {
+ t.Fatal("unexpected status code") + } + + if !bytes.Equal(rw.Body.Bytes(), []byte("Hello World!")) { + t.Fatal("unexpected body") + } +} + +// func TestHandlerExpiration(t *testing.T) { +// r := httptest.NewRequest(http.MethodGet, "/test1", nil) +// rw := httptest.NewRecorder() +// i := 1 +// now := time.Now() + +// handler := NewHttpHandler(1000, 1*time.Second, http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) { +// rw.Header().Set("Expires", now.Add(10*time.Millisecond).Format(http.TimeFormat)) +// rw.Write([]byte(strconv.Itoa(i))) +// })) + +// handler.ServeHTTP(rw, r) +// if !(rw.Body.String() == strconv.Itoa(1)) { +// t.Fatal("unexpected body") +// } + +// i += 1 + +// time.Sleep(11 * time.Millisecond) +// rw = httptest.NewRecorder() +// handler.ServeHTTP(rw, r) +// if !(rw.Body.String() == strconv.Itoa(1)) { +// t.Fatal("unexpected body") +// } +// } diff --git a/schema/float.go b/pkg/schema/float.go similarity index 100% rename from schema/float.go rename to pkg/schema/float.go diff --git a/schema/job.go b/pkg/schema/job.go similarity index 100% rename from schema/job.go rename to pkg/schema/job.go diff --git a/schema/metrics.go b/pkg/schema/metrics.go similarity index 100% rename from schema/metrics.go rename to pkg/schema/metrics.go diff --git a/startDemo.sh b/startDemo.sh index aa82bc7..6e3cf7b 100755 --- a/startDemo.sh +++ b/startDemo.sh @@ -8,13 +8,13 @@ tar xJf job-archive.tar.xz rm ./job-archive.tar.xz touch ./job.db -cd ../frontend +cd ../web/frontend yarn install yarn build -cd .. +cd ../.. 
+cp ./configs/env-template.txt .env go get -go build +go build ./cmd/cc-backend -./cc-backend --init-db --add-user demo:admin:AdminDev --no-server -./cc-backend +./cc-backend --init-db --add-user demo:admin:AdminDev diff --git a/test/api_test.go b/test/api_test.go index ca73bad..816cd87 100644 --- a/test/api_test.go +++ b/test/api_test.go @@ -4,7 +4,6 @@ import ( "bytes" "context" "encoding/json" - "fmt" "net/http" "net/http/httptest" "os" @@ -14,14 +13,13 @@ import ( "strings" "testing" - "github.com/ClusterCockpit/cc-backend/api" - "github.com/ClusterCockpit/cc-backend/config" - "github.com/ClusterCockpit/cc-backend/graph" - "github.com/ClusterCockpit/cc-backend/metricdata" - "github.com/ClusterCockpit/cc-backend/repository" - "github.com/ClusterCockpit/cc-backend/schema" + "github.com/ClusterCockpit/cc-backend/internal/api" + "github.com/ClusterCockpit/cc-backend/internal/config" + "github.com/ClusterCockpit/cc-backend/internal/graph" + "github.com/ClusterCockpit/cc-backend/internal/metricdata" + "github.com/ClusterCockpit/cc-backend/internal/repository" + "github.com/ClusterCockpit/cc-backend/pkg/schema" "github.com/gorilla/mux" - "github.com/jmoiron/sqlx" _ "github.com/mattn/go-sqlite3" ) @@ -95,17 +93,14 @@ func setup(t *testing.T) *api.RestApi { } f.Close() - db, err := sqlx.Open("sqlite3", fmt.Sprintf("%s?_foreign_keys=on", dbfilepath)) - if err != nil { + repository.Connect("sqlite3", dbfilepath) + db := repository.GetConnection() + + if _, err := db.DB.Exec(repository.JobsDBSchema); err != nil { t.Fatal(err) } - db.SetMaxOpenConns(1) - if _, err := db.Exec(repository.JobsDBSchema); err != nil { - t.Fatal(err) - } - - if err := config.Init(db, false, map[string]interface{}{}, jobarchive); err != nil { + if err := config.Init(db.DB, false, map[string]interface{}{}, jobarchive); err != nil { t.Fatal(err) } @@ -113,10 +108,8 @@ func setup(t *testing.T) *api.RestApi { t.Fatal(err) } - resolver := &graph.Resolver{DB: db, Repo: &repository.JobRepository{DB: db}} 
- if err := resolver.Repo.Init(); err != nil {
- t.Fatal(err)
- }
+ jobRepo := repository.GetRepository()
+ resolver := &graph.Resolver{DB: db.DB, Repo: jobRepo}

 return &api.RestApi{
 JobRepository: resolver.Repo,
diff --git a/tools/README.md b/tools/README.md
new file mode 100644
index 0000000..76a4537
--- /dev/null
+++ b/tools/README.md
@@ -0,0 +1,46 @@
+## Introduction
+
+ClusterCockpit uses JSON Web Tokens (JWT) for authorization of its APIs.
+JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object.
+This information can be verified and trusted because it is digitally signed.
+In ClusterCockpit, JWTs are signed with an Ed25519 public/private key pair.
+Because tokens are signed using public/private key pairs, the signature also certifies that only the party holding the private key is the one that signed it.
+Currently, JWT tokens do not expire.
+
+## JWT Payload
+
+You may view the payload of a JWT token at [https://jwt.io/#debugger-io](https://jwt.io/#debugger-io).
+Currently ClusterCockpit sets the following claims:
+* `iat`: Issued-at claim. The `iat` claim identifies the time at which the JWT was issued and can be used to determine its age.
+* `sub`: Subject claim. Identifies the subject of the JWT; in our case this is the username.
+* `roles`: An array of strings specifying the roles set for the subject.
+
+## Workflow
+
+1. Create a new Ed25519 public/private key pair:
+```
+$ go build ./tools/gen-keypair.go
+$ ./gen-keypair
+```
+2. Add the key pair to your `.env` file. A template can be found in `./configs`.
+
+There are two usage scenarios:
+* The APIs are used during a browser session. In this case a JWT token is issued on login, which the web frontend uses to authorize against the GraphQL and REST APIs.
+* The REST API is used outside a browser session, e.g. by scripts.
In this case you have to issue a token manually. This is possible from within the configuration view or on the command line. It is recommended to issue the token for a dedicated user that only has the `api` role. Using different users for different purposes enables fine-grained access control and access revocation.
+
+The token is commonly passed in the `Authorization` HTTP header using the Bearer scheme.
+
+## Set up a user and JWT token for REST API authorization
+
+1. Create a user:
+```
+$ ./cc-backend --add-user :api: --no-server
+```
+2. Issue a token for the user:
+```
+$ ./cc-backend -jwt -no-server
+```
+3. Use the issued token on the client side:
+```
+$ curl -X GET "" -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer "
+```
diff --git a/tools/gen-keypair.go b/tools/gen-keypair.go
new file mode 100644
index 0000000..905817d
--- /dev/null
+++ b/tools/gen-keypair.go
@@ -0,0 +1,22 @@
+package main
+
+import (
+	"crypto/ed25519"
+	"crypto/rand"
+	"encoding/base64"
+	"fmt"
+	"os"
+)
+
+func main() {
+	// rand.Reader uses /dev/urandom on Linux
+	pub, priv, err := ed25519.GenerateKey(rand.Reader)
+	if err != nil {
+		fmt.Fprintf(os.Stderr, "error: %s\n", err.Error())
+		os.Exit(1)
+	}
+
+	fmt.Fprintf(os.Stdout, "JWT_PUBLIC_KEY=%#v\nJWT_PRIVATE_KEY=%#v\n",
+		base64.StdEncoding.EncodeToString(pub),
+		base64.StdEncoding.EncodeToString(priv))
+}
diff --git a/web/frontend/README.md b/web/frontend/README.md
new file mode 100644
index 0000000..4d54384
--- /dev/null
+++ b/web/frontend/README.md
@@ -0,0 +1,31 @@
+# cc-svelte-datatable
+
+[![Build](https://github.com/ClusterCockpit/cc-svelte-datatable/actions/workflows/build.yml/badge.svg)](https://github.com/ClusterCockpit/cc-svelte-datatable/actions/workflows/build.yml)
+
+A frontend for [ClusterCockpit](https://github.com/ClusterCockpit/ClusterCockpit) and [cc-backend](https://github.com/ClusterCockpit/cc-backend).
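As a side note on the `gen-keypair` tool added above: it emits the Ed25519 key pair as base64 strings suitable for an `.env` file. The sketch below (standard library only; the `signAndVerify` helper and its payload are illustrative, not cc-backend code) shows how such base64-encoded keys can be decoded again and used to sign and verify a message:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// signAndVerify generates a key pair the same way gen-keypair.go does,
// round-trips both keys through the base64 form used in the .env file,
// then signs the payload with the private key and verifies the signature
// with the public key.
func signAndVerify(payload []byte) bool {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return false
	}

	// What gen-keypair writes into .env:
	pubEnv := base64.StdEncoding.EncodeToString(pub)
	privEnv := base64.StdEncoding.EncodeToString(priv)

	// What a consumer of the .env values could do:
	pubBytes, err := base64.StdEncoding.DecodeString(pubEnv)
	if err != nil {
		return false
	}
	privBytes, err := base64.StdEncoding.DecodeString(privEnv)
	if err != nil {
		return false
	}

	sig := ed25519.Sign(ed25519.PrivateKey(privBytes), payload)
	return ed25519.Verify(ed25519.PublicKey(pubBytes), payload, sig)
}

func main() {
	ok := signAndVerify([]byte(`{"sub":"demo","roles":["api"]}`))
	fmt.Println("signature valid:", ok) // prints "signature valid: true"
}
```

Since `ed25519.PrivateKey` and `ed25519.PublicKey` are plain byte slices, the base64 round trip through the `.env` file preserves them exactly.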
Backend-specific configuration can be done using the constants defined in the `intro` section of `./rollup.config.js`.
+
+Builds on:
+* [Svelte](https://svelte.dev/)
+* [SvelteStrap](https://sveltestrap.js.org/)
+* [Bootstrap 5](https://getbootstrap.com/)
+* [urql](https://github.com/FormidableLabs/urql)
+
+## Get started
+
+[Yarn](https://yarnpkg.com/) is recommended for package management.
+Due to an issue with Yarn v2, you have to stick to Yarn v1.
+
+Install the dependencies...
+
+```bash
+yarn install
+```
+
+...then start [Rollup](https://rollupjs.org):
+
+```bash
+yarn run dev
+```
+
+Edit a component file in `src`, save it, and reload the page to see your changes.
+
diff --git a/web/frontend/package.json b/web/frontend/package.json
new file mode 100644
index 0000000..2f2ab55
--- /dev/null
+++ b/web/frontend/package.json
@@ -0,0 +1,25 @@
+{
+  "name": "svelte-app",
+  "version": "1.0.0",
+  "scripts": {
+    "build": "rollup -c",
+    "dev": "rollup -c -w"
+  },
+  "devDependencies": {
+    "@rollup/plugin-commonjs": "^17.0.0",
+    "@rollup/plugin-node-resolve": "^11.0.0",
+    "rollup": "^2.3.4",
+    "rollup-plugin-css-only": "^3.1.0",
+    "rollup-plugin-svelte": "^7.0.0",
+    "rollup-plugin-terser": "^7.0.0",
+    "svelte": "^3.42.6"
+  },
+  "dependencies": {
+    "@rollup/plugin-replace": "^2.4.1",
+    "@urql/svelte": "^1.3.0",
+    "graphql": "^15.6.0",
+    "sveltestrap": "^5.6.1",
+    "uplot": "^1.6.7",
+    "wonka": "^4.0.15"
+  }
+}
diff --git a/web/frontend/public/favicon.png b/web/frontend/public/favicon.png
new file mode 100644
index 0000000..fa7bf5c
Binary files /dev/null and b/web/frontend/public/favicon.png differ
diff --git a/web/frontend/public/global.css b/web/frontend/public/global.css
new file mode 100644
index 0000000..8feecf6
--- /dev/null
+++ b/web/frontend/public/global.css
@@ -0,0 +1,54 @@
+html, body {
+  position: relative;
+  width: 100%;
+  height: 100%;
+}
+
+body {
+  color: #333;
+  margin: 0;
+  padding: 8px;
+  box-sizing: border-box;
+  font-family: -apple-system,
BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen-Sans, Ubuntu, Cantarell, "Helvetica Neue", sans-serif; +} + +.container { + max-width: 100vw; +} + +.site { + display: flex; + flex-direction: column; + height: 100%; +} + +.site-content { + flex: 1 0 auto; + margin-top: 80px; +} + +.site-footer { + flex: none; +} + +footer { + width: 100%; + padding: 0.1rem 1.0rem; + line-height: 1.5; +} + +.footer-list { + list-style-type: none; + padding-left: 0; + width: 100%; + display: flex; + flex-wrap: wrap; + justify-content: center; + margin-top: 5px; + margin-bottom: 5px; +} + +.footer-list-item { + margin: 0rem 0.8rem; + white-space: nowrap; +} diff --git a/web/frontend/public/img/logo.png b/web/frontend/public/img/logo.png new file mode 100644 index 0000000..2ad4fd6 Binary files /dev/null and b/web/frontend/public/img/logo.png differ diff --git a/web/frontend/public/uPlot.min.css b/web/frontend/public/uPlot.min.css new file mode 120000 index 0000000..b11d327 --- /dev/null +++ b/web/frontend/public/uPlot.min.css @@ -0,0 +1 @@ +../node_modules/uplot/dist/uPlot.min.css \ No newline at end of file diff --git a/web/frontend/rollup.config.js b/web/frontend/rollup.config.js new file mode 100644 index 0000000..13d988a --- /dev/null +++ b/web/frontend/rollup.config.js @@ -0,0 +1,70 @@ +import svelte from 'rollup-plugin-svelte'; +import replace from "@rollup/plugin-replace"; +import commonjs from '@rollup/plugin-commonjs'; +import resolve from '@rollup/plugin-node-resolve'; +import { terser } from 'rollup-plugin-terser'; +import css from 'rollup-plugin-css-only'; + +const production = !process.env.ROLLUP_WATCH; + +const plugins = [ + svelte({ + compilerOptions: { + // enable run-time checks when not in production + dev: !production + } + }), + + // If you have external dependencies installed from + // npm, you'll most likely need these plugins. 
In + // some cases you'll need additional configuration - + // consult the documentation for details: + // https://github.com/rollup/plugins/tree/master/packages/commonjs + resolve({ + browser: true, + dedupe: ['svelte'] + }), + commonjs(), + + // If we're building for production (npm run build + // instead of npm run dev), minify + production && terser(), + + replace({ + "process.env.NODE_ENV": JSON.stringify("development"), + preventAssignment: true + }) +]; + +const entrypoint = (name, path) => ({ + input: path, + output: { + sourcemap: false, + format: 'iife', + name: 'app', + file: `public/build/${name}.js` + }, + plugins: [ + ...plugins, + + // we'll extract any component CSS out into + // a separate file - better for performance + css({ output: `${name}.css` }), + ], + watch: { + clearScreen: false + } +}); + +export default [ + entrypoint('header', 'src/header.entrypoint.js'), + entrypoint('jobs', 'src/jobs.entrypoint.js'), + entrypoint('user', 'src/user.entrypoint.js'), + entrypoint('list', 'src/list.entrypoint.js'), + entrypoint('job', 'src/job.entrypoint.js'), + entrypoint('systems', 'src/systems.entrypoint.js'), + entrypoint('node', 'src/node.entrypoint.js'), + entrypoint('analysis', 'src/analysis.entrypoint.js'), + entrypoint('status', 'src/status.entrypoint.js') +]; + diff --git a/web/frontend/src/Analysis.root.svelte b/web/frontend/src/Analysis.root.svelte new file mode 100644 index 0000000..a92aea7 --- /dev/null +++ b/web/frontend/src/Analysis.root.svelte @@ -0,0 +1,265 @@ + + + + {#if $initq.fetching || $statsQuery.fetching || $footprintsQuery.fetching} + + + + {/if} + + {#if $initq.error} + {$initq.error.message} + {:else if cluster} + mc.name)} + bind:metricsInHistograms={metricsInHistograms} + bind:metricsInScatterplots={metricsInScatterplots} /> + {/if} + + + { + $statsQuery.context.pause = false + $statsQuery.variables = { filter: detail.filters } + $footprintsQuery.context.pause = false + $footprintsQuery.variables = { metrics, filter: 
detail.filters } + $rooflineQuery.variables = { ...$rooflineQuery.variables, filter: detail.filters } + }} /> + + + +
+{#if $statsQuery.error} + + + {$statsQuery.error.message} + + +{:else if $statsQuery.data} + +
+
+ + + + + + + + + + + + + + + + + +
Total Jobs{$statsQuery.data.stats[0].totalJobs}
Short Jobs (< 2m){$statsQuery.data.stats[0].shortJobs}
Total Walltime{$statsQuery.data.stats[0].totalWalltime}
Total Core Hours{$statsQuery.data.stats[0].totalCoreHours}
+
+
+ {#key $statsQuery.data.topUsers} +

Top Users (by node hours)

+ b.count - a.count).map(({ count }, idx) => ({ count, value: idx }))} + label={(x) => x < $statsQuery.data.topUsers.length ? $statsQuery.data.topUsers[Math.floor(x)].name : '0'} /> + {/key} +
+
+
+ {#key $statsQuery.data.stats[0].histDuration} +

Walltime Distribution

+ + {/key} +
+
+ {#key $statsQuery.data.stats[0].histNumNodes} +

Number of Nodes Distribution

+ + {/key} +
+
+ {#if $rooflineQuery.fetching} + + {:else if $rooflineQuery.error} + {$rooflineQuery.error.message} + {:else if $rooflineQuery.data && cluster} + {#key $rooflineQuery.data} + + {/key} + {/if} +
+
+{/if} + +
+{#if $footprintsQuery.error} + + + {$footprintsQuery.error.message} + + +{:else if $footprintsQuery.data && $initq.data} + + + + These histograms show the distribution of the averages of all jobs matching the filters. Each job/average is weighted by its node hours. + +
+ +
+ + + ({ metric, ...binsFromFootprint( + $footprintsQuery.data.footprints.nodehours, + $footprintsQuery.data.footprints.metrics.find(f => f.metric == metric).data, numBins) }))} + itemsPerRow={ccconfig.plot_view_plotsPerRow}> +

{item.metric} [{metricConfig(cluster.name, item.metric)?.unit}]

+ + +
+ +
+
+ + + + Each circle represents one job. The size of a circle is proportional to its node hours. Darker circles mean multiple jobs have the same averages for the respective metrics. + +
+ +
+ + + ({ + m1, f1: $footprintsQuery.data.footprints.metrics.find(f => f.metric == m1).data, + m2, f2: $footprintsQuery.data.footprints.metrics.find(f => f.metric == m2).data }))} + itemsPerRow={ccconfig.plot_view_plotsPerRow}> + + + + + +{/if} + + diff --git a/web/frontend/src/Header.svelte b/web/frontend/src/Header.svelte new file mode 100644 index 0000000..f99956a --- /dev/null +++ b/web/frontend/src/Header.svelte @@ -0,0 +1,73 @@ + + + + + ClusterCockpit Logo + + (isOpen = !isOpen)} /> + (isOpen = detail.isOpen)}> + + +
+
+ + + + +
+ {#if username} +
+ +
+ {/if} + +
+
diff --git a/web/frontend/src/Job.root.svelte b/web/frontend/src/Job.root.svelte new file mode 100644 index 0000000..58c0d56 --- /dev/null +++ b/web/frontend/src/Job.root.svelte @@ -0,0 +1,224 @@ + + +
+ + + {#if $initq.error} + {$initq.error.message} + {:else if $initq.data} + + {:else} + + {/if} + + {#if $jobMetrics.data && $initq.data} + + + + + c.name == $initq.data.job.cluster).subClusters + .find(sc => sc.name == $initq.data.job.subCluster)} + flopsAny={$jobMetrics.data.jobMetrics.find(m => m.name == 'flops_any' && m.metric.scope == 'node').metric} + memBw={$jobMetrics.data.jobMetrics.find(m => m.name == 'mem_bw' && m.metric.scope == 'node').metric} /> + + {:else} + + + {/if} + +
+ + + {#if $initq.data} + + {/if} + + + {#if $initq.data} + + {/if} + + + + + +
+ + + {#if $jobMetrics.error} + {#if $initq.data.job.monitoringStatus == 0 || $initq.data.job.monitoringStatus == 2} + Not monitored or archiving failed +
+ {/if} + {$jobMetrics.error.message} + {:else if $jobMetrics.fetching} + + {:else if $jobMetrics.data && $initq.data} + + {#if item.data} + statsTable.moreLoaded(detail)} + job={$initq.data.job} + metric={item.metric} + scopes={item.data.map(x => x.metric)} + width={width}/> + {:else} + No data for {item.metric} + {/if} + + {/if} + +
+
+ + + {#if $initq.data} + + {#if somethingMissing} + +
+ + Missing Metrics/Resources + + + {#if missingMetrics.length > 0} +

No data at all is available for the metrics: {missingMetrics.join(', ')}

+ {/if} + {#if missingHosts.length > 0} +

Some metrics are missing for the following hosts:

+
    + {#each missingHosts as missing} +
  • {missing.hostname}: {missing.metrics.join(', ')}
  • + {/each} +
+ {/if} +
+
+
+ {/if} + + {#if $jobMetrics.data} + + {/if} + + +
+ {#if $initq.data.job.metaData?.jobScript} +
{$initq.data.job.metaData?.jobScript}
+ {:else} + No job script available + {/if} +
+
+ +
+ {#if $initq.data.job.metaData?.slurmInfo} +
{$initq.data.job.metaData?.slurmInfo}
+ {:else} + No additional slurm information available + {/if} +
+
+
+ {/if} + +
+ +{#if $initq.data} + +{/if} + + diff --git a/web/frontend/src/Jobs.root.svelte b/web/frontend/src/Jobs.root.svelte new file mode 100644 index 0000000..9ecaafa --- /dev/null +++ b/web/frontend/src/Jobs.root.svelte @@ -0,0 +1,88 @@ + + + + {#if $initq.fetching} + + + + {:else if $initq.error} + + {$initq.error.message} + + {/if} + + + + + + + + + jobList.update(detail.filters)} /> + + + + filters.update(detail)}/> + + + jobList.update()} /> + + +
+ + + + + + + + + diff --git a/web/frontend/src/List.root.svelte b/web/frontend/src/List.root.svelte new file mode 100644 index 0000000..7d973e4 --- /dev/null +++ b/web/frontend/src/List.root.svelte @@ -0,0 +1,151 @@ + + + + + + + + + + + + { + $stats.variables = { filter: detail.filters } + $stats.context.pause = false + $stats.reexecute() + }} /> + + + + + + + + + + + + + {#if $stats.fetching} + + + + {:else if $stats.error} + + + + {:else if $stats.data} + {#each sort($stats.data.rows, sorting, nameFilter) as row (row.id)} + + + + + + + {:else} + + + + {/each} + {/if} + +
+ {({ USER: 'Username', PROJECT: 'Project Name' })[type]} + + + Total Jobs + + + Total Walltime + + + Total Core Hours + +
{$stats.error.message}
+ {#if type == 'USER'} + {scrambleNames ? scramble(row.id) : row.id} + {:else if type == 'PROJECT'} + {row.id} + {:else} + {row.id} + {/if} + {row.totalJobs}{row.totalWalltime}{row.totalCoreHours}
No {type.toLowerCase()}s/jobs found
\ No newline at end of file diff --git a/web/frontend/src/Metric.svelte b/web/frontend/src/Metric.svelte new file mode 100644 index 0000000..f414827 --- /dev/null +++ b/web/frontend/src/Metric.svelte @@ -0,0 +1,88 @@ + + + + {metric} ({metricConfig?.unit}) + + + {#if job.resources.length > 1} + + {/if} + +{#key series} + {#if fetching == true} + + {:else if error != null} + {error.message} + {:else if series != null} + + {/if} +{/key} diff --git a/web/frontend/src/MetricSelection.svelte b/web/frontend/src/MetricSelection.svelte new file mode 100644 index 0000000..4119256 --- /dev/null +++ b/web/frontend/src/MetricSelection.svelte @@ -0,0 +1,126 @@ + + + + + + + (isOpen = !isOpen)}> + + Configure columns + + + + {#each newMetricsOrder as metric, index (metric)} +
  • columnsDragStart(event, index)} + on:drop|preventDefault={event => columnsDrag(event, index)} + on:dragenter={() => columnHovering = index} + class:is-active={columnHovering === index}> + {#if unorderedMetrics.includes(metric)} + + {:else} + + {/if} + {metric} + + {cluster == null ? clusters + .filter(cluster => cluster.metricConfig.find(m => m.name == metric) != null) + .map(cluster => cluster.name).join(', ') : ''} + +
  • + {/each} +
    +
    + + + +
    diff --git a/web/frontend/src/Node.root.svelte b/web/frontend/src/Node.root.svelte new file mode 100644 index 0000000..9534bd7 --- /dev/null +++ b/web/frontend/src/Node.root.svelte @@ -0,0 +1,94 @@ + + + + {#if $initq.error} + {$initq.error.message} + {:else if $initq.fetching} + + {:else} + + + + {hostname} ({cluster}) + + + + + + {/if} + +
    + + + {#if $nodesQuery.error} + {$nodesQuery.error.message} + {:else if $nodesQuery.fetching || $initq.fetching} + + {:else} + a.name.localeCompare(b.name))}> +

    {item.name}

    + c.name == cluster)} subCluster={$nodesQuery.data.nodeMetrics[0].subCluster} + series={item.metric.series} /> +
    + {/if} + +
    diff --git a/web/frontend/src/PlotSelection.svelte b/web/frontend/src/PlotSelection.svelte new file mode 100644 index 0000000..0205c27 --- /dev/null +++ b/web/frontend/src/PlotSelection.svelte @@ -0,0 +1,133 @@ + + + + + + + (isHistogramConfigOpen = !isHistogramConfigOpen)}> + + Select metrics presented in histograms + + + + {#each availableMetrics as metric (metric)} + + updateConfiguration({ + name: 'analysis_view_histogramMetrics', + value: metricsInHistograms + })} /> + + {metric} + + {/each} + + + + + + + + (isScatterPlotConfigOpen = !isScatterPlotConfigOpen)}> + + Select metric pairs presented in scatter plots + + + + {#each metricsInScatterplots as pair} + + {pair[0]} / {pair[1]} + + + + {/each} + + +
    + + + + + + + +
    + + + +
    diff --git a/web/frontend/src/PlotTable.svelte b/web/frontend/src/PlotTable.svelte new file mode 100644 index 0000000..208c4af --- /dev/null +++ b/web/frontend/src/PlotTable.svelte @@ -0,0 +1,50 @@ + + + + + + {#each rows as row} + + {#each row as item (item)} + + {/each} + + {/each} +
    + {#if item != PLACEHOLDER && plotWidth > 0} + + {/if} +
    diff --git a/web/frontend/src/StatsTable.svelte b/web/frontend/src/StatsTable.svelte new file mode 100644 index 0000000..e9400ac --- /dev/null +++ b/web/frontend/src/StatsTable.svelte @@ -0,0 +1,122 @@ + + + + + + + {#each selectedMetrics as metric} + + {/each} + + + + {#each selectedMetrics as metric} + {#if selectedScopes[metric] != 'node'} + + {/if} + {#each ['min', 'avg', 'max'] as stat} + + {/each} + {/each} + + + + {#each hosts as host (host)} + + + {#each selectedMetrics as metric (metric)} + + {/each} + + {/each} + +
    + + + + + {metric} + + + +
    NodeId sortBy(metric, stat)}> + {stat} + {#if selectedScopes[metric] == 'node'} + + {/if} +
    {host}
    + +
    + + diff --git a/web/frontend/src/StatsTableEntry.svelte b/web/frontend/src/StatsTableEntry.svelte new file mode 100644 index 0000000..93cd9f0 --- /dev/null +++ b/web/frontend/src/StatsTableEntry.svelte @@ -0,0 +1,37 @@ + + +{#if series == null || series.length == 0} + No data +{:else if series.length == 1 && scope == 'node'} + + {series[0].statistics.min} + + + {series[0].statistics.avg} + + + {series[0].statistics.max} + +{:else} + + + {#each series as s, i} + + + + + + + {/each} +
    {s.id ?? i}{s.statistics.min}{s.statistics.avg}{s.statistics.max}
    + +{/if} diff --git a/web/frontend/src/Status.root.svelte b/web/frontend/src/Status.root.svelte new file mode 100644 index 0000000..26842c8 --- /dev/null +++ b/web/frontend/src/Status.root.svelte @@ -0,0 +1,184 @@ + + + + + {#if $initq.fetching || $mainQuery.fetching} + + {:else if $initq.error} + {$initq.error.message} + {:else} + + {/if} + + + { + console.log('reload...') + + from = new Date(Date.now() - 5 * 60 * 1000) + to = new Date(Date.now()) + + $mainQuery.variables = { ...$mainQuery.variables, from: from, to: to } + $mainQuery.reexecute({ requestPolicy: 'network-only' }) + }} /> + + +{#if $mainQuery.error} + + + {$mainQuery.error.message} + + +{/if} +{#if $initq.data && $mainQuery.data} + {#each $initq.data.clusters.find(c => c.name == cluster).subClusters as subCluster, i} + + + + + + + + + + + + + + + + + + + + + + +
    SubCluster{subCluster.name}
    Allocated Nodes
    ({allocatedNodes[subCluster.name]} / {subCluster.numberOfNodes})
    Flop Rate
    ({flopRate[subCluster.name]} / {subCluster.flopRateSimd * subCluster.numberOfNodes})
    MemBw Rate
    ({memBwRate[subCluster.name]} / {subCluster.memoryBandwidth * subCluster.numberOfNodes})
    + +
    + {#key $mainQuery.data.nodeMetrics} + data.subCluster == subCluster.name))} /> + {/key} +
    +
    + {/each} + +
    +

    Top Users

    + {#key $mainQuery.data} + b.count - a.count).map(({ count }, idx) => ({ count, value: idx }))} + label={(x) => x < $mainQuery.data.topUsers.length ? $mainQuery.data.topUsers[Math.floor(x)].name : '0'} /> + {/key} +
    +
    + + + {#each $mainQuery.data.topUsers.sort((a, b) => b.count - a.count) as { name, count }} + + + + + {/each} +
    NameNumber of Nodes
    {name}{count}
    +
    +
    +

    Top Projects

    + {#key $mainQuery.data} + b.count - a.count).map(({ count }, idx) => ({ count, value: idx }))} + label={(x) => x < $mainQuery.data.topProjects.length ? $mainQuery.data.topProjects[Math.floor(x)].name : '0'} /> + {/key} +
    +
    + + + {#each $mainQuery.data.topProjects.sort((a, b) => b.count - a.count) as { name, count }} + + {/each} +
    NameNumber of Nodes
    {name}{count}
    +
    +
    + +
    +

    Duration Distribution

    + {#key $mainQuery.data.stats} + + {/key} +
    +
    +

    Number of Nodes Distribution

    + {#key $mainQuery.data.stats} + + {/key} +
    +
    +{/if} diff --git a/web/frontend/src/Systems.root.svelte b/web/frontend/src/Systems.root.svelte new file mode 100644 index 0000000..fc2db8b --- /dev/null +++ b/web/frontend/src/Systems.root.svelte @@ -0,0 +1,118 @@ + + + + {#if $initq.error} + {$initq.error.message} + {:else if $initq.fetching} + + {:else} + + + + + + + Metric + + + + + + + Find Node + + + + {/if} + +
    + + + {#if $nodesQuery.error} + {$nodesQuery.error.message} + {:else if $nodesQuery.fetching || $initq.fetching} + + {:else} + h.host.includes(hostnameFilter) && h.metrics.some(m => m.name == selectedMetric && m.metric.scope == 'node')) + .map(h => ({ host: h.host, subCluster: h.subCluster, data: h.metrics.find(m => m.name == selectedMetric && m.metric.scope == 'node') })) + .sort((a, b) => a.host.localeCompare(b.host))}> + +

    {item.host} ({item.subCluster})

    + c.name == cluster)} + subCluster={item.subCluster} /> +
    + {/if} + +
    + diff --git a/web/frontend/src/Tag.svelte b/web/frontend/src/Tag.svelte new file mode 100644 index 0000000..76a94ec --- /dev/null +++ b/web/frontend/src/Tag.svelte @@ -0,0 +1,44 @@ + + + + + + + + {#if tag} + {tag.type}: {tag.name} + {:else} + Loading... + {/if} + diff --git a/web/frontend/src/TagManagement.svelte b/web/frontend/src/TagManagement.svelte new file mode 100644 index 0000000..747b092 --- /dev/null +++ b/web/frontend/src/TagManagement.svelte @@ -0,0 +1,173 @@ + + + + + (isOpen = !isOpen)}> + + Manage Tags + {#if pendingChange !== false} + + {:else} + + {/if} + + + + +
    + + + Search using "type: name". If no tag matches your search, + a button for creating a new one will appear. + + +
      + {#each allTagsFiltered as tag} + + + + + {#if pendingChange === tag.id} + + {:else if job.tags.find(t => t.id == tag.id)} + + {:else} + + {/if} + + + {:else} + + No tags matching + + {/each} +
    +
    + {#if newTagType && newTagName && isNewTag(newTagType, newTagName)} + + {:else if allTagsFiltered.length == 0} + Search Term is not a valid Tag (type: name) + {/if} +
    + + + +
    + + diff --git a/web/frontend/src/User.root.svelte b/web/frontend/src/User.root.svelte new file mode 100644 index 0000000..5a8d14d --- /dev/null +++ b/web/frontend/src/User.root.svelte @@ -0,0 +1,172 @@ + + + + {#if $initq.fetching} + + + + {:else if $initq.error} + + {$initq.error.message} + + {/if} + + + + + + + + { + let filters = [...detail.filters, { user: { eq: user.username } }] + $stats.variables = { filter: filters } + $stats.context.pause = false + $stats.reexecute() + jobList.update(filters) + }} /> + + + jobList.update()} /> + + +
    + + {#if $stats.error} + + {$stats.error.message} + + {:else if !$stats.data} + + + + {:else} + + + + + + + + {#if user.name} + + + + + {/if} + {#if user.email} + + + + + {/if} + + + + + + + + + + + + + + + + + +
    Username{scrambleNames ? scramble(user.username) : user.username}
    Name{scrambleNames ? scramble(user.name) : user.name}
    Email{user.email}
    Total Jobs{$stats.data.jobsStatistics[0].totalJobs}
    Short Jobs{$stats.data.jobsStatistics[0].shortJobs}
    Total Walltime{$stats.data.jobsStatistics[0].totalWalltime}
    Total Core Hours{$stats.data.jobsStatistics[0].totalCoreHours}
    + +
    + Walltime + {#key $stats.data.jobsStatistics[0].histDuration} + + {/key} +
    +
    + Number of Nodes + {#key $stats.data.jobsStatistics[0].histNumNodes} + + {/key} +
    + {/if} +
    +
    + + + + + + + + + \ No newline at end of file diff --git a/web/frontend/src/Zoom.svelte b/web/frontend/src/Zoom.svelte new file mode 100644 index 0000000..ae842fc --- /dev/null +++ b/web/frontend/src/Zoom.svelte @@ -0,0 +1,60 @@ + + +
    + + + + + + Window Size: + + + ({windowSize}%) + + + + Window Position: + + + +
    diff --git a/web/frontend/src/analysis.entrypoint.js b/web/frontend/src/analysis.entrypoint.js new file mode 100644 index 0000000..d889144 --- /dev/null +++ b/web/frontend/src/analysis.entrypoint.js @@ -0,0 +1,14 @@ +import {} from './header.entrypoint.js' +import Analysis from './Analysis.root.svelte' + +filterPresets.cluster = cluster + +new Analysis({ + target: document.getElementById('svelte-app'), + props: { + filterPresets: filterPresets + }, + context: new Map([ + ['cc-config', clusterCockpitConfig] + ]) +}) diff --git a/web/frontend/src/cache-exchange.js b/web/frontend/src/cache-exchange.js new file mode 100644 index 0000000..c52843e --- /dev/null +++ b/web/frontend/src/cache-exchange.js @@ -0,0 +1,72 @@ +import { filter, map, merge, pipe, share, tap } from 'wonka'; + +/* + * Alternative to the default cacheExchange from urql (A GraphQL client). + * Mutations do not invalidate cached results, so in that regard, this + * implementation is inferior to the default one. Most people should probably + * use the standard cacheExchange and @urql/exchange-request-policy. This cache + * also ignores the 'network-and-cache' request policy. + * + * Options: + * ttl: How long queries are allowed to be cached (in milliseconds) + * maxSize: Max number of results cached. The oldest queries are removed first. 
+ */ +export const expiringCacheExchange = ({ ttl, maxSize }) => ({ forward }) => { + const cache = new Map(); + const isCached = (operation) => { + if (operation.kind !== 'query' || operation.context.requestPolicy === 'network-only') + return false; + + if (!cache.has(operation.key)) + return false; + + let cacheEntry = cache.get(operation.key); + return Date.now() < cacheEntry.expiresAt; + }; + + return operations => { + let shared = share(operations); + return merge([ + pipe( + shared, + filter(operation => isCached(operation)), + map(operation => cache.get(operation.key).response) + ), + pipe( + shared, + filter(operation => !isCached(operation)), + forward, + tap(response => { + if (!response.operation || response.operation.kind !== 'query') + return; + + if (!response.data) + return; + + let now = Date.now(); + for (let cacheEntry of cache.values()) { + if (cacheEntry.expiresAt < now) { + cache.delete(cacheEntry.response.operation.key); + } + } + + if (cache.size > maxSize) { + let n = cache.size - maxSize + 1; + for (let key of cache.keys()) { + if (n-- == 0) + break; + + cache.delete(key); + } + } + + cache.set(response.operation.key, { + expiresAt: now + ttl, + response: response + }); + }) + ) + ]); + }; +}; + diff --git a/web/frontend/src/filters/Cluster.svelte b/web/frontend/src/filters/Cluster.svelte new file mode 100644 index 0000000..83c4d91 --- /dev/null +++ b/web/frontend/src/filters/Cluster.svelte @@ -0,0 +1,77 @@ + + + (isOpen = !isOpen)}> + + Select Cluster & Slurm Partition + + + {#if $initialized} +

    Cluster

    + + (pendingCluster = null, pendingPartition = null)}> + Any Cluster + + {#each clusters as cluster} + (pendingCluster = cluster.name, pendingPartition = null)}> + {cluster.name} + + {/each} + + {/if} + {#if $initialized && pendingCluster != null} +
    +

Partition

    + + (pendingPartition = null)}> + Any Partition + + {#each clusters.find(c => c.name == pendingCluster).partitions as partition} + (pendingPartition = partition)}> + {partition} + + {/each} + + {/if} +
    + + + + + +
    diff --git a/web/frontend/src/filters/DoubleRangeSlider.svelte b/web/frontend/src/filters/DoubleRangeSlider.svelte new file mode 100644 index 0000000..aca460a --- /dev/null +++ b/web/frontend/src/filters/DoubleRangeSlider.svelte @@ -0,0 +1,302 @@ + + + + +
    +
    + inputChanged(0, e)} /> + + Full Range: {min} - {max} + + inputChanged(1, e)} /> +
    +
    +
    +
    +
    +
    +
    + + diff --git a/web/frontend/src/filters/Duration.svelte b/web/frontend/src/filters/Duration.svelte new file mode 100644 index 0000000..b482b9c --- /dev/null +++ b/web/frontend/src/filters/Duration.svelte @@ -0,0 +1,95 @@ + + + (isOpen = !isOpen)}> + + Select Start Time + + +

    Between

    + + +
    + +
    +
    h
    +
    +
    + + +
    + +
    +
    m
    +
    +
    + +
    +

    and

    + + +
    + +
    +
    h
    +
    +
    + + +
    + +
    +
    m
    +
    +
    + +
    +
    + + + + + +
    diff --git a/web/frontend/src/filters/Filters.svelte b/web/frontend/src/filters/Filters.svelte new file mode 100644 index 0000000..410f445 --- /dev/null +++ b/web/frontend/src/filters/Filters.svelte @@ -0,0 +1,323 @@ + + + + + + + + + Filters + + + + Manage Filters + + {#if menuText} + {menuText} + + {/if} + (isClusterOpen = true)}> + Cluster/Partition + + (isJobStatesOpen = true)}> + Job States + + (isStartTimeOpen = true)}> + Start Time + + (isDurationOpen = true)}> + Duration + + (isTagsOpen = true)}> + Tags + + (isResourcesOpen = true)}> + Nodes/Accelerators + + (isStatsOpen = true)}> + (isStatsOpen = true)}/> Statistics + + {#if startTimeQuickSelect} + + Start Time Qick Selection + {#each [ + { text: 'Last 6hrs', seconds: 6*60*60 }, + { text: 'Last 12hrs', seconds: 12*60*60 }, + { text: 'Last 24hrs', seconds: 24*60*60 }, + { text: 'Last 48hrs', seconds: 48*60*60 }, + { text: 'Last 7 days', seconds: 7*24*60*60 }, + { text: 'Last 30 days', seconds: 30*24*60*60 } + ] as {text, seconds}} + { + filters.startTime.from = (new Date(Date.now() - seconds * 1000)).toISOString() + filters.startTime.to = (new Date(Date.now())).toISOString() + update() + }}> + {text} + + {/each} + {/if} + + + + + + {#if filters.cluster} + (isClusterOpen = true)}> + {filters.cluster} + {#if filters.partition} + ({filters.partition}) + {/if} + + {/if} + + {#if filters.states.length != allJobStates.length} + (isJobStatesOpen = true)}> + {filters.states.join(', ')} + + {/if} + + {#if filters.startTime.from || filters.startTime.to} + (isStartTimeOpen = true)}> + {new Date(filters.startTime.from).toLocaleString()} - {new Date(filters.startTime.to).toLocaleString()} + + {/if} + + {#if filters.duration.from || filters.duration.to} + (isDurationOpen = true)}> + {Math.floor(filters.duration.from / 3600)}h:{Math.floor(filters.duration.from % 3600 / 60)}m + - + {Math.floor(filters.duration.to / 3600)}h:{Math.floor(filters.duration.to % 3600 / 60)}m + + {/if} + + {#if filters.tags.length != 0} + 
(isTagsOpen = true)}> + {#each filters.tags as tagId} + + {/each} + + {/if} + + {#if filters.numNodes.from != null || filters.numNodes.to != null} + (isResourcesOpen = true)}> + Nodes: {filters.numNodes.from} - {filters.numNodes.to} + + {/if} + + {#if filters.stats.length > 0} + (isStatsOpen = true)}> + {filters.stats.map(stat => `${stat.text}: ${stat.from} - ${stat.to}`).join(', ')} + + {/if} + + + + update()} /> + + update()} /> + + update()} /> + + update()} /> + + update()} /> + + update()} /> + + update()} /> + + diff --git a/web/frontend/src/filters/InfoBox.svelte b/web/frontend/src/filters/InfoBox.svelte new file mode 100644 index 0000000..58fc8a5 --- /dev/null +++ b/web/frontend/src/filters/InfoBox.svelte @@ -0,0 +1,11 @@ + + + diff --git a/web/frontend/src/filters/JobStates.svelte b/web/frontend/src/filters/JobStates.svelte new file mode 100644 index 0000000..4e5db2e --- /dev/null +++ b/web/frontend/src/filters/JobStates.svelte @@ -0,0 +1,47 @@ + + + + (isOpen = !isOpen)}> + + Select Job States + + + + {#each allJobStates as state} + + + {state} + + {/each} + + + + + + + + diff --git a/web/frontend/src/filters/Resources.svelte b/web/frontend/src/filters/Resources.svelte new file mode 100644 index 0000000..4f895b5 --- /dev/null +++ b/web/frontend/src/filters/Resources.svelte @@ -0,0 +1,99 @@ + + + (isOpen = !isOpen)}> + + Select Number of Nodes, HWThreads and Accelerators + + +

    Number of Nodes

    + (pendingNumNodes = { from: detail[0], to: detail[1] })} + min={minNumNodes} max={maxNumNodes} + firstSlider={pendingNumNodes.from} secondSlider={pendingNumNodes.to} /> + + {#if maxNumAccelerators != null && maxNumAccelerators > 1} + (pendingNumAccelerators = { from: detail[0], to: detail[1] })} + min={minNumAccelerators} max={maxNumAccelerators} + firstSlider={pendingNumAccelerators.from} secondSlider={pendingNumAccelerators.to} /> + {/if} +
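The Resources modal above keeps a pending `{ from, to }` range per resource; the double-range slider reports both handles in its event `detail` (`detail[0]`/`detail[1]`), and the badge in Filters.svelte only renders while a bound is set. A minimal sketch of that state handling — `sliderChanged` and `numNodesBadge` are hypothetical helper names, the real components inline this logic:

```javascript
// detail = [from, to], as emitted by the double-range slider component;
// the pending state is replaced wholesale so Svelte reactivity fires.
function sliderChanged(detail) {
  return { from: detail[0], to: detail[1] }
}

// Mirrors the badge guard in Filters.svelte: only shown when a bound is set.
function numNodesBadge(numNodes) {
  if (numNodes.from == null && numNodes.to == null)
    return null // no active filter, no badge
  return `Nodes: ${numNodes.from} - ${numNodes.to}`
}
```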
    diff --git a/web/frontend/src/filters/StartTime.svelte b/web/frontend/src/filters/StartTime.svelte new file mode 100644 index 0000000..c89851d --- /dev/null +++ b/web/frontend/src/filters/StartTime.svelte @@ -0,0 +1,90 @@ + + + (isOpen = !isOpen)}> + + Select Start Time + + +

    From


    To

    diff --git a/web/frontend/src/filters/Stats.svelte b/web/frontend/src/filters/Stats.svelte new file mode 100644 index 0000000..e7b658d --- /dev/null +++ b/web/frontend/src/filters/Stats.svelte @@ -0,0 +1,113 @@ + + + (isOpen = !isOpen)}> + + Filter based on statistics (of non-running jobs) + + + {#each statistics as stat} +

    {stat.text}

    + (stat.from = detail[0], stat.to = detail[1], stat.enabled = true)} + min={0} max={stat.peak} + firstSlider={stat.from} secondSlider={stat.to} /> + {/each} +
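The statistics filter above stores one entry per metric with `text`, `from`, `to`, and `enabled` fields, bounded by the metric's `peak`. Filters.svelte serializes the active entries into the badge text; extracted as a pure function (the name `statsBadge` is hypothetical, the markup inlines this):

```javascript
// Renders the statistics-filter badge the way Filters.svelte does:
// "<label>: <from> - <to>" per stat, comma-separated, and nothing
// at all when no statistics filter is set ({#if filters.stats.length > 0}).
function statsBadge(stats) {
  if (stats.length === 0)
    return null
  return stats.map(stat => `${stat.text}: ${stat.from} - ${stat.to}`).join(', ')
}
```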
    diff --git a/web/frontend/src/filters/Tags.svelte b/web/frontend/src/filters/Tags.svelte new file mode 100644 index 0000000..b5a145a --- /dev/null +++ b/web/frontend/src/filters/Tags.svelte @@ -0,0 +1,67 @@ + + + (isOpen = !isOpen)}> + + Select Tags + + + +
    + + {#if $initialized} + {#each fuzzySearchTags(searchTerm, allTags) as tag (tag)} + + {#if pendingTags.includes(tag.id)} + + {:else} + + {/if} + + + + {:else} + No Tags + {/each} + {/if} + +
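The tag selector above narrows the tag list with `fuzzySearchTags(searchTerm, allTags)`, whose full implementation lives in `web/frontend/src/utils.js` later in this diff. A condensed, self-contained version of that matching logic: a bare term matches either tag type or name, while a `type: name` term must match both:

```javascript
// Condensed fuzzySearchTags from web/frontend/src/utils.js.
// Case-insensitive substring match on tag.type and tag.name.
function fuzzyMatch(term, string) {
  return string.toLowerCase().includes(term)
}

function fuzzySearchTags(term, tags) {
  if (!tags) return []
  const parts = term.split(':').map(s => s.trim()).filter(s => s.length > 0)
  let results
  if (parts.length === 0)
    results = tags.slice() // empty search: show everything
  else if (parts.length === 1)
    results = tags.filter(tag =>
      fuzzyMatch(parts[0], tag.type) || fuzzyMatch(parts[0], tag.name))
  else // "type: name" form: both halves must match
    results = tags.filter(tag =>
      fuzzyMatch(parts[0], tag.type) && fuzzyMatch(parts[1], tag.name))
  // stable ordering: by type, then name (the original uses manual compares)
  return results.sort((a, b) =>
    a.type.localeCompare(b.type) || a.name.localeCompare(b.name))
}
```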
    diff --git a/web/frontend/src/filters/TimeSelection.svelte b/web/frontend/src/filters/TimeSelection.svelte new file mode 100644 index 0000000..7d7cca4 --- /dev/null +++ b/web/frontend/src/filters/TimeSelection.svelte @@ -0,0 +1,80 @@ + + + + + + + {#if timeRange == -1} + from + updateExplicitTimeRange('from', event)}> + to + updateExplicitTimeRange('to', event)}> + {/if} + diff --git a/web/frontend/src/filters/UserOrProject.svelte b/web/frontend/src/filters/UserOrProject.svelte new file mode 100644 index 0000000..7f9f183 --- /dev/null +++ b/web/frontend/src/filters/UserOrProject.svelte @@ -0,0 +1,51 @@ + + + + + termChanged()} on:keyup={(event) => termChanged(event.key == 'Enter' ? 0 : throttle)} + placeholder={mode == 'user' ? 'filter username...' : 'filter project...'} /> + diff --git a/web/frontend/src/header.entrypoint.js b/web/frontend/src/header.entrypoint.js new file mode 100644 index 0000000..25ff134 --- /dev/null +++ b/web/frontend/src/header.entrypoint.js @@ -0,0 +1,10 @@ +import Header from './Header.svelte' + +const headerDomTarget = document.getElementById('svelte-header') + +if (headerDomTarget != null) { + new Header({ + target: headerDomTarget, + props: { ...header }, + }) +} diff --git a/web/frontend/src/job.entrypoint.js b/web/frontend/src/job.entrypoint.js new file mode 100644 index 0000000..f7bceb8 --- /dev/null +++ b/web/frontend/src/job.entrypoint.js @@ -0,0 +1,12 @@ +import {} from './header.entrypoint.js' +import Job from './Job.root.svelte' + +new Job({ + target: document.getElementById('svelte-app'), + props: { + dbid: jobInfos.id + }, + context: new Map([ + ['cc-config', clusterCockpitConfig] + ]) +}) diff --git a/web/frontend/src/joblist/JobInfo.svelte b/web/frontend/src/joblist/JobInfo.svelte new file mode 100644 index 0000000..58472e5 --- /dev/null +++ b/web/frontend/src/joblist/JobInfo.svelte @@ -0,0 +1,88 @@ + + + + +

    + {job.jobId} ({job.cluster}) + {#if job.metaData?.jobName} +
    + {job.metaData.jobName} + {/if} + {#if job.arrayJobId} + Array Job: #{job.arrayJobId} + {/if} +


    + + + {scrambleNames ? scramble(job.user) : job.user} + + {#if job.userData && job.userData.name} + ({scrambleNames ? scramble(job.userData.name) : job.userData.name}) + {/if} + {#if job.project && job.project != 'no project'} +
    + {job.project} + {/if} +


    + {job.numNodes} + {#if job.exclusive != 1} + (shared) + {/if} + {#if job.numAcc > 0} + , {job.numAcc} + {/if} + {#if job.numHWThreads > 0} + , {job.numHWThreads} + {/if} +


    + Start: {(new Date(job.startTime)).toLocaleString()} +
    + Duration: {formatDuration(job.duration)} + {#if job.state == 'running'} + running + {:else if job.state != 'completed'} + {job.state} + {/if} + {#if job.walltime} +
    + Walltime: {formatDuration(job.walltime)} + {/if} +


    + {#each jobTags as tag} + + {/each} +

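JobInfo.svelte above renders `Duration: {formatDuration(job.duration)}`; the helper's implementation is not part of this excerpt. A minimal stand-in that matches the `h`/`m` split used by the duration badge in Filters.svelte (`Math.floor(from / 3600)`h:`Math.floor(from % 3600 / 60)`m) — treat it as an assumption about the display format, not the shipped code:

```javascript
// Minimal stand-in for the formatDuration helper used by JobInfo.svelte.
// Takes a job duration in seconds and renders it as "<hours>h:<minutes>m",
// zero-padding the minutes for alignment (the padding is an assumption).
function formatDuration(seconds) {
  const h = Math.floor(seconds / 3600)
  const m = Math.floor((seconds % 3600) / 60)
  return `${h}h:${m.toString().padStart(2, '0')}m`
}
```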
    diff --git a/web/frontend/src/joblist/JobList.svelte b/web/frontend/src/joblist/JobList.svelte new file mode 100644 index 0000000..8cdca26 --- /dev/null +++ b/web/frontend/src/joblist/JobList.svelte @@ -0,0 +1,190 @@ + + + + +
    + + + + + {#each metrics as metric (metric)} + + {/each} + + + + {#if $jobs.error} + + + + {:else if $jobs.fetching || !$jobs.data} + + + + {:else if $jobs.data && $initialized} + {#each $jobs.data.jobs.items as job (job)} + + {:else} + + + + {/each} + {/if} + +
    + Job Info + + {metric} + {#if $initialized} + ({clusters + .map(cluster => cluster.metricConfig.find(m => m.name == metric)) + .filter(m => m != null).map(m => m.unit) + .reduce((arr, unit) => arr.includes(unit) ? arr : [...arr, unit], []) + .join(', ')}) + {/if} +

    {$jobs.error.message}

    + No jobs found +
    + + { + if (detail.itemsPerPage != itemsPerPage) { + itemsPerPage = detail.itemsPerPage + updateConfiguration({ + name: "plot_list_jobsPerPage", + value: itemsPerPage.toString() + }).then(res => { + if (res.error) + console.error(res.error); + }) + } + + paging = { itemsPerPage: detail.itemsPerPage, page: detail.page } + }} /> + + diff --git a/web/frontend/src/joblist/Pagination.svelte b/web/frontend/src/joblist/Pagination.svelte new file mode 100644 index 0000000..f7b7453 --- /dev/null +++ b/web/frontend/src/joblist/Pagination.svelte @@ -0,0 +1,230 @@ + + +
    + + { (page - 1) * itemsPerPage } - { Math.min((page - 1) * itemsPerPage + itemsPerPage, totalItems) } of { totalItems } { itemText } + +
    + {#if !backButtonDisabled} + + + {/if} + {#if !nextButtonDisabled} + + {/if} +
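Pagination.svelte above renders its range label as `{ (page - 1) * itemsPerPage } - { Math.min((page - 1) * itemsPerPage + itemsPerPage, totalItems) } of { totalItems }`. The same computation extracted as a pure helper (`pageRange` is a hypothetical name; the component inlines it in its markup):

```javascript
// Visible-range label of Pagination.svelte as a pure function:
// first index on the page, last index (clamped to the total), and total.
function pageRange(page, itemsPerPage, totalItems) {
  const first = (page - 1) * itemsPerPage
  const last = Math.min(first + itemsPerPage, totalItems)
  return `${first} - ${last} of ${totalItems}`
}
```

The clamp via `Math.min` keeps the label honest on the final, partially filled page.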
    + + + + diff --git a/web/frontend/src/joblist/Refresher.svelte b/web/frontend/src/joblist/Refresher.svelte new file mode 100644 index 0000000..2587711 --- /dev/null +++ b/web/frontend/src/joblist/Refresher.svelte @@ -0,0 +1,43 @@ + + + + + + + \ No newline at end of file diff --git a/web/frontend/src/joblist/Row.svelte b/web/frontend/src/joblist/Row.svelte new file mode 100644 index 0000000..b3a3655 --- /dev/null +++ b/web/frontend/src/joblist/Row.svelte @@ -0,0 +1,101 @@ + + + + + + + + + {#if job.monitoringStatus == 0 || job.monitoringStatus == 2} + + Not monitored or archiving failed + + {:else if $metricsQuery.fetching} + + + + {:else if $metricsQuery.error} + + + {$metricsQuery.error.message.length > 500 + ? $metricsQuery.error.message.substring(0, 499)+'...' + : $metricsQuery.error.message} + + + {:else} + {#each sortAndSelectScope($metricsQuery.data.jobMetrics) as metric, i (metric || i)} + + {#if metric != null} + + {:else} + Missing Data + {/if} + + {/each} + {/if} + diff --git a/web/frontend/src/joblist/SortSelection.svelte b/web/frontend/src/joblist/SortSelection.svelte new file mode 100644 index 0000000..5941964 --- /dev/null +++ b/web/frontend/src/joblist/SortSelection.svelte @@ -0,0 +1,71 @@ + + + + + { isOpen = !isOpen }}> + + Sort rows + + + + {#each sortableColumns as col, i (col)} + + + + {col.text} + + {/each} + + + + + + + + \ No newline at end of file diff --git a/web/frontend/src/jobs.entrypoint.js b/web/frontend/src/jobs.entrypoint.js new file mode 100644 index 0000000..1763a8b --- /dev/null +++ b/web/frontend/src/jobs.entrypoint.js @@ -0,0 +1,12 @@ +import {} from './header.entrypoint.js' +import Jobs from './Jobs.root.svelte' + +new Jobs({ + target: document.getElementById('svelte-app'), + props: { + filterPresets: filterPresets + }, + context: new Map([ + ['cc-config', clusterCockpitConfig] + ]) +}) diff --git a/web/frontend/src/list.entrypoint.js b/web/frontend/src/list.entrypoint.js new file mode 100644 index 0000000..21c8f5d --- 
/dev/null +++ b/web/frontend/src/list.entrypoint.js @@ -0,0 +1,13 @@ +import {} from './header.entrypoint.js' +import List from './List.root.svelte' + +new List({ + target: document.getElementById('svelte-app'), + props: { + filterPresets: filterPresets, + type: listType, + }, + context: new Map([ + ['cc-config', clusterCockpitConfig] + ]) +}) diff --git a/web/frontend/src/node.entrypoint.js b/web/frontend/src/node.entrypoint.js new file mode 100644 index 0000000..e6e6f9a --- /dev/null +++ b/web/frontend/src/node.entrypoint.js @@ -0,0 +1,15 @@ +import {} from './header.entrypoint.js' +import Node from './Node.root.svelte' + +new Node({ + target: document.getElementById('svelte-app'), + props: { + cluster: infos.cluster, + hostname: infos.hostname, + from: infos.from, + to: infos.to + }, + context: new Map([ + ['cc-config', clusterCockpitConfig] + ]) +}) diff --git a/web/frontend/src/plots/Histogram.svelte b/web/frontend/src/plots/Histogram.svelte new file mode 100644 index 0000000..c00de12 --- /dev/null +++ b/web/frontend/src/plots/Histogram.svelte @@ -0,0 +1,210 @@ + + +
    (infoText = '')}> + {infoText} + +
    + + + + + + \ No newline at end of file diff --git a/web/frontend/src/plots/MetricPlot.svelte b/web/frontend/src/plots/MetricPlot.svelte new file mode 100644 index 0000000..d47d813 --- /dev/null +++ b/web/frontend/src/plots/MetricPlot.svelte @@ -0,0 +1,306 @@ + + + + +
    + diff --git a/web/frontend/src/plots/Polar.svelte b/web/frontend/src/plots/Polar.svelte new file mode 100644 index 0000000..6731d8a --- /dev/null +++ b/web/frontend/src/plots/Polar.svelte @@ -0,0 +1,190 @@ +
    + + diff --git a/web/frontend/src/plots/Roofline.svelte b/web/frontend/src/plots/Roofline.svelte new file mode 100644 index 0000000..d385f0d --- /dev/null +++ b/web/frontend/src/plots/Roofline.svelte @@ -0,0 +1,355 @@ +
    + + + + diff --git a/web/frontend/src/plots/Scatter.svelte b/web/frontend/src/plots/Scatter.svelte new file mode 100644 index 0000000..f3c955c --- /dev/null +++ b/web/frontend/src/plots/Scatter.svelte @@ -0,0 +1,171 @@ +
    + + + + diff --git a/web/frontend/src/status.entrypoint.js b/web/frontend/src/status.entrypoint.js new file mode 100644 index 0000000..39c374b --- /dev/null +++ b/web/frontend/src/status.entrypoint.js @@ -0,0 +1,12 @@ +import {} from './header.entrypoint.js' +import Status from './Status.root.svelte' + +new Status({ + target: document.getElementById('svelte-app'), + props: { + cluster: infos.cluster, + }, + context: new Map([ + ['cc-config', clusterCockpitConfig] + ]) +}) diff --git a/web/frontend/src/systems.entrypoint.js b/web/frontend/src/systems.entrypoint.js new file mode 100644 index 0000000..846bd36 --- /dev/null +++ b/web/frontend/src/systems.entrypoint.js @@ -0,0 +1,14 @@ +import {} from './header.entrypoint.js' +import Systems from './Systems.root.svelte' + +new Systems({ + target: document.getElementById('svelte-app'), + props: { + cluster: infos.cluster, + from: infos.from, + to: infos.to + }, + context: new Map([ + ['cc-config', clusterCockpitConfig] + ]) +}) diff --git a/web/frontend/src/user.entrypoint.js b/web/frontend/src/user.entrypoint.js new file mode 100644 index 0000000..0bff82a --- /dev/null +++ b/web/frontend/src/user.entrypoint.js @@ -0,0 +1,13 @@ +import {} from './header.entrypoint.js' +import User from './User.root.svelte' + +new User({ + target: document.getElementById('svelte-app'), + props: { + filterPresets: filterPresets, + user: userInfos + }, + context: new Map([ + ['cc-config', clusterCockpitConfig] + ]) +}) diff --git a/web/frontend/src/utils.js b/web/frontend/src/utils.js new file mode 100644 index 0000000..decfdc6 --- /dev/null +++ b/web/frontend/src/utils.js @@ -0,0 +1,288 @@ +import { expiringCacheExchange } from './cache-exchange.js' +import { initClient } from '@urql/svelte' +import { setContext, getContext, hasContext, onDestroy, tick } from 'svelte' +import { dedupExchange, fetchExchange } from '@urql/core' +import { readable } from 'svelte/store' + +/* + * Call this function only at component initialization time! 
+ * + * It does several things: + * - Initialize the GraphQL client + * - Creates a readable store 'initialization' which indicates when the values below can be used. + * - Adds 'tags' to the context (list of all tags) + * - Adds 'clusters' to the context (object with cluster names as keys) + * - Adds 'metrics' to the context, a function that takes a cluster and metric name and returns the MetricConfig (or undefined) + */ +export function init(extraInitQuery = '') { + const jwt = hasContext('jwt') + ? getContext('jwt') + : getContext('cc-config')['jwt'] + + const client = initClient({ + url: `${window.location.origin}/query`, + fetchOptions: jwt != null + ? { headers: { 'Authorization': `Bearer ${jwt}` } } : {}, + exchanges: [ + dedupExchange, + expiringCacheExchange({ + ttl: 5 * 60 * 1000, + maxSize: 150, + }), + fetchExchange + ] + }) + + const query = client.query(`query { + clusters { + name, + metricConfig { + name, unit, peak, + normal, caution, alert, + timestep, scope, + aggregation, + subClusters { name, peak, normal, caution, alert } + } + filterRanges { + duration { from, to } + numNodes { from, to } + startTime { from, to } + } + partitions + subClusters { + name, processorType + socketsPerNode + coresPerSocket + threadsPerCore + flopRateScalar + flopRateSimd + memoryBandwidth + numberOfNodes + topology { + node, socket, core + accelerators { id } + } + } + } + tags { id, name, type } + ${extraInitQuery} + }`).toPromise() + + let state = { fetching: true, error: null, data: null } + let subscribers = [] + const subscribe = (callback) => { + callback(state) + subscribers.push(callback) + return () => { + subscribers = subscribers.filter(cb => cb != callback) + } + }; + + const tags = [], clusters = [] + setContext('tags', tags) + setContext('clusters', clusters) + setContext('metrics', (cluster, metric) => { + if (typeof cluster !== 'object') + cluster = clusters.find(c => c.name == cluster) + + return cluster.metricConfig.find(m => m.name == metric) + 
}) + setContext('on-init', callback => state.fetching + ? subscribers.push(callback) + : callback(state)) + setContext('initialized', readable(false, (set) => + subscribers.push(() => set(true)))) + + query.then(({ error, data }) => { + state.fetching = false + if (error != null) { + console.error(error) + state.error = error + tick().then(() => subscribers.forEach(cb => cb(state))) + return + } + + for (let tag of data.tags) + tags.push(tag) + + for (let cluster of data.clusters) + clusters.push(cluster) + + state.data = data + tick().then(() => subscribers.forEach(cb => cb(state))) + }) + + return { + query: { subscribe }, + tags, + clusters, + } +} + +export function formatNumber(x) { + let suffix = '' + if (x >= 1000000000) { + x /= 1000000 + suffix = 'G' + } else if (x >= 1000000) { + x /= 1000000 + suffix = 'M' + } else if (x >= 1000) { + x /= 1000 + suffix = 'k' + } + + return `${(Math.round(x * 100) / 100)}${suffix}` +} + +// Use https://developer.mozilla.org/en-US/docs/Web/API/structuredClone instead? 
+export function deepCopy(x) { + return JSON.parse(JSON.stringify(x)) +} + +function fuzzyMatch(term, string) { + return string.toLowerCase().includes(term) +} + +export function fuzzySearchTags(term, tags) { + if (!tags) + return [] + + let results = [] + let termparts = term.split(':').map(s => s.trim()).filter(s => s.length > 0) + + if (termparts.length == 0) { + results = tags.slice() + } else if (termparts.length == 1) { + for (let tag of tags) + if (fuzzyMatch(termparts[0], tag.type) + || fuzzyMatch(termparts[0], tag.name)) + results.push(tag) + } else if (termparts.length == 2) { + for (let tag of tags) + if (fuzzyMatch(termparts[0], tag.type) + && fuzzyMatch(termparts[1], tag.name)) + results.push(tag) + } + + return results.sort((a, b) => { + if (a.type < b.type) return -1 + if (a.type > b.type) return 1 + if (a.name < b.name) return -1 + if (a.name > b.name) return 1 + return 0 + }) +} + +export function groupByScope(jobMetrics) { + let metrics = new Map() + for (let metric of jobMetrics) { + if (metrics.has(metric.name)) + metrics.get(metric.name).push(metric) + else + metrics.set(metric.name, [metric]) + } + + return [...metrics.values()].sort((a, b) => a[0].name.localeCompare(b[0].name)) +} + +const scopeGranularity = { + "node": 10, + "socket": 5, + "accelerator": 5, + "core": 2, + "hwthread": 1 +}; + +export function maxScope(scopes) { + console.assert(scopes.length > 0 && scopes.every(x => scopeGranularity[x] != null)) + let sm = scopes[0], gran = scopeGranularity[scopes[0]] + for (let scope of scopes) { + let otherGran = scopeGranularity[scope] + if (otherGran > gran) { + sm = scope + gran = otherGran + } + } + return sm +} + +export function minScope(scopes) { + console.assert(scopes.length > 0 && scopes.every(x => scopeGranularity[x] != null)) + let sm = scopes[0], gran = scopeGranularity[scopes[0]] + for (let scope of scopes) { + let otherGran = scopeGranularity[scope] + if (otherGran < gran) { + sm = scope + gran = otherGran + } + } + return sm 
+} + +export async function fetchMetrics(job, metrics, scopes) { + if (job.monitoringStatus == 0) + return null + + let query = [] + if (metrics != null) { + for (let metric of metrics) { + query.push(`metric=${metric}`) + } + } + if (scopes != null) { + for (let scope of scopes) { + query.push(`scope=${scope}`) + } + } + + try { + let res = await fetch(`/api/jobs/metrics/${job.id}${(query.length > 0) ? '?' : ''}${query.join('&')}`) + if (res.status != 200) { + return { error: { status: res.status, message: await res.text() } } + } + + return await res.json() + } catch (e) { + return { error: e } + } +} + +export function fetchMetricsStore() { + let set = null + return [ + readable({ fetching: true, error: null, data: null }, (_set) => { set = _set }), + (job, metrics, scopes) => fetchMetrics(job, metrics, scopes).then(res => set({ + fetching: false, + error: res.error, + data: res.data + })) + ] +} + +export function stickyHeader(datatableHeaderSelector, updatePading) { + const header = document.querySelector('header > nav.navbar') + if (!header) + return + + let ticking = false, datatableHeader = null + const onscroll = event => { + if (ticking) + return + + ticking = true + window.requestAnimationFrame(() => { + ticking = false + if (!datatableHeader) + datatableHeader = document.querySelector(datatableHeaderSelector) + + const top = datatableHeader.getBoundingClientRect().top + updatePading(top < header.clientHeight + ? (header.clientHeight - top) + 10 + : 10) + }) + } + + document.addEventListener('scroll', onscroll) + onDestroy(() => document.removeEventListener('scroll', onscroll)) +} diff --git a/web/frontend/yarn.lock b/web/frontend/yarn.lock new file mode 100644 index 0000000..f80e078 --- /dev/null +++ b/web/frontend/yarn.lock @@ -0,0 +1,493 @@ +# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY. 
+# yarn lockfile v1 + + +"@babel/code-frame@^7.10.4": + version "7.16.0" + resolved "https://registry.yarnpkg.com/@babel/code-frame/-/code-frame-7.16.0.tgz#0dfc80309beec8411e65e706461c408b0bb9b431" + integrity sha512-IF4EOMEV+bfYwOmNxGzSnjR2EmQod7f1UXOpZM3l4i4o4QNwzjtJAu/HxdjHq0aYBvdqMuQEY1eg0nqW9ZPORA== + dependencies: + "@babel/highlight" "^7.16.0" + +"@babel/helper-validator-identifier@^7.15.7": + version "7.15.7" + resolved "https://registry.yarnpkg.com/@babel/helper-validator-identifier/-/helper-validator-identifier-7.15.7.tgz#220df993bfe904a4a6b02ab4f3385a5ebf6e2389" + integrity sha512-K4JvCtQqad9OY2+yTU8w+E82ywk/fe+ELNlt1G8z3bVGlZfn/hOcQQsUhGhW/N+tb3fxK800wLtKOE/aM0m72w== + +"@babel/highlight@^7.16.0": + version "7.16.0" + resolved "https://registry.yarnpkg.com/@babel/highlight/-/highlight-7.16.0.tgz#6ceb32b2ca4b8f5f361fb7fd821e3fddf4a1725a" + integrity sha512-t8MH41kUQylBtu2+4IQA3atqevA2lRgqA2wyVB/YiWmsDSuylZZuXOUy9ric30hfzauEFfdsuk/eXTRrGrfd0g== + dependencies: + "@babel/helper-validator-identifier" "^7.15.7" + chalk "^2.0.0" + js-tokens "^4.0.0" + +"@graphql-typed-document-node/core@^3.1.0": + version "3.1.1" + resolved "https://registry.yarnpkg.com/@graphql-typed-document-node/core/-/core-3.1.1.tgz#076d78ce99822258cf813ecc1e7fa460fa74d052" + integrity sha512-NQ17ii0rK1b34VZonlmT2QMJFI70m0TRwbknO/ihlbatXyaktDhN/98vBiUU6kNBPljqGqyIrl2T4nY2RpFANg== + +"@popperjs/core@^2.9.2": + version "2.11.0" + resolved "https://registry.yarnpkg.com/@popperjs/core/-/core-2.11.0.tgz#6734f8ebc106a0860dff7f92bf90df193f0935d7" + integrity sha512-zrsUxjLOKAzdewIDRWy9nsV1GQsKBCWaGwsZQlCgr6/q+vjyZhFgqedLfFBuI9anTPEUT4APq9Mu0SZBTzIcGQ== + +"@rollup/plugin-commonjs@^17.0.0": + version "17.1.0" + resolved "https://registry.yarnpkg.com/@rollup/plugin-commonjs/-/plugin-commonjs-17.1.0.tgz#757ec88737dffa8aa913eb392fade2e45aef2a2d" + integrity sha512-PoMdXCw0ZyvjpCMT5aV4nkL0QywxP29sODQsSGeDpr/oI49Qq9tRtAsb/LbYbDzFlOydVEqHmmZWFtXJEAX9ew== + dependencies: + "@rollup/pluginutils" "^3.1.0" 
+ commondir "^1.0.1" + estree-walker "^2.0.1" + glob "^7.1.6" + is-reference "^1.2.1" + magic-string "^0.25.7" + resolve "^1.17.0" + +"@rollup/plugin-node-resolve@^11.0.0": + version "11.2.1" + resolved "https://registry.yarnpkg.com/@rollup/plugin-node-resolve/-/plugin-node-resolve-11.2.1.tgz#82aa59397a29cd4e13248b106e6a4a1880362a60" + integrity sha512-yc2n43jcqVyGE2sqV5/YCmocy9ArjVAP/BeXyTtADTBBX6V0e5UMqwO8CdQ0kzjb6zu5P1qMzsScCMRvE9OlVg== + dependencies: + "@rollup/pluginutils" "^3.1.0" + "@types/resolve" "1.17.1" + builtin-modules "^3.1.0" + deepmerge "^4.2.2" + is-module "^1.0.0" + resolve "^1.19.0" + +"@rollup/plugin-replace@^2.4.1": + version "2.4.2" + resolved "https://registry.yarnpkg.com/@rollup/plugin-replace/-/plugin-replace-2.4.2.tgz#a2d539314fbc77c244858faa523012825068510a" + integrity sha512-IGcu+cydlUMZ5En85jxHH4qj2hta/11BHq95iHEyb2sbgiN0eCdzvUcHw5gt9pBL5lTi4JDYJ1acCoMGpTvEZg== + dependencies: + "@rollup/pluginutils" "^3.1.0" + magic-string "^0.25.7" + +"@rollup/pluginutils@4": + version "4.1.1" + resolved "https://registry.yarnpkg.com/@rollup/pluginutils/-/pluginutils-4.1.1.tgz#1d4da86dd4eded15656a57d933fda2b9a08d47ec" + integrity sha512-clDjivHqWGXi7u+0d2r2sBi4Ie6VLEAzWMIkvJLnDmxoOhBYOTfzGbOQBA32THHm11/LiJbd01tJUpJsbshSWQ== + dependencies: + estree-walker "^2.0.1" + picomatch "^2.2.2" + +"@rollup/pluginutils@^3.1.0": + version "3.1.0" + resolved "https://registry.yarnpkg.com/@rollup/pluginutils/-/pluginutils-3.1.0.tgz#706b4524ee6dc8b103b3c995533e5ad680c02b9b" + integrity sha512-GksZ6pr6TpIjHm8h9lSQ8pi8BE9VeubNT0OMJ3B5uZJ8pz73NPiqOtCog/x2/QzM1ENChPKxMDhiQuRHsqc+lg== + dependencies: + "@types/estree" "0.0.39" + estree-walker "^1.0.1" + picomatch "^2.2.2" + +"@types/estree@*": + version "0.0.50" + resolved "https://registry.yarnpkg.com/@types/estree/-/estree-0.0.50.tgz#1e0caa9364d3fccd2931c3ed96fdbeaa5d4cca83" + integrity sha512-C6N5s2ZFtuZRj54k2/zyRhNDjJwwcViAM3Nbm8zjBpbqAdZ00mr0CFxvSKeO8Y/e03WVFLpQMdHYVfUd6SB+Hw== + +"@types/estree@0.0.39": + version 
"0.0.39" + resolved "https://registry.yarnpkg.com/@types/estree/-/estree-0.0.39.tgz#e177e699ee1b8c22d23174caaa7422644389509f" + integrity sha512-EYNwp3bU+98cpU4lAWYYL7Zz+2gryWH1qbdDTidVd6hkiR6weksdbMadyXKXNPEkQFhXM+hVO9ZygomHXp+AIw== + +"@types/node@*": + version "16.11.12" + resolved "https://registry.yarnpkg.com/@types/node/-/node-16.11.12.tgz#ac7fb693ac587ee182c3780c26eb65546a1a3c10" + integrity sha512-+2Iggwg7PxoO5Kyhvsq9VarmPbIelXP070HMImEpbtGCoyWNINQj4wzjbQCXzdHTRXnqufutJb5KAURZANNBAw== + +"@types/resolve@1.17.1": + version "1.17.1" + resolved "https://registry.yarnpkg.com/@types/resolve/-/resolve-1.17.1.tgz#3afd6ad8967c77e4376c598a82ddd58f46ec45d6" + integrity sha512-yy7HuzQhj0dhGpD8RLXSZWEkLsV9ibvxvi6EiJ3bkqLAO1RGo0WbkWQiwpRlSFymTJRz0d3k5LM3kkx8ArDbLw== + dependencies: + "@types/node" "*" + +"@urql/core@^2.3.4": + version "2.3.5" + resolved "https://registry.yarnpkg.com/@urql/core/-/core-2.3.5.tgz#eb1cbbfe23236615ecb8e65850bb772d4f61b6b5" + integrity sha512-kM/um4OjXmuN6NUS/FSm7dESEKWT7By1kCRCmjvU4+4uEoF1cd4TzIhQ7J1I3zbDAFhZzmThq9X0AHpbHAn3bA== + dependencies: + "@graphql-typed-document-node/core" "^3.1.0" + wonka "^4.0.14" + +"@urql/svelte@^1.3.0": + version "1.3.2" + resolved "https://registry.yarnpkg.com/@urql/svelte/-/svelte-1.3.2.tgz#7fc16253a36669dddec39755fc9c31077a9c279a" + integrity sha512-L/fSKb+jTrxfeKbnA4+7T69sL0XlzMv4d9i0j9J+fCkBCpUOGgPsYzsyBttbVbjrlaw61Wrc6J2NKuokrd570w== + dependencies: + "@urql/core" "^2.3.4" + wonka "^4.0.14" + +ansi-styles@^3.2.1: + version "3.2.1" + resolved "https://registry.yarnpkg.com/ansi-styles/-/ansi-styles-3.2.1.tgz#41fbb20243e50b12be0f04b8dedbf07520ce841d" + integrity sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA== + dependencies: + color-convert "^1.9.0" + +balanced-match@^1.0.0: + version "1.0.2" + resolved "https://registry.yarnpkg.com/balanced-match/-/balanced-match-1.0.2.tgz#e83e3a7e3f300b34cb9d87f615fa0cbf357690ee" + integrity 
sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw== + +brace-expansion@^1.1.7: + version "1.1.11" + resolved "https://registry.yarnpkg.com/brace-expansion/-/brace-expansion-1.1.11.tgz#3c7fcbf529d87226f3d2f52b966ff5271eb441dd" + integrity sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA== + dependencies: + balanced-match "^1.0.0" + concat-map "0.0.1" + +buffer-from@^1.0.0: + version "1.1.2" + resolved "https://registry.yarnpkg.com/buffer-from/-/buffer-from-1.1.2.tgz#2b146a6fd72e80b4f55d255f35ed59a3a9a41bd5" + integrity sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ== + +builtin-modules@^3.1.0: + version "3.2.0" + resolved "https://registry.yarnpkg.com/builtin-modules/-/builtin-modules-3.2.0.tgz#45d5db99e7ee5e6bc4f362e008bf917ab5049887" + integrity sha512-lGzLKcioL90C7wMczpkY0n/oART3MbBa8R9OFGE1rJxoVI86u4WAGfEk8Wjv10eKSyTHVGkSo3bvBylCEtk7LA== + +chalk@^2.0.0: + version "2.4.2" + resolved "https://registry.yarnpkg.com/chalk/-/chalk-2.4.2.tgz#cd42541677a54333cf541a49108c1432b44c9424" + integrity sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ== + dependencies: + ansi-styles "^3.2.1" + escape-string-regexp "^1.0.5" + supports-color "^5.3.0" + +color-convert@^1.9.0: + version "1.9.3" + resolved "https://registry.yarnpkg.com/color-convert/-/color-convert-1.9.3.tgz#bb71850690e1f136567de629d2d5471deda4c1e8" + integrity sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg== + dependencies: + color-name "1.1.3" + +color-name@1.1.3: + version "1.1.3" + resolved "https://registry.yarnpkg.com/color-name/-/color-name-1.1.3.tgz#a7d0558bd89c42f795dd42328f740831ca53bc25" + integrity sha1-p9BVi9icQveV3UIyj3QIMcpTvCU= + +commander@^2.20.0: + version "2.20.3" + resolved 
"https://registry.yarnpkg.com/commander/-/commander-2.20.3.tgz#fd485e84c03eb4881c20722ba48035e8531aeb33"
+  integrity sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==
+
+commondir@^1.0.1:
+  version "1.0.1"
+  resolved "https://registry.yarnpkg.com/commondir/-/commondir-1.0.1.tgz#ddd800da0c66127393cca5950ea968a3aaf1253b"
+  integrity sha1-3dgA2gxmEnOTzKWVDqloo6rxJTs=
+
+concat-map@0.0.1:
+  version "0.0.1"
+  resolved "https://registry.yarnpkg.com/concat-map/-/concat-map-0.0.1.tgz#d8a96bd77fd68df7793a73036a3ba0d5405d477b"
+  integrity sha1-2Klr13/Wjfd5OnMDajug1UBdR3s=
+
+deepmerge@^4.2.2:
+  version "4.2.2"
+  resolved "https://registry.yarnpkg.com/deepmerge/-/deepmerge-4.2.2.tgz#44d2ea3679b8f4d4ffba33f03d865fc1e7bf4955"
+  integrity sha512-FJ3UgI4gIl+PHZm53knsuSFpE+nESMr7M4v9QcgB7S63Kj/6WqMiFQJpBBYz1Pt+66bZpP3Q7Lye0Oo9MPKEdg==
+
+escape-string-regexp@^1.0.5:
+  version "1.0.5"
+  resolved "https://registry.yarnpkg.com/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz#1b61c0562190a8dff6ae3bb2cf0200ca130b86d4"
+  integrity sha1-G2HAViGQqN/2rjuyzwIAyhMLhtQ=
+
+estree-walker@^0.6.1:
+  version "0.6.1"
+  resolved "https://registry.yarnpkg.com/estree-walker/-/estree-walker-0.6.1.tgz#53049143f40c6eb918b23671d1fe3219f3a1b362"
+  integrity sha512-SqmZANLWS0mnatqbSfRP5g8OXZC12Fgg1IwNtLsyHDzJizORW4khDfjPqJZsemPWBB2uqykUah5YpQ6epsqC/w==
+
+estree-walker@^1.0.1:
+  version "1.0.1"
+  resolved "https://registry.yarnpkg.com/estree-walker/-/estree-walker-1.0.1.tgz#31bc5d612c96b704106b477e6dd5d8aa138cb700"
+  integrity sha512-1fMXF3YP4pZZVozF8j/ZLfvnR8NSIljt56UhbZ5PeeDmmGHpgpdwQt7ITlGvYaQukCvuBRMLEiKiYC+oeIg4cg==
+
+estree-walker@^2.0.1:
+  version "2.0.2"
+  resolved "https://registry.yarnpkg.com/estree-walker/-/estree-walker-2.0.2.tgz#52f010178c2a4c117a7757cfe942adb7d2da4cac"
+  integrity sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w==
+
+fs.realpath@^1.0.0:
+  version "1.0.0"
+  resolved "https://registry.yarnpkg.com/fs.realpath/-/fs.realpath-1.0.0.tgz#1504ad2523158caa40db4a2787cb01411994ea4f"
+  integrity sha1-FQStJSMVjKpA20onh8sBQRmU6k8=
+
+fsevents@~2.3.2:
+  version "2.3.2"
+  resolved "https://registry.yarnpkg.com/fsevents/-/fsevents-2.3.2.tgz#8a526f78b8fdf4623b709e0b975c52c24c02fd1a"
+  integrity sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==
+
+function-bind@^1.1.1:
+  version "1.1.1"
+  resolved "https://registry.yarnpkg.com/function-bind/-/function-bind-1.1.1.tgz#a56899d3ea3c9bab874bb9773b7c5ede92f4895d"
+  integrity sha512-yIovAzMX49sF8Yl58fSCWJ5svSLuaibPxXQJFLmBObTuCr0Mf1KiPopGM9NiFjiYBCbfaa2Fh6breQ6ANVTI0A==
+
+glob@^7.1.6:
+  version "7.2.0"
+  resolved "https://registry.yarnpkg.com/glob/-/glob-7.2.0.tgz#d15535af7732e02e948f4c41628bd910293f6023"
+  integrity sha512-lmLf6gtyrPq8tTjSmrO94wBeQbFR3HbLHbuyD69wuyQkImp2hWqMGB47OX65FBkPffO641IP9jWa1z4ivqG26Q==
+  dependencies:
+    fs.realpath "^1.0.0"
+    inflight "^1.0.4"
+    inherits "2"
+    minimatch "^3.0.4"
+    once "^1.3.0"
+    path-is-absolute "^1.0.0"
+
+graphql@^15.6.0:
+  version "15.8.0"
+  resolved "https://registry.yarnpkg.com/graphql/-/graphql-15.8.0.tgz#33410e96b012fa3bdb1091cc99a94769db212b38"
+  integrity sha512-5gghUc24tP9HRznNpV2+FIoq3xKkj5dTQqf4v0CpdPbFVwFkWoxOM+o+2OC9ZSvjEMTjfmG9QT+gcvggTwW1zw==
+
+has-flag@^3.0.0:
+  version "3.0.0"
+  resolved "https://registry.yarnpkg.com/has-flag/-/has-flag-3.0.0.tgz#b5d454dc2199ae225699f3467e5a07f3b955bafd"
+  integrity sha1-tdRU3CGZriJWmfNGfloH87lVuv0=
+
+has-flag@^4.0.0:
+  version "4.0.0"
+  resolved "https://registry.yarnpkg.com/has-flag/-/has-flag-4.0.0.tgz#944771fd9c81c81265c4d6941860da06bb59479b"
+  integrity sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==
+
+has@^1.0.3:
+  version "1.0.3"
+  resolved "https://registry.yarnpkg.com/has/-/has-1.0.3.tgz#722d7cbfc1f6aa8241f16dd814e011e1f41e8796"
+  integrity sha512-f2dvO0VU6Oej7RkWJGrehjbzMAjFp5/VKPp5tTpWIV4JHHZK1/BxbFRtf/siA2SWTe09caDmVtYYzWEIbBS4zw==
+  dependencies:
+    function-bind "^1.1.1"
+
+inflight@^1.0.4:
+  version "1.0.6"
+  resolved "https://registry.yarnpkg.com/inflight/-/inflight-1.0.6.tgz#49bd6331d7d02d0c09bc910a1075ba8165b56df9"
+  integrity sha1-Sb1jMdfQLQwJvJEKEHW6gWW1bfk=
+  dependencies:
+    once "^1.3.0"
+    wrappy "1"
+
+inherits@2:
+  version "2.0.4"
+  resolved "https://registry.yarnpkg.com/inherits/-/inherits-2.0.4.tgz#0fa2c64f932917c3433a0ded55363aae37416b7c"
+  integrity sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==
+
+is-core-module@^2.2.0:
+  version "2.8.0"
+  resolved "https://registry.yarnpkg.com/is-core-module/-/is-core-module-2.8.0.tgz#0321336c3d0925e497fd97f5d95cb114a5ccd548"
+  integrity sha512-vd15qHsaqrRL7dtH6QNuy0ndJmRDrS9HAM1CAiSifNUFv4x1a0CCVsj18hJ1mShxIG6T2i1sO78MkP56r0nYRw==
+  dependencies:
+    has "^1.0.3"
+
+is-module@^1.0.0:
+  version "1.0.0"
+  resolved "https://registry.yarnpkg.com/is-module/-/is-module-1.0.0.tgz#3258fb69f78c14d5b815d664336b4cffb6441591"
+  integrity sha1-Mlj7afeMFNW4FdZkM2tM/7ZEFZE=
+
+is-reference@^1.2.1:
+  version "1.2.1"
+  resolved "https://registry.yarnpkg.com/is-reference/-/is-reference-1.2.1.tgz#8b2dac0b371f4bc994fdeaba9eb542d03002d0b7"
+  integrity sha512-U82MsXXiFIrjCK4otLT+o2NA2Cd2g5MLoOVXUZjIOhLurrRxpEXzI8O0KZHr3IjLvlAH1kTPYSuqer5T9ZVBKQ==
+  dependencies:
+    "@types/estree" "*"
+
+jest-worker@^26.2.1:
+  version "26.6.2"
+  resolved "https://registry.yarnpkg.com/jest-worker/-/jest-worker-26.6.2.tgz#7f72cbc4d643c365e27b9fd775f9d0eaa9c7a8ed"
+  integrity sha512-KWYVV1c4i+jbMpaBC+U++4Va0cp8OisU185o73T1vo99hqi7w8tSJfUXYswwqqrjzwxa6KpRK54WhPvwf5w6PQ==
+  dependencies:
+    "@types/node" "*"
+    merge-stream "^2.0.0"
+    supports-color "^7.0.0"
+
+js-tokens@^4.0.0:
+  version "4.0.0"
+  resolved "https://registry.yarnpkg.com/js-tokens/-/js-tokens-4.0.0.tgz#19203fb59991df98e3a287050d4647cdeaf32499"
+  integrity sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==
+
+magic-string@^0.25.7:
+  version "0.25.7"
+  resolved "https://registry.yarnpkg.com/magic-string/-/magic-string-0.25.7.tgz#3f497d6fd34c669c6798dcb821f2ef31f5445051"
+  integrity sha512-4CrMT5DOHTDk4HYDlzmwu4FVCcIYI8gauveasrdCu2IKIFOJ3f0v/8MDGJCDL9oD2ppz/Av1b0Nj345H9M+XIA==
+  dependencies:
+    sourcemap-codec "^1.4.4"
+
+merge-stream@^2.0.0:
+  version "2.0.0"
+  resolved "https://registry.yarnpkg.com/merge-stream/-/merge-stream-2.0.0.tgz#52823629a14dd00c9770fb6ad47dc6310f2c1f60"
+  integrity sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==
+
+minimatch@^3.0.4:
+  version "3.0.4"
+  resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-3.0.4.tgz#5166e286457f03306064be5497e8dbb0c3d32083"
+  integrity sha512-yJHVQEhyqPLUTgt9B83PXu6W3rx4MvvHvSUvToogpwoGDOUQ+yDrR0HRot+yOCdCO7u4hX3pWft6kWBBcqh0UA==
+  dependencies:
+    brace-expansion "^1.1.7"
+
+once@^1.3.0:
+  version "1.4.0"
+  resolved "https://registry.yarnpkg.com/once/-/once-1.4.0.tgz#583b1aa775961d4b113ac17d9c50baef9dd76bd1"
+  integrity sha1-WDsap3WWHUsROsF9nFC6753Xa9E=
+  dependencies:
+    wrappy "1"
+
+path-is-absolute@^1.0.0:
+  version "1.0.1"
+  resolved "https://registry.yarnpkg.com/path-is-absolute/-/path-is-absolute-1.0.1.tgz#174b9268735534ffbc7ace6bf53a5a9e1b5c5f5f"
+  integrity sha1-F0uSaHNVNP+8es5r9TpanhtcX18=
+
+path-parse@^1.0.6:
+  version "1.0.7"
+  resolved "https://registry.yarnpkg.com/path-parse/-/path-parse-1.0.7.tgz#fbc114b60ca42b30d9daf5858e4bd68bbedb6735"
+  integrity sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==
+
+picomatch@^2.2.2:
+  version "2.3.0"
+  resolved "https://registry.yarnpkg.com/picomatch/-/picomatch-2.3.0.tgz#f1f061de8f6a4bf022892e2d128234fb98302972"
+  integrity sha512-lY1Q/PiJGC2zOv/z391WOTD+Z02bCgsFfvxoXXf6h7kv9o+WmsmzYqrAwY63sNgOxE4xEdq0WyUnXfKeBrSvYw==
+
+randombytes@^2.1.0:
+  version "2.1.0"
+  resolved "https://registry.yarnpkg.com/randombytes/-/randombytes-2.1.0.tgz#df6f84372f0270dc65cdf6291349ab7a473d4f2a"
+  integrity sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ==
+  dependencies:
+    safe-buffer "^5.1.0"
+
+require-relative@^0.8.7:
+  version "0.8.7"
+  resolved "https://registry.yarnpkg.com/require-relative/-/require-relative-0.8.7.tgz#7999539fc9e047a37928fa196f8e1563dabd36de"
+  integrity sha1-eZlTn8ngR6N5KPoZb44VY9q9Nt4=
+
+resolve@^1.17.0, resolve@^1.19.0:
+  version "1.20.0"
+  resolved "https://registry.yarnpkg.com/resolve/-/resolve-1.20.0.tgz#629a013fb3f70755d6f0b7935cc1c2c5378b1975"
+  integrity sha512-wENBPt4ySzg4ybFQW2TT1zMQucPK95HSh/nq2CFTZVOGut2+pQvSsgtda4d26YrYcr067wjbmzOG8byDPBX63A==
+  dependencies:
+    is-core-module "^2.2.0"
+    path-parse "^1.0.6"
+
+rollup-plugin-css-only@^3.1.0:
+  version "3.1.0"
+  resolved "https://registry.yarnpkg.com/rollup-plugin-css-only/-/rollup-plugin-css-only-3.1.0.tgz#6a701cc5b051c6b3f0961e69b108a9a118e1b1df"
+  integrity sha512-TYMOE5uoD76vpj+RTkQLzC9cQtbnJNktHPB507FzRWBVaofg7KhIqq1kGbcVOadARSozWF883Ho9KpSPKH8gqA==
+  dependencies:
+    "@rollup/pluginutils" "4"
+
+rollup-plugin-svelte@^7.0.0:
+  version "7.1.0"
+  resolved "https://registry.yarnpkg.com/rollup-plugin-svelte/-/rollup-plugin-svelte-7.1.0.tgz#d45f2b92b1014be4eb46b55aa033fb9a9c65f04d"
+  integrity sha512-vopCUq3G+25sKjwF5VilIbiY6KCuMNHP1PFvx2Vr3REBNMDllKHFZN2B9jwwC+MqNc3UPKkjXnceLPEjTjXGXg==
+  dependencies:
+    require-relative "^0.8.7"
+    rollup-pluginutils "^2.8.2"
+
+rollup-plugin-terser@^7.0.0:
+  version "7.0.2"
+  resolved "https://registry.yarnpkg.com/rollup-plugin-terser/-/rollup-plugin-terser-7.0.2.tgz#e8fbba4869981b2dc35ae7e8a502d5c6c04d324d"
+  integrity sha512-w3iIaU4OxcF52UUXiZNsNeuXIMDvFrr+ZXK6bFZ0Q60qyVfq4uLptoS4bbq3paG3x216eQllFZX7zt6TIImguQ==
+  dependencies:
+    "@babel/code-frame" "^7.10.4"
+    jest-worker "^26.2.1"
+    serialize-javascript "^4.0.0"
+    terser "^5.0.0"
+
+rollup-pluginutils@^2.8.2:
+  version "2.8.2"
+  resolved "https://registry.yarnpkg.com/rollup-pluginutils/-/rollup-pluginutils-2.8.2.tgz#72f2af0748b592364dbd3389e600e5a9444a351e"
+  integrity sha512-EEp9NhnUkwY8aif6bxgovPHMoMoNr2FulJziTndpt5H9RdwC47GSGuII9XxpSdzVGM0GWrNPHV6ie1LTNJPaLQ==
+  dependencies:
+    estree-walker "^0.6.1"
+
+rollup@^2.3.4:
+  version "2.61.0"
+  resolved "https://registry.yarnpkg.com/rollup/-/rollup-2.61.0.tgz#ccd927bcd6cc0c78a4689c918627a717977208f4"
+  integrity sha512-teQ+T1mUYbyvGyUavCodiyA9hD4DxwYZJwr/qehZGhs1Z49vsmzelMVYMxGU4ZhGRKxYPupHuz5yzm/wj7VpWA==
+  optionalDependencies:
+    fsevents "~2.3.2"
+
+safe-buffer@^5.1.0:
+  version "5.2.1"
+  resolved "https://registry.yarnpkg.com/safe-buffer/-/safe-buffer-5.2.1.tgz#1eaf9fa9bdb1fdd4ec75f58f9cdb4e6b7827eec6"
+  integrity sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==
+
+serialize-javascript@^4.0.0:
+  version "4.0.0"
+  resolved "https://registry.yarnpkg.com/serialize-javascript/-/serialize-javascript-4.0.0.tgz#b525e1238489a5ecfc42afacc3fe99e666f4b1aa"
+  integrity sha512-GaNA54380uFefWghODBWEGisLZFj00nS5ACs6yHa9nLqlLpVLO8ChDGeKRjZnV4Nh4n0Qi7nhYZD/9fCPzEqkw==
+  dependencies:
+    randombytes "^2.1.0"
+
+source-map-support@~0.5.20:
+  version "0.5.21"
+  resolved "https://registry.yarnpkg.com/source-map-support/-/source-map-support-0.5.21.tgz#04fe7c7f9e1ed2d662233c28cb2b35b9f63f6e4f"
+  integrity sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w==
+  dependencies:
+    buffer-from "^1.0.0"
+    source-map "^0.6.0"
+
+source-map@^0.6.0:
+  version "0.6.1"
+  resolved "https://registry.yarnpkg.com/source-map/-/source-map-0.6.1.tgz#74722af32e9614e9c287a8d0bbde48b5e2f1a263"
+  integrity sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==
+
+source-map@~0.7.2:
+  version "0.7.3"
+  resolved "https://registry.yarnpkg.com/source-map/-/source-map-0.7.3.tgz#5302f8169031735226544092e64981f751750383"
+  integrity sha512-CkCj6giN3S+n9qrYiBTX5gystlENnRW5jZeNLHpe6aue+SrHcG5VYwujhW9s4dY31mEGsxBDrHR6oI69fTXsaQ==
+
+sourcemap-codec@^1.4.4:
+  version "1.4.8"
+  resolved "https://registry.yarnpkg.com/sourcemap-codec/-/sourcemap-codec-1.4.8.tgz#ea804bd94857402e6992d05a38ef1ae35a9ab4c4"
+  integrity sha512-9NykojV5Uih4lgo5So5dtw+f0JgJX30KCNI8gwhz2J9A15wD0Ml6tjHKwf6fTSa6fAdVBdZeNOs9eJ71qCk8vA==
+
+supports-color@^5.3.0:
+  version "5.5.0"
+  resolved "https://registry.yarnpkg.com/supports-color/-/supports-color-5.5.0.tgz#e2e69a44ac8772f78a1ec0b35b689df6530efc8f"
+  integrity sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==
+  dependencies:
+    has-flag "^3.0.0"
+
+supports-color@^7.0.0:
+  version "7.2.0"
+  resolved "https://registry.yarnpkg.com/supports-color/-/supports-color-7.2.0.tgz#1b7dcdcb32b8138801b3e478ba6a51caa89648da"
+  integrity sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==
+  dependencies:
+    has-flag "^4.0.0"
+
+svelte@^3.42.6:
+  version "3.44.2"
+  resolved "https://registry.yarnpkg.com/svelte/-/svelte-3.44.2.tgz#3e69be2598308dfc8354ba584cec54e648a50f7f"
+  integrity sha512-jrZhZtmH3ZMweXg1Q15onb8QlWD+a5T5Oca4C1jYvSURp2oD35h4A5TV6t6MEa93K4LlX6BkafZPdQoFjw/ylA==
+
+sveltestrap@^5.6.1:
+  version "5.6.3"
+  resolved "https://registry.yarnpkg.com/sveltestrap/-/sveltestrap-5.6.3.tgz#afb81b00d0b378719988e5339f92254dce41194f"
+  integrity sha512-/geTKJbPmJGzwHFKYC3NkUNDk/GKxrppgdSxcg58w/qcxs0S6RiN4PaQ1tgBKsdSrZDfbHfkFF+dybHAyUlV0A==
+  dependencies:
+    "@popperjs/core" "^2.9.2"
+
+terser@^5.0.0:
+  version "5.10.0"
+  resolved "https://registry.yarnpkg.com/terser/-/terser-5.10.0.tgz#b86390809c0389105eb0a0b62397563096ddafcc"
+  integrity sha512-AMmF99DMfEDiRJfxfY5jj5wNH/bYO09cniSqhfoyxc8sFoYIgkJy86G04UoZU5VjlpnplVu0K6Tx6E9b5+DlHA==
+  dependencies:
+    commander "^2.20.0"
+    source-map "~0.7.2"
+    source-map-support "~0.5.20"
+
+uplot@^1.6.7:
+  version "1.6.17"
+  resolved "https://registry.yarnpkg.com/uplot/-/uplot-1.6.17.tgz#1f8fc07a0e48008798beca463523621ad66dcc46"
+  integrity sha512-WHNHvDCXURn+Qwb3QUUzP6rOxx+3kUZUspREyhkqmXCxFIND99l5z9intTh+uPEt+/EEu7lCaMjSd1uTfuTXfg==
+
+wonka@^4.0.14, wonka@^4.0.15:
+  version "4.0.15"
+  resolved "https://registry.yarnpkg.com/wonka/-/wonka-4.0.15.tgz#9aa42046efa424565ab8f8f451fcca955bf80b89"
+  integrity sha512-U0IUQHKXXn6PFo9nqsHphVCE5m3IntqZNB9Jjn7EB1lrR7YTDY3YWgFvEvwniTzXSvOH/XMzAZaIfJF/LvHYXg==
+
+wrappy@1:
+  version "1.0.2"
+  resolved "https://registry.yarnpkg.com/wrappy/-/wrappy-1.0.2.tgz#b5243d8f3ec1aa35f1364605bc0d1036e30ab69f"
+  integrity sha1-tSQ9jz7BqjXxNkYFvA0QNuMKtp8=
diff --git a/templates/404.tmpl b/web/templates/404.tmpl
similarity index 100%
rename from templates/404.tmpl
rename to web/templates/404.tmpl
diff --git a/templates/base.tmpl b/web/templates/base.tmpl
similarity index 100%
rename from templates/base.tmpl
rename to web/templates/base.tmpl
diff --git a/templates/config.tmpl b/web/templates/config.tmpl
similarity index 100%
rename from templates/config.tmpl
rename to web/templates/config.tmpl
diff --git a/templates/home.tmpl b/web/templates/home.tmpl
similarity index 100%
rename from templates/home.tmpl
rename to web/templates/home.tmpl
diff --git a/templates/imprint.tmpl b/web/templates/imprint.tmpl
similarity index 100%
rename from templates/imprint.tmpl
rename to web/templates/imprint.tmpl
diff --git a/templates/login.tmpl b/web/templates/login.tmpl
similarity index 100%
rename from templates/login.tmpl
rename to web/templates/login.tmpl
diff --git a/templates/monitoring/analysis.tmpl b/web/templates/monitoring/analysis.tmpl
similarity index 100%
rename from templates/monitoring/analysis.tmpl
rename to web/templates/monitoring/analysis.tmpl
diff --git a/templates/monitoring/job.tmpl b/web/templates/monitoring/job.tmpl
similarity index 100%
rename from templates/monitoring/job.tmpl
rename to web/templates/monitoring/job.tmpl
diff --git a/templates/monitoring/jobs.tmpl b/web/templates/monitoring/jobs.tmpl
similarity index 100%
rename from templates/monitoring/jobs.tmpl
rename to web/templates/monitoring/jobs.tmpl
diff --git a/templates/monitoring/list.tmpl b/web/templates/monitoring/list.tmpl
similarity index 100%
rename from templates/monitoring/list.tmpl
rename to web/templates/monitoring/list.tmpl
diff --git a/templates/monitoring/node.tmpl b/web/templates/monitoring/node.tmpl
similarity index 100%
rename from templates/monitoring/node.tmpl
rename to web/templates/monitoring/node.tmpl
diff --git a/templates/monitoring/status.tmpl b/web/templates/monitoring/status.tmpl
similarity index 100%
rename from templates/monitoring/status.tmpl
rename to web/templates/monitoring/status.tmpl
diff --git a/templates/monitoring/systems.tmpl b/web/templates/monitoring/systems.tmpl
similarity index 100%
rename from templates/monitoring/systems.tmpl
rename to web/templates/monitoring/systems.tmpl
diff --git a/templates/monitoring/taglist.tmpl b/web/templates/monitoring/taglist.tmpl
similarity index 100%
rename from templates/monitoring/taglist.tmpl
rename to web/templates/monitoring/taglist.tmpl
diff --git a/templates/monitoring/user.tmpl b/web/templates/monitoring/user.tmpl
similarity index 100%
rename from templates/monitoring/user.tmpl
rename to web/templates/monitoring/user.tmpl
diff --git a/templates/privacy.tmpl b/web/templates/privacy.tmpl
similarity index 100%
rename from templates/privacy.tmpl
rename to web/templates/privacy.tmpl