Update doc files

This commit is contained in:
Jan Eitzinger 2023-06-26 12:39:08 +02:00
parent fe78c8b851
commit 463b60acb6
5 changed files with 208 additions and 174 deletions

README.md

@ -2,25 +2,32 @@
[![Build](https://github.com/ClusterCockpit/cc-backend/actions/workflows/test.yml/badge.svg)](https://github.com/ClusterCockpit/cc-backend/actions/workflows/test.yml)

This is a Golang backend implementation for a REST and GraphQL API according to
the [ClusterCockpit specifications](https://github.com/ClusterCockpit/cc-specifications). It also
includes a web interface for ClusterCockpit. This implementation replaces the
previous PHP Symfony based ClusterCockpit web interface. The reasons for
switching from PHP Symfony to a Golang based solution are explained
[here](https://github.com/ClusterCockpit/ClusterCockpit/wiki/Why-we-switched-from-PHP-Symfony-to-a-Golang-based-solution).

## Overview

This is a Golang web backend for the ClusterCockpit job-specific performance monitoring framework.
It provides a REST API for integrating ClusterCockpit with an HPC cluster batch system and external analysis scripts.
Data exchange between the web front-end and the back-end is based on a GraphQL API.
The web frontend is also served by the backend using [Svelte](https://svelte.dev/) components.
Layout and styling are based on [Bootstrap 5](https://getbootstrap.com/) using [Bootstrap Icons](https://icons.getbootstrap.com/).

The backend uses [SQLite 3](https://sqlite.org/) as a relational SQL database by default.
Optionally it can use a MySQL/MariaDB database server.
While there are metric data backends for the InfluxDB and Prometheus time series databases, the only tested and supported setup is to use cc-metric-store as the metric data backend.
Documentation on how to integrate ClusterCockpit with other time series databases will be added in the future.
Completed batch jobs are stored in a file-based job archive according to
[this specification](https://github.com/ClusterCockpit/cc-specifications/tree/master/job-archive).

The backend supports authentication via local accounts, an external LDAP
directory, and JWT tokens. Authorization for APIs is implemented with
[JWT](https://jwt.io/) tokens created with public/private key encryption.
You can find more detailed information here:

* `./configs/README.md`: Infos about configuration and setup of cc-backend.
@ -28,22 +35,22 @@ You find more detailed information here:
* `./tools/README.md`: Infos on the JWT authorization token workflows in ClusterCockpit.
* `./docs`: You can find further documentation here. There is also a hands-on tutorial that is recommended to get familiar with the ClusterCockpit setup.
**NOTE**

ClusterCockpit requires a current version of the golang toolchain and node.js.
You can check `go.mod` to see what is the current minimal golang version needed.
Homebrew and Archlinux usually have current golang versions. For other Linux
distros this often means that you have to install the golang compiler yourself.
Fortunately, this is easy with golang. Since much of the functionality is based
on the Go standard library, it is crucial for security and performance to use a
current version of golang. In addition, an old golang toolchain may limit the
supported versions of third-party packages.
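The minimal Go version can be read directly from `go.mod`. A self-contained sketch (the stand-in `go.mod` file and the `1.20` value are illustrative assumptions; in a real checkout, run the `grep` against the repository's own `go.mod`):

```shell
# Create a stand-in go.mod so this snippet runs on its own;
# in a real checkout this file already exists in the repository root.
cat > /tmp/go.mod <<'EOF'
module github.com/ClusterCockpit/cc-backend

go 1.20
EOF

# The line starting with "go " states the minimal toolchain version.
grep '^go ' /tmp/go.mod
```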

## How to try ClusterCockpit with a demo setup

We provide a shell script that downloads demo data and automatically starts the
cc-backend. You will need `wget`, `go`, `node`, `npm` in your path to
start the demo. The demo downloads 32MB of data (223MB on disk).
```sh
git clone https://github.com/ClusterCockpit/cc-backend.git
@ -51,17 +58,18 @@ cd ./cc-backend
./startDemo.sh
```

You can access the web interface at http://localhost:8080.
Credentials for login are `demo:demo`.
Please note that some views do not work without a metric backend (e.g., the
Systems and Status views).
## How to build and run

There is a Makefile to automate the build of cc-backend. The Makefile supports the following targets:
* `$ make`: Initialize `var` directory and build svelte frontend and backend binary. Note that there is no proper prerequisite handling. Any change of frontend source files will result in a complete rebuild.
* `$ make clean`: Clean go build cache and remove binary.
* `$ make test`: Run the tests that are also run in the GitHub workflow setup.

A common workflow for setting up cc-backend from scratch is:
```sh
git clone https://github.com/ClusterCockpit/cc-backend.git
@ -72,87 +80,109 @@ make
# EDIT THE .env FILE BEFORE YOU DEPLOY (Change the secrets)!
# If authentication is disabled, it can be empty.
cp configs/env-template.txt .env
vim .env

cp configs/config.json .
vim config.json

# Optional: Link an existing job archive:
ln -s <your-existing-job-archive> ./var/job-archive

# This will first initialize the job.db database by traversing all
# `meta.json` files in the job-archive and add a new user.
./cc-backend -init-db -add-user <your-username>:admin:<your-password>

# Start an HTTP server (HTTPS can be enabled in the configuration, the default port is 8080).
# The -dev flag enables GraphQL Playground (http://localhost:8080/playground) and Swagger UI (http://localhost:8080/swagger).
./cc-backend -server -dev

# Show other options:
./cc-backend -help
```
### Run as systemd daemon

To run this program as a daemon, cc-backend comes with an [example systemd setup](./init/README.md).
## Configuration and setup

cc-backend can be used as a local web interface for an existing job archive or
as a server for the ClusterCockpit monitoring framework.

Create your job archive according to [this specification](https://github.com/ClusterCockpit/cc-specifications/tree/master/job-archive).
At least one cluster directory with a valid `cluster.json` file is required. If
you configure the job archive from scratch, you must also create the job
archive version file that contains the job archive version as an integer.
You can retrieve the currently supported version by running the following
command:
```
$ ./cc-backend -version
```
It is ok to have no jobs in the job archive.
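A minimal sketch of bootstrapping an empty job archive from scratch. The file name `version.txt` and the value `1` are assumptions here; obtain the real supported version from `./cc-backend -version` and check the job archive specification for the authoritative layout.

```shell
# Create the job archive directory and a version file holding the
# archive version as a plain integer (value 1 is a placeholder).
mkdir -p ./var/job-archive
echo 1 > ./var/job-archive/version.txt
```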
### Configuration

A configuration file in JSON format must be specified with `-config` to override the default settings.
By default, a `config.json` file located in the current directory of the `cc-backend` process will be loaded even without the `-config` flag.
Documentation of all supported configuration and command line options can be found [here](./configs/README.md).
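As a sketch, a minimal override file could be written like this (the option names `addr`, `job-archive`, and `session-max-age` are taken from the option list in `./configs/README.md`; the values are examples, and a production setup needs more, in particular cluster definitions):

```shell
# Write a minimal config.json next to the cc-backend binary.
# Option names follow configs/README.md; values here are examples.
cat > config.json <<'EOF'
{
  "addr": "127.0.0.1:8080",
  "job-archive": "./var/job-archive",
  "session-max-age": "168h"
}
EOF
```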
## Database initialization and migration

Each `cc-backend` version supports a specific database version.
At startup, the version of the sqlite database is checked and `cc-backend` terminates if the version does not match.
`cc-backend` supports the migration of the database schema to the required version with the command line option `-migrate-db`.
If the database file does not exist yet, it will be created and initialized with the command line option `-migrate-db`.
If you want to use a newer database version with an older version of cc-backend, you can downgrade a database with the external tool [migrate](https://github.com/golang-migrate/migrate).
In this case, you must specify the path to the migration files in a current source tree: `./internal/repository/migrations/`.
## Development and testing

When making changes to the REST or GraphQL API, the appropriate code generators must be used.
You must always rebuild `cc-backend` after updating the API files.
### Update GraphQL schema

This project uses [gqlgen](https://github.com/99designs/gqlgen) for the GraphQL API.
The schema can be found in `./api/schema.graphqls`.
After changing it, you need to run `go run github.com/99designs/gqlgen`, which will update `./internal/graph/model`.
If new resolvers are needed, they will be added to `./internal/graph/schema.resolvers.go`, where you will then need to implement them.
If you start `cc-backend` with the `-dev` flag, the GraphQL Playground UI is available at http://localhost:8080/playground.
### Update Swagger UI

This project integrates [swagger ui](https://swagger.io/tools/swagger-ui/) to document and test its REST API.
The swagger documentation files can be found in `./api/`.
You can generate the swagger-ui configuration by running `go run github.com/swaggo/swag/cmd/swag init -d ./internal/api,./pkg/schema -g rest.go -o ./api`.
You need to move the created `./api/doc.go` to `./internal/api/doc.go`.
If you start cc-backend with the `-dev` flag, the Swagger interface is available
at http://localhost:8080/swagger/.
You must enter a JWT key for a user with the API role.
**NOTE**

The user who owns the JWT key must not be logged into the same browser (have a
running session), or the Swagger requests will not work. It is recommended to
create a separate user that has only the API role.
## Project file structure
- [`api/`](https://github.com/ClusterCockpit/cc-backend/tree/master/api) contains the API schema files for the REST and GraphQL APIs. The REST API is documented in the OpenAPI 3.0 format in [./api/openapi.yaml](./api/openapi.yaml).
- [`cmd/cc-backend`](https://github.com/ClusterCockpit/cc-backend/tree/master/cmd/cc-backend) contains `main.go` for the main application.
- [`configs/`](https://github.com/ClusterCockpit/cc-backend/tree/master/configs) contains documentation about configuration and command line options and required environment variables. A sample configuration file is provided.
- [`docs/`](https://github.com/ClusterCockpit/cc-backend/tree/master/docs) contains more in-depth documentation.
- [`init/`](https://github.com/ClusterCockpit/cc-backend/tree/master/init) contains an example of setting up systemd for production use.
- [`internal/`](https://github.com/ClusterCockpit/cc-backend/tree/master/internal) contains library source code that is not intended for use by others.
- [`pkg/`](https://github.com/ClusterCockpit/cc-backend/tree/master/pkg) contains Go packages that can be used by other projects.
- [`tools/`](https://github.com/ClusterCockpit/cc-backend/tree/master/tools) Additional command line helper tools.
  - [`archive-manager`](https://github.com/ClusterCockpit/cc-backend/tree/master/tools/archive-manager) Commands for getting infos about an existing job archive.
- [`archive-migration`](https://github.com/ClusterCockpit/cc-backend/tree/master/tools/archive-migration) Tool to migrate from previous to current job archive version.
- [`convert-pem-pubkey`](https://github.com/ClusterCockpit/cc-backend/tree/master/tools/convert-pem-pubkey) Tool to convert external pubkey for use in `cc-backend`.
- [`gen-keypair`](https://github.com/ClusterCockpit/cc-backend/tree/master/tools/gen-keypair) contains a small application to generate a compatible JWT keypair. You find documentation on how to use it [here](https://github.com/ClusterCockpit/cc-backend/blob/master/docs/JWT-Handling.md).
- [`web/`](https://github.com/ClusterCockpit/cc-backend/tree/master/web) Server-side templates and frontend-related files:
- [`frontend`](https://github.com/ClusterCockpit/cc-backend/tree/master/web/frontend) Svelte components and static assets for the frontend UI
- [`templates`](https://github.com/ClusterCockpit/cc-backend/tree/master/web/templates) Server-side Go templates
- [`gqlgen.yml`](https://github.com/ClusterCockpit/cc-backend/blob/master/gqlgen.yml) Configures the behaviour and generation of [gqlgen](https://github.com/99designs/gqlgen).
- [`startDemo.sh`](https://github.com/ClusterCockpit/cc-backend/blob/master/startDemo.sh) is a shell script that sets up demo data, and builds and starts `cc-backend`.

configs/README.md

@ -1,10 +1,10 @@
## Intro

cc-backend requires a configuration file that specifies the cluster systems to be used.
cc-backend tries to load a config.json from the working directory per default.
To override the default, specify the location of a json configuration file with the `-config <file path>` command line option.
All security-related configurations, e.g. keys and passwords, are set using
environment variables.
It is supported to set these by means of a `.env` file in the project root.

## Configuration Options

@ -19,12 +19,12 @@ It is supported to specify these by means of an `.env` file located in the proje
* `job-archive`: Type string. Path to the job-archive. Default: `./var/job-archive`.
* `disable-archive`: Type bool. Keep all metric data in the metric data repositories, do not write to the job-archive. Default `false`.
* `validate`: Type bool. Validate all input json documents against json schema.
* `session-max-age`: Type string. Specifies for how long a session shall be valid as a string parsable by time.ParseDuration(). If 0 or empty, the session/token does not expire! Default `168h`.
* `jwt-max-age`: Type string. Specifies for how long a JWT token shall be valid as a string parsable by time.ParseDuration(). If 0 or empty, the session/token does not expire! Default `0`.
* `https-cert-file` and `https-key-file`: Type string. If both those options are not empty, use HTTPS using those certificates.
* `redirect-http-to`: Type string. If not the empty string and `addr` does not end in ":80", redirect every request incoming at port 80 to that url.
* `machine-state-dir`: Type string. Where to store MachineState files. TODO: Explain in more detail!
* `stop-jobs-exceeding-walltime`: Type int. If not zero, automatically mark jobs as stopped running X seconds longer than their walltime. Only applies if walltime is set for job. Default `0`.
* `short-running-jobs-duration`: Type int. Do not show running jobs shorter than X seconds. Default `300`.
* `ldap`: Type object. For LDAP Authentication and user synchronisation. Default `nil`.
  - `url`: Type string. URL of LDAP directory server.
@ -73,4 +73,4 @@ An example env file is found in this directory. Copy it to `.env` in the project
* `SESSION_KEY`: Some random bytes used as secret for cookie-based sessions.
* `LDAP_ADMIN_PASSWORD`: The LDAP admin user password (optional).
* `CROSS_LOGIN_JWT_HS512_KEY`: Used for token based logins via another authentication service.
* `LOGLEVEL`: Can be `err`, `warn`, `info` or `debug` (optional, `warn` by default). Can be used to reduce logging.
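A quick, hedged way to fill `SESSION_KEY` with random bytes (an assumption: cc-backend accepts any sufficiently random base64 string here; check the env template in this directory for the expected format):

```shell
# Generate 32 random bytes, base64-encoded, and append them to .env.
SESSION_KEY="$(openssl rand -base64 32)"
echo "SESSION_KEY=${SESSION_KEY}" >> .env
```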


@ -1,22 +1,22 @@
# Release versions

Versions are marked according to [semantic versioning](https://semver.org).
Each version embeds the following static assets in the binary:
* Web frontend with JavaScript files and all static assets.
* Golang template files for server-side rendering.
* JSON schema files for validation.
* Database migration files.

The remaining external assets are:
* The SQL database used.
* The job archive.
* The configuration files `config.json` and `.env`.

The external assets are versioned with integer IDs.
This means that each release binary is bound to specific versions of the SQL
database and the job archive.
The configuration file is checked against the current schema at startup.
The `-migrate-db` command line switch can be used to upgrade the SQL database
to migrate from a previous version to the latest one.
We offer a separate tool `archive-migration` to migrate an existing job
archive from the previous to the latest version.
@ -24,14 +24,15 @@ archive from the previous to the latest version.
# Versioning of APIs

cc-backend provides two API backends:
* A REST API for querying jobs.
* A GraphQL API for data exchange between web frontend and cc-backend.

The REST API will also be versioned. We still have to decide whether we will
also support older REST API versions by versioning the endpoint URLs.
The GraphQL API is for internal use and will not be versioned.
# How to build

In general it is recommended to use the provided release binary.
In case you want to build `cc-backend`, please always use the provided
Makefile. This will ensure that the frontend is also built correctly and that
the version is encoded in the binary.


@ -1,9 +1,9 @@
# Hands-on setup ClusterCockpit from scratch (w/o docker)

## Prerequisites

* perl
* go
* npm
* Optional: curl
* Script migrateTimestamp.pl
@ -33,22 +33,17 @@ Start by creating a base folder for all of the following steps.
* Clone Repository
  - `git clone https://github.com/ClusterCockpit/cc-backend.git`
  - `cd cc-backend`
* Build
  - `make`
* Activate & configure environment for cc-backend
  - `cp configs/env-template.txt .env`
  - Optional: Have a look via `vim .env`
  - Copy the `config.json` file included in this tarball into the root directory of cc-backend: `cp ../../config.json ./`
* Back to toplevel `clustercockpit`
  - `cd ..`
* Prepare Datafolder and Database file
  - `mkdir var`
  - `./cc-backend -migrate-db`
### Setup cc-metric-store

* Clone Repository
@ -112,7 +107,7 @@ Done for checkpoints
  - `cp source-data/job-archive-source/woody/cluster.json cc-backend/var/job-archive/woody/`
* Initialize Job-Archive in SQLite3 job.db and add demo user
  - `cd cc-backend`
  - `./cc-backend -init-db -add-user demo:admin:demo`
  - Expected output:
    ```
    <6>[INFO] new user "demo" created (roles: ["admin"], auth-source: 0)
@ -123,7 +118,7 @@ Done for checkpoints
  - `cd ..`

### Startup both Apps
* In cc-backend root: `$ ./cc-backend -server -dev`
  - Starts Clustercockpit at `http://localhost:8080`
  - Log: `<6>[INFO] HTTP server listening at :8080...`
  - Use local internet browser to access interface
@ -161,7 +156,7 @@ Content-Length: 119
``` ```
### Development API web interfaces

The `-dev` flag enables web interfaces to document and test the APIs:
* http://localhost:8080/playground - A GraphQL playground. To use it you must have an authenticated session in the same browser.
* http://localhost:8080/swagger - A Swagger UI. To use it you have to be logged out, so no user session in the same browser. Use the JWT token with role Api generated previously to authenticate via HTTP header.
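As a sketch, passing the token as an HTTP header could look like the following (the `-jwt` flag and the `/api/jobs/` endpoint path are assumptions about the cc-backend CLI and REST API; verify them with `./cc-backend -h` and the Swagger UI):

```sh
# Hypothetical sketch: generate a JWT for a user with the Api role
# (e.g. `./cc-backend -jwt demo`) and paste it below.
JWT="<paste output of ./cc-backend -jwt demo here>"
AUTH_HEADER="Authorization: Bearer $JWT"
# Example request against the REST API (endpoint path is an assumption):
# curl -s -H "$AUTH_HEADER" http://localhost:8080/api/jobs/
printf '%s\n' "$AUTH_HEADER"
```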


# How to run `cc-backend` as a systemd service
The files in this directory assume that you install ClusterCockpit to
`/opt/monitoring/cc-backend`.
Of course you can choose any other location, but make sure you replace all paths
starting with `/opt/monitoring/cc-backend` in the `clustercockpit.service` file!
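If you pick a different prefix, the paths can be rewritten mechanically, for example with `sed`. The sketch below works on a scratch copy with a stand-in `ExecStart` line; the target directory `/srv/clustercockpit` is just an example:

```sh
# Create a scratch file standing in for init/clustercockpit.service and rewrite
# the install prefix in it (example target path).
UNIT=$(mktemp)
printf 'ExecStart=/opt/monitoring/cc-backend/cc-backend -server\n' > "$UNIT"
sed -i 's|/opt/monitoring/cc-backend|/srv/clustercockpit|g' "$UNIT"
cat "$UNIT"   # ExecStart=/srv/clustercockpit/cc-backend -server
```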
The `config.json` may contain the optional fields *user* and *group*. If
specified, the application will call
[setuid](https://man7.org/linux/man-pages/man2/setuid.2.html) and
[setgid](https://man7.org/linux/man-pages/man2/setgid.2.html) after reading the
config file and binding to a TCP port (so it can take a privileged port), but
before it starts accepting any connections. This is good for security, but also
means that the `var/` directory must be readable and writeable by this user.
The `.env` and `config.json` files may contain secrets and should not be
readable by this user. If these files are changed, the server must be restarted.
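A minimal sketch of such a config fragment (the field names *user* and *group* are from this document; the account name and the `addr` value are examples, not required settings):

```sh
# Write an example config fragment to a scratch file; in a real deployment this
# would go into /opt/monitoring/config.json. Binding to :80 is what makes the
# privileged-port-then-setuid sequence relevant.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
{
  "user": "clustercockpit",
  "group": "clustercockpit",
  "addr": ":80"
}
EOF
grep -c '"user"' "$CFG"   # 1
```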
```sh
# 1. Clone this repository somewhere in your home
git clone git@github.com:ClusterCockpit/cc-backend.git <DSTDIR>

# 2. (Optional) Install dependencies and build. In general it is recommended to use the provided release binaries.
cd <DSTDIR>
make
sudo mkdir -p /opt/monitoring/cc-backend/
cp ./cc-backend /opt/monitoring/cc-backend/

# 3. Modify the `./config.json` and env-template.txt file from the configs directory to your liking and put it in the target directory
cp ./configs/config.json /opt/monitoring/config.json
cp ./configs/env-template.txt /opt/monitoring/.env
vim /opt/monitoring/config.json # do your thing...
vim /opt/monitoring/.env # do your thing...

# 4. (Optional) Customization: Add your versions of the login view, legal texts, and logo image.
# You may use the templates in `./web/templates` as a blueprint. Every override is a separate file.
cp login.tmpl /opt/monitoring/cc-backend/var/
cp imprint.tmpl /opt/monitoring/cc-backend/var/
cp privacy.tmpl /opt/monitoring/cc-backend/var/
# Ensure your logo and any images used in your login template have a suitable size.
cp -R img /opt/monitoring/cc-backend/img

# 5. Copy the systemd service unit file. You may adapt it to your needs.
sudo cp ./init/clustercockpit.service /etc/systemd/system/clustercockpit.service
# 6. Enable and start the server
sudo systemctl enable clustercockpit.service # optional (if done, (re-)starts automatically)
sudo systemctl start clustercockpit.service

# Check what's going on:
sudo systemctl status clustercockpit.service
sudo journalctl -u clustercockpit.service
```
# Recommended workflow for deployment

It is recommended to install all ClusterCockpit components in a common directory, e.g. `/opt/monitoring`, `/var/monitoring` or `/var/clustercockpit`.
In the following we use `/opt/monitoring`.

Two systemd services run on the central monitoring server:

* clustercockpit: The `cc-backend` binary in `/opt/monitoring/cc-backend`.
* cc-metric-store: The `cc-metric-store` binary in `/opt/monitoring/cc-metric-store`.

ClusterCockpit is deployed as a single binary that embeds all static assets.
We recommend keeping all `cc-backend` binary versions in a folder `archive` and
linking the currently active one from the `cc-backend` root.
This allows for easy roll-back in case something doesn't work.
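The archive/symlink layout can be sketched like this (using a scratch directory so the commands are self-contained; the real root would be `/opt/monitoring/cc-backend`, and the date tag is an example):

```sh
# Keep versioned binaries in archive/ and point a symlink at the active one.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/archive"
touch "$ROOT/archive/20230626-cc-backend"              # stand-in for the binary
ln -sfn archive/20230626-cc-backend "$ROOT/cc-backend" # activate this version
readlink "$ROOT/cc-backend"                            # archive/20230626-cc-backend
```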
## Workflow to deploy a new version

This example assumes the DB and job archive versions did not change.
* Stop systemd service: `$ sudo systemctl stop clustercockpit.service`
* Backup the sqlite DB file and Job archive directory tree!
* Copy `cc-backend` binary to `/opt/monitoring/cc-backend/archive` (Tip: Use a
  date tag like `YYYYMMDD-cc-backend`)
* Link from cc-backend root to current version
* Start systemd service: `$ sudo systemctl start clustercockpit.service`
* Check if everything is ok: `$ sudo systemctl status clustercockpit.service`
* Check log for issues: `$ sudo journalctl -u clustercockpit.service`
* Check the ClusterCockpit web frontend and your Slurm adapters if anything is broken!
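Rolling back after a failed deployment is then just re-pointing the symlink and restarting the service. A scratch-dir sketch (date tags and the root path are examples; in production the root is `/opt/monitoring/cc-backend` and you would follow up with `sudo systemctl restart clustercockpit.service`):

```sh
# Two archived versions; switch the active symlink back to the previous one.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/archive"
touch "$ROOT/archive/20230525-cc-backend" "$ROOT/archive/20230626-cc-backend"
ln -sfn archive/20230626-cc-backend "$ROOT/cc-backend"   # current version
ln -sfn archive/20230525-cc-backend "$ROOT/cc-backend"   # roll back
readlink "$ROOT/cc-backend"                              # archive/20230525-cc-backend
```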