Merge pull request #1 from ClusterCockpit/develop-new

Reinit cc-docker
Commit db0571dab1 by Jan Eitzinger, 2023-06-22 08:56:13 +02:00 (committed via GitHub)
26 changed files with 623 additions and 2616 deletions

.gitignore

@@ -1,6 +1,10 @@
 data/job-archive
 data/job-archive/**
-data/symfony
-data/symfony/**
 data/influxdb
 data/sqldata
+data/cc-metric-store
+cc-backend
+cc-backend/**
+.vscode
+docker-compose.yml
+.env

README.md

@@ -1,89 +1,69 @@
 # cc-docker
-This is a `docker compose` setup to try out the complete ClusterCockpit Application Stack including all external components. This docker setup can be easily configured to be used as demo or as a development environment.
-For a docker setup targeted to server environments you may have a look at https://github.com/ClusterCockpit/cc-docker-server .
+**Please note: This repo is under ongoing construction**
+This is a `docker-compose` setup which provides a quick-start environment for ClusterCockpit development and testing, using the modules `cc-backend` (GoLang) and `cc-frontend` (Svelte). A number of services are readily available as docker containers (NATS, cc-metric-store, InfluxDB, LDAP), or easily added by manual configuration (MySQL).
 It includes the following containers:
-* mysql
-* php-fpm
-* nginx
-* redis
-* openldap
-* influxdb
-* phpmyadmin
-Settings are configured in `.env`.
-The setup comes with fixture data for a job archive, InfluxDB, MySQL, and an LDAP user directory.
+* nats (Default)
+* cc-metric-store (Default)
+* influxdb (Default)
+* openldap (Default)
+* mysql (Optional)
+* mariadb (Optional)
+* phpmyadmin (Optional)
+The setup comes with fixture data for a job archive, cc-metric-store checkpoints, InfluxDB, MySQL, and an LDAP user directory.
 ## Known Issues
 * `docker-compose` installed on Ubuntu (18.04, 20.04) via `apt-get` can not correctly parse `docker-compose.yml` due to version differences. Install the latest version of `docker-compose` from https://docs.docker.com/compose/install/ instead.
-* You need to ensure that no other web server is running on port 80 (e.g. Apache2). If port 80 is already in use, edit the NGINX_PORT environment variable in `.env`.
+* You need to ensure that no other web server is running on ports 8080 (cc-backend), 8081 (phpmyadmin), 8084 (cc-metric-store), 8086 (InfluxDB), 4222 and 8222 (NATS), or 3306 (MySQL). If one or more ports are already in use, you have to adapt the related config accordingly.
 * Existing VPN connections sometimes cause problems with docker. If `docker-compose` does not start up correctly, try disabling any active VPN connection. Refer to https://stackoverflow.com/questions/45692255/how-make-openvpn-work-with-docker for further information.
-## Configuration
-The main branch of this repository will work with the latest ClusterCockpit release (main branch) including the fixture data.
-For ClusterCockpit development on the develop branch use the cc-docker develop branch.
-While many aspects of this docker compose setup can be configured, you usually only need to adapt the following three settings in `.env`:
-* `CLUSTERCOCKPIT_BRANCH`: The branch to checkout from the ClusterCockpit git repository. May also be a tag. This should not be changed, as the fixture data may not be compatible between stable and develop.
-* `APP_CLUSTERCOCKPIT_INIT` (Default: true): Whether the Symfony tree (located at `./data/symfony`) should be deleted and freshly cloned and initialized on every container startup.
-* `APP_ENVIRONMENT` (Default: `dev`): The Symfony app environment. With `dev` you get the Symfony debug toolbar and more extensive error handling. The `prod` environment is a setup for production use.
+## Configuration Templates
+Located in `./templates`:
+* `docker-compose.yml.default`: Docker-Compose file to set up cc-metric-store, InfluxDB, MariaDB, phpMyAdmin, and LDAP containers (Default). Used in `setupDev.sh`.
+* `docker-compose.yml.mysql`: Docker-Compose configuration template if MySQL is desired instead of MariaDB.
+* `env.default`: Environment variables for the setup with cc-metric-store, InfluxDB, MariaDB, phpMyAdmin, and LDAP containers (Default). Used in `setupDev.sh`.
+* `env.mysql`: Additional environment variables required if MySQL is desired instead of MariaDB.
 ## Setup
-* `$ cd data`
-* `$ ./init.sh`: **NOTICE** The script will download files of a total size of 338MB (mostly for the InfluxDB data).
-If you want to test the REST API and also write to the job archive from ClusterCockpit you have to comment out the following lines in `./data/init.sh`:
+1. Clone the `cc-backend` repository in the chosen base folder: `$> git clone https://github.com/ClusterCockpit/cc-backend.git`
+2. Run `$> ./setupDev.sh`. **NOTICE** The script will download files of a total size of 338MB (mostly for the InfluxDB data).
+3. The setup script launches the supporting container stack in the background automatically if everything went well. Run `$> ./cc-backend/cc-backend` to start `cc-backend`.
+4. By default, you can access `cc-backend` in your browser at `http://localhost:8080`. You can shut down the cc-backend server by pressing `CTRL-C`; remember to also shut down all containers via `$> docker-compose down` afterwards.
+5. You can restart the containers with `$> docker-compose up -d`.
+## Post-Setup Adjustment for using `influxdb`
+When using `influxdb` as the metric database, one must adjust the following files:
+* `cc-backend/var/job-archive/emmy/cluster.json`
+* `cc-backend/var/job-archive/woody/cluster.json`
+In the JSON, exchange the content of the `metricDataRepository` entry (by default configured for `cc-metric-store`) with:
 ```
-echo "This script needs to chown the job-archive directory so that the application can write to it:"
-sudo chown -R 82:82 ./job-archive
+"metricDataRepository": {
+  "kind": "influxdb",
+  "url": "http://localhost:8086",
+  "token": "egLfcf7fx0FESqFYU3RpAAbj",
+  "bucket": "ClusterCockpit",
+  "org": "ClusterCockpit",
+  "skiptls": false
+}
 ```
-After that, from the root of the cc-docker sandbox you can start up the containers with:
-* `$ docker-compose up`
-* Wait... and wait a little longer
-Before you can use ClusterCockpit, the following disclaimer must be shown. Downloading and building all ClusterCockpit components may take up to several minutes:
-```
--------------------- ---------------------------------
-Symfony
--------------------- ---------------------------------
-Version 5.3.7
-Long-Term Support No
-End of maintenance 01/2022 (in +140 days)
-End of life 01/2022 (in +140 days)
--------------------- ---------------------------------
-Kernel
--------------------- ---------------------------------
-Type App\Kernel
-Environment dev
-Debug true
-Charset UTF-8
-Cache directory ./var/cache/dev (6.5 MiB)
-Build directory ./var/cache/dev (6.5 MiB)
-Log directory ./var/log (249 B)
--------------------- ---------------------------------
-PHP
--------------------- ---------------------------------
-Version 8.0.10
-Architecture 64 bits
-Intl locale n/a
-Timezone UTC (2021-09-13T09:41:33+00:00)
-OPcache true
-APCu false
-Xdebug false
--------------------- ---------------------------------
-```
-By default, you can access ClusterCockpit in your browser at `http://localhost`. If the `NGINX_PORT` environment variable was changed, you have to use `http://localhost:$PORT`. You can shut down the containers by pressing `CTRL-C`. Refer to the common docker documentation on how to start the environment in the background.
 ## Usage
-Credentials for the preconfigured admin user are:
-* User: `admin`
+Credentials for the preconfigured demo user are:
+* User: `demo`
 * Password: `AdminDev`
 You can also login as regular user using any credential in the LDAP user directory at `./data/ldap/users.ldif`.
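Editor's note: a hand edit of the two `cluster.json` files works fine, but the swap described in the Post-Setup Adjustment section can also be scripted. A minimal sketch, assuming `jq` is installed; the token and URL are the example values from the README snippet above, not secrets you should keep:
```
# Replace the metricDataRepository entry in both demo cluster.json files.
for f in cc-backend/var/job-archive/emmy/cluster.json \
         cc-backend/var/job-archive/woody/cluster.json; do
    jq '.metricDataRepository = {
          "kind": "influxdb",
          "url": "http://localhost:8086",
          "token": "egLfcf7fx0FESqFYU3RpAAbj",
          "bucket": "ClusterCockpit",
          "org": "ClusterCockpit",
          "skiptls": false
        }' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```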

@@ -0,0 +1,19 @@
FROM golang:1.17
RUN apt-get update
RUN apt-get -y install git
RUN git clone https://github.com/ClusterCockpit/cc-metric-store.git /cc-metric-store
RUN cd /cc-metric-store && go build
# Reactivate when latest commit is available
#RUN go get -d -v github.com/ClusterCockpit/cc-metric-store
#RUN go install -v github.com/ClusterCockpit/cc-metric-store@latest
RUN mv /cc-metric-store/cc-metric-store /go/bin
COPY config.json /go/bin
VOLUME /data
WORKDIR /go/bin
CMD ["./cc-metric-store"]
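Editor's note: the image above is normally built and started through `docker-compose`, but for debugging it can be run standalone. A minimal sketch, assuming the build context is the `cc-metric-store/` directory of this repo and using the HTTP API port from the `config.json` below; the image tag is an arbitrary choice:
```
# Build the image from its context and run it with a host directory
# mounted onto the declared /data volume.
docker build -t cc-metric-store-dev ./cc-metric-store
docker run --rm -p 8081:8081 -v "$(pwd)/data/cc-metric-store:/data" cc-metric-store-dev
# Note: the bundled config points at nats://cc-nats:4222, so a standalone
# run will log NATS connection errors unless that host is reachable.
```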

@@ -0,0 +1,28 @@
{
  "metrics": {
    "clock": { "frequency": 60, "aggregation": null, "scope": "node" },
    "cpi": { "frequency": 60, "aggregation": null, "scope": "node" },
    "cpu_load": { "frequency": 60, "aggregation": null, "scope": "node" },
    "flops_any": { "frequency": 60, "aggregation": null, "scope": "node" },
    "flops_dp": { "frequency": 60, "aggregation": null, "scope": "node" },
    "flops_sp": { "frequency": 60, "aggregation": null, "scope": "node" },
    "ib_bw": { "frequency": 60, "aggregation": null, "scope": "node" },
    "lustre_bw": { "frequency": 60, "aggregation": null, "scope": "node" },
    "mem_bw": { "frequency": 60, "aggregation": null, "scope": "node" },
    "mem_used": { "frequency": 60, "aggregation": null, "scope": "node" },
    "rapl_power": { "frequency": 60, "aggregation": null, "scope": "node" }
  },
  "checkpoints": {
    "interval": 100000000000,
    "directory": "/data/checkpoints",
    "restore": 100000000000
  },
  "archive": {
    "interval": 100000000000,
    "directory": "/data/archive"
  },
  "retention-in-memory": 100000000000,
  "http-api-address": "0.0.0.0:8081",
  "nats": "nats://cc-nats:4222",
  "jwt-public-key": "kzfYrYy+TzpanWZHJ5qSdMj5uKUWgq74BWhQG6copP0="
}
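Editor's note: the large integer durations in this config are easy to misread. They look like Go `time.Duration` values serialized in nanoseconds; that is an assumption from their magnitude, not something the file states, but a one-line check makes it plausible:
```
# 100000000000 ns / 1e9 = 100 s for the checkpoint interval, restore
# window, archive interval, and in-memory retention alike.
echo $((100000000000 / 1000000000))   # prints 100
```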


File diff suppressed because one or more lines are too long

@@ -1,112 +0,0 @@
services:
  db:
    container_name: cc-db
    image: mysql:8.0.22
    command: ["--default-authentication-plugin=mysql_native_password"]
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - ${DATADIR}/sql-init:/docker-entrypoint-initdb.d
      # - ${DATADIR}/sqldata:/var/lib/mysql
    cap_add:
      - SYS_NICE
  influxdb:
    container_name: cc-influxdb
    image: influxdb
    command: ["--reporting-disabled"]
    environment:
      DOCKER_INFLUXDB_INIT_MODE: setup
      DOCKER_INFLUXDB_INIT_USERNAME: symfony
      DOCKER_INFLUXDB_INIT_PASSWORD: ${INFLUXDB_PASSWORD}
      DOCKER_INFLUXDB_INIT_ORG: ${INFLUXDB_ORG}
      DOCKER_INFLUXDB_INIT_BUCKET: ${INFLUXDB_BUCKET}
      DOCKER_INFLUXDB_INIT_RETENTION: 100w
      DOCKER_INFLUXDB_INIT_ADMIN_TOKEN: ${INFLUXDB_ADMIN_TOKEN}
    ports:
      - "127.0.0.1:${INFLUXDB_PORT}:8086"
    volumes:
      - ${DATADIR}/influxdb/data:/var/lib/influxdb2
      - ${DATADIR}/influxdb/config:/etc/influxdb2
  openldap:
    container_name: cc-ldap
    image: osixia/openldap:1.5.0
    command: --copy-service --loglevel debug
    environment:
      - LDAP_ADMIN_PASSWORD=${LDAP_ADMIN_PASSWORD}
      - LDAP_ORGANISATION=${LDAP_ORGANISATION}
      - LDAP_DOMAIN=${LDAP_DOMAIN}
    volumes:
      - ${DATADIR}/ldap:/container/service/slapd/assets/config/bootstrap/ldif/custom
  redis:
    container_name: cc-redis
    image: redis
    command: [
      "redis-server",
      "--save", "",
      "--maxmemory", "2gb",
      "--maxmemory-policy", "allkeys-lru"]
  php:
    container_name: cc-php
    build:
      context: ./php-fpm
      args:
        PHP_XDEBUG_INIT: ${PHP_XDEBUG_INIT}
        PHP_XDEBUG_MODE: ${PHP_XDEBUG_MODE}
        PHP_XDEBUG_CLIENT_PORT: ${PHP_XDEBUG_CLIENT_PORT}
        PHP_XDEBUG_CLIENT_HOST: ${PHP_XDEBUG_CLIENT_HOST}
        SYMFONY_CLI_VERSION: 4.23.2
        MYSQL_DATABASE: ${MYSQL_DATABASE}
        MYSQL_USER: ${MYSQL_USER}
        MYSQL_PASSWORD: ${MYSQL_PASSWORD}
        LDAP_PASSWORD: ${LDAP_ADMIN_PASSWORD}
        INFLUXDB_PASSWORD: ${INFLUXDB_PASSWORD}
        INFLUXDB_PORT: ${INFLUXDB_PORT}
        INFLUXDB_ADMIN_TOKEN: ${INFLUXDB_ADMIN_TOKEN}
        INFLUXDB_ORG: ${INFLUXDB_ORG}
        INFLUXDB_BUCKET: ${INFLUXDB_BUCKET}
        INFLUXDB_SSL: ${INFLUXDB_SSL}
        APP_ENVIRONMENT: ${APP_ENVIRONMENT}
    environment:
      - APP_CLUSTERCOCKPIT_INIT=${APP_CLUSTERCOCKPIT_INIT}
      - CLUSTERCOCKPIT_BRANCH=${CLUSTERCOCKPIT_BRANCH}
      - APP_JWT_PUB_KEY=${APP_JWT_PUB_KEY}
      - APP_JWT_PRIV_KEY=${APP_JWT_PRIV_KEY}
    volumes:
      - ${DATADIR}/symfony:/var/www/symfony:cached
      - ${DATADIR}/job-archive:/var/lib/job-archive:cached
    depends_on:
      - db
      - redis
      - influxdb
  nginx:
    container_name: cc-nginx
    build:
      context: ./nginx
    ports:
      - "127.0.0.1:${NGINX_PORT}:80"
    depends_on:
      - php
    environment:
      - NGINX_ENVSUBST_OUTPUT_DIR=/etc/nginx/conf.d
      - NGINX_ENVSUBST_TEMPLATE_DIR=/etc/nginx/templates
      - NGINX_ENVSUBST_TEMPLATE_SUFFIX=.template
    volumes:
      - ${DATADIR}/symfony:/var/www/symfony:cached
  phpmyadmin:
    container_name: cc-phpmyadmin
    image: phpmyadmin
    environment:
      - PMA_HOST=cc-db
      - PMA_USER=root
      - PMA_PASSWORD=${MYSQL_ROOT_PASSWORD}
    ports:
      - "127.0.0.1:${PHPMYADMIN_PORT}:80"

migrateTimestamps.pl (executable)

@@ -0,0 +1,171 @@
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use File::Path qw( make_path rmtree );
use Cpanel::JSON::XS qw( decode_json encode_json );
use File::Slurp;
use Data::Dumper;
use Time::Piece;
use Sort::Versions;
use REST::Client;
### JOB-ARCHIVE
my $localtime = localtime;
my $epochtime = $localtime->epoch;
my $archiveTarget = './cc-backend/var/job-archive';
my $archiveSrc = './data/job-archive-source';
my @ArchiveClusters;
# Get clusters by job-archive/$subfolder
opendir my $dh, $archiveSrc or die "can't open directory: $!";
while ( readdir $dh ) {
chomp; next if $_ eq '.' or $_ eq '..' or $_ eq 'job-archive';
my $cluster = $_;
push @ArchiveClusters, $cluster;
}
# start for jobarchive
foreach my $cluster ( @ArchiveClusters ) {
print "Starting to update start- and stoptimes in job-archive for $cluster\n";
opendir my $dhLevel1, "$archiveSrc/$cluster" or die "can't open directory: $!";
while ( readdir $dhLevel1 ) {
chomp; next if $_ eq '.' or $_ eq '..';
my $level1 = $_;
if ( -d "$archiveSrc/$cluster/$level1" ) {
opendir my $dhLevel2, "$archiveSrc/$cluster/$level1" or die "can't open directory: $!";
while ( readdir $dhLevel2 ) {
chomp; next if $_ eq '.' or $_ eq '..';
my $level2 = $_;
my $jobSource = "$archiveSrc/$cluster/$level1/$level2";
my $jobTarget = "$archiveTarget/$cluster/$level1/$level2/";
my $jobOrigin = $jobSource;
# check if files are directly accessible (old format) else get subfolders as file and update path
if ( ! -e "$jobSource/meta.json") {
my @folders = read_dir($jobSource);
if (!@folders) {
next;
}
# Only use first subfolder for now TODO
$jobSource = "$jobSource/".$folders[0];
}
# check if subfolder contains file, else remove source and skip
if ( ! -e "$jobSource/meta.json") {
# rmtree $jobOrigin;
next;
}
my $rawstr = read_file("$jobSource/meta.json");
my $json = decode_json($rawstr);
# NOTE Start meta.json iteration here
# my $random_number = int(rand(UPPERLIMIT)) + LOWERLIMIT;
# Set new startTime: Between 5 days and 1 day before now
# Remove id from attributes
$json->{startTime} = $epochtime - (int(rand(432000)) + 86400);
$json->{stopTime} = $json->{startTime} + $json->{duration};
# Add starttime subfolder to target path
$jobTarget .= $json->{startTime};
# target is not directory
if ( not -d $jobTarget ){
# print "Writing files\n";
# print "$cluster/$level1/$level2\n";
make_path($jobTarget);
my $outstr = encode_json($json);
write_file("$jobTarget/meta.json", $outstr);
my $datstr = read_file("$jobSource/data.json");
write_file("$jobTarget/data.json", $datstr);
} else {
# rmtree $jobSource;
}
}
}
}
}
print "Done for job-archive\n";
sleep(1);
## CHECKPOINTS
chomp(my $checkpointStart=`date --date 'TZ="Europe/Berlin" 0:00 7 days ago' +%s`);
my $halfday = 43200;
my $checkpTarget = './data/cc-metric-store/checkpoints';
my $checkpSource = './data/cc-metric-store-source/checkpoints';
my @CheckpClusters;
# Get clusters by cc-metric-store/$subfolder
opendir my $dhc, $checkpSource or die "can't open directory: $!";
while ( readdir $dhc ) {
chomp; next if $_ eq '.' or $_ eq '..' or $_ eq 'job-archive';
my $cluster = $_;
push @CheckpClusters, $cluster;
}
# start for checkpoints
foreach my $cluster ( @CheckpClusters ) {
print "Starting to update checkpoint filenames and data starttimes for $cluster\n";
opendir my $dhLevel1, "$checkpSource/$cluster" or die "can't open directory: $!";
while ( readdir $dhLevel1 ) {
chomp; next if $_ eq '.' or $_ eq '..';
# Nodename as level1-folder
my $level1 = $_;
if ( -d "$checkpSource/$cluster/$level1" ) {
my $nodeSource = "$checkpSource/$cluster/$level1/";
my $nodeTarget = "$checkpTarget/$cluster/$level1/";
my $nodeOrigin = $nodeSource;
my @files;
if ( -e "$nodeSource/1609459200.json") { # 1609459200 == First Checkpoint time in latest dump
@files = read_dir($nodeSource);
my $length = @files;
if (!@files || $length != 14) { # needs 14 files == 7 days worth of data
next;
}
} else {
# rmtree $nodeOrigin;
next;
}
my @sortedFiles = sort { versioncmp($a,$b) } @files; # sort alphanumerically: _Really_ start with index == 0 == 1609459200.json
if ( not -d $nodeTarget ){
# print "processing files for $level1 \n";
make_path($nodeTarget);
while (my ($index, $file) = each(@sortedFiles)) {
# print "$file\n";
my $rawstr = read_file("$nodeSource/$file");
my $json = decode_json($rawstr);
my $newTimestamp = $checkpointStart + ($index * $halfday);
# Get Diff from old Timestamp
my $timeDiff = $newTimestamp - $json->{from};
# Set new timestamp
$json->{from} = $newTimestamp;
foreach my $metric (keys %{$json->{metrics}}) {
$json->{metrics}->{$metric}->{start} += $timeDiff;
}
my $outstr = encode_json($json);
write_file("$nodeTarget/$newTimestamp.json", $outstr);
}
} else {
# rmtree $nodeSource;
}
}
}
}
print "Done for checkpoints\n";

@@ -1,8 +0,0 @@
FROM nginx:mainline-alpine
RUN mkdir -p /etc/nginx/templates
COPY templates/* /etc/nginx/templates/
COPY nginx.conf /etc/nginx/
CMD ["nginx"]
EXPOSE 80

@@ -1,48 +0,0 @@
user nginx;
worker_processes 4;
pid /var/run/nginx.pid;
events {
worker_connections 2048;
multi_accept on;
use epoll;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
client_body_buffer_size 256k;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;
client_body_temp_path /tmp 1 2;
client_body_in_file_only off;
keepalive_timeout 90;
send_timeout 120;
reset_timedout_connection on;
open_file_cache max=2000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;
access_log off;
error_log off;
gzip on;
gzip_comp_level 9;
gzip_min_length 200;
gzip_types text/plain text/html text/css application/json;
include /etc/nginx/conf.d/*.conf;
}
daemon off;

@@ -1,23 +0,0 @@
server {
server_name localhost;
root /var/www/symfony/public;
location / {
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
fastcgi_pass php-upstream;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param HTTPS off;
fastcgi_read_timeout 300;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
internal;
}
location ~ \.php$ {
return 404;
}
}

@@ -1,3 +0,0 @@
upstream php-upstream {
server php:9001;
}

@@ -1,99 +0,0 @@
FROM php:8.0-fpm
RUN apt-get update && apt-get install -y \
$PHPIZE_DEPS \
git \
wget \
zip \
gettext \
bash \
libldb-dev \
libldap-2.4-2 \
libldap-common \
libldap2-dev \
npm \
nodejs
RUN apt-get clean
RUN npm install --global yarn
RUN docker-php-ext-install ldap \
mysqli \
pdo_mysql \
opcache
# Enable php8-xdebug if $PHP_XDEBUG_INIT is true
ARG PHP_XDEBUG_INIT="false"
ARG PHP_XDEBUG_MODE=off
ARG PHP_XDEBUG_CLIENT_PORT=5902
ARG PHP_XDEBUG_CLIENT_HOST=host.docker.internal
COPY xdebug.ini /etc/php8/conf.d/xdebug.ini.template
COPY error_reporting.ini /usr/local/etc/php/conf.d/error_reporting.ini
RUN if [[ "$PHP_XDEBUG_INIT" == "true" ]]; then \
pecl install xdebug-3.0.4; \
docker-php-ext-enable xdebug; \
export PHP_XDEBUG_MODE=$PHP_XDEBUG_MODE; \
export PHP_XDEBUG_CLIENT_PORT=$PHP_XDEBUG_CLIENT_PORT; \
export PHP_XDEBUG_CLIENT_HOST=$PHP_XDEBUG_CLIENT_HOST; \
envsubst < /etc/php8/conf.d/xdebug.ini.template > /etc/php8/conf.d/xdebug.ini; \
cp /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini.back; \
cp /etc/php8/conf.d/xdebug.ini /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
rm -f /etc/php8/conf.d/xdebug.ini.template; \
fi
RUN curl -sS https://getcomposer.org/installer | tee composer-setup.php \
&& php composer-setup.php && rm composer-setup.php* \
&& chmod +x composer.phar && mv composer.phar /usr/bin/composer
ARG SYMFONY_CLI_VERSION
RUN wget https://github.com/symfony/cli/releases/download/v$SYMFONY_CLI_VERSION/symfony_linux_amd64.gz \
&& gzip -d symfony_linux_amd64.gz \
&& mv symfony_linux_amd64 symfony \
&& chmod +x symfony \
&& mv symfony /usr/local/bin/
RUN mkdir -p /var/lib/job-archive
RUN mkdir -p /var/www/symfony
VOLUME /var/www/symfony /var/lib/job-archive
COPY php.ini /usr/local/etc/php/
COPY symfony.ini /usr/local/etc/php/conf.d/
COPY symfony.ini /usr/local/etc/php/cli/conf.d/
COPY symfony.pool.conf /usr/local/etc/php/php-fpm.d/
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ARG APP_ENVIRONMENT
ENV APP_ENV=${APP_ENVIRONMENT}
ENV APP_SECRET=${APP_SECRET}
ENV APP_JWT_PUB_KEY="${APP_JWT_PUB_KEY}"
ENV APP_JWT_PRIV_KEY="${APP_JWT_PRIV_KEY}"
ENV APP_DEBUG=1
ENV REDIS_URL=redis://cc-redis
ENV LDAP_URL=ldap://cc-ldap
ARG INFLUXDB_PORT
ARG INFLUXDB_PASSWORD
ARG INFLUXDB_ADMIN_TOKEN
ARG INFLUXDB_ORG
ARG INFLUXDB_BUCKET
ARG INFLUXDB_SSL
ENV INFLUXDB_URL=http://cc-influxdb:${INFLUXDB_PORT}
ENV INFLUXDB_SSL=${INFLUXDB_SSL}
ENV INFLUXDB_TOKEN=${INFLUXDB_ADMIN_TOKEN}
ENV INFLUXDB_ORG=${INFLUXDB_ORG}
ENV INFLUXDB_BUCKET=${INFLUXDB_BUCKET}
ARG LDAP_PASSWORD
ENV LDAP_PW=${LDAP_PASSWORD}
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_DATABASE
ENV DATABASE_URL=mysql://${MYSQL_USER}:${MYSQL_PASSWORD}@cc-db:3306/${MYSQL_DATABASE}
ENV CORS_ALLOW_ORIGIN=^https?://(localhost|127\\.0\\.0\\.1)(:[0-9]+)?$
WORKDIR /var/www/symfony
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm", "-F", "-y/usr/local/etc/php/php-fpm.d/symfony.pool.conf"]
EXPOSE 9001

@@ -1,24 +0,0 @@
#!/usr/bin/env bash
if [ "$APP_CLUSTERCOCKPIT_INIT" = true ]; then
rm -rf /var/www/symfony/* /var/www/symfony/.??*
git clone -b $CLUSTERCOCKPIT_BRANCH https://github.com/ClusterCockpit/ClusterCockpit .
if [ "$APP_ENV" = dev ]; then
composer install --no-progress --optimize-autoloader
yarn install
yarn encore dev
else
composer install --no-dev --no-progress --optimize-autoloader
yarn install
yarn encore production
fi
ln -s /var/lib/job-archive var/job-archive
chown -R www-data:www-data /var/www/symfony/* /var/www/symfony/.??*
fi
# Reports php environment on container startup
php bin/console about
exec "$@"

@@ -1 +0,0 @@
error_reporting=E_ALL

File diff suppressed because it is too large

@@ -1 +0,0 @@
date.timezone = UTC

@@ -1,100 +0,0 @@
; Start a new pool named 'symfony'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('symfony' here)
[symfony]
; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
; will be used.
user = www-data
group = www-data
; The address on which to accept FastCGI requests.
; Valid syntaxes are:
; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on
; a specific port;
; 'port' - to listen on a TCP socket to all addresses on a
; specific port;
; '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = 0.0.0.0:9001
; Choose how the process manager will control the number of child processes.
; Possible Values:
; static - a fixed number (pm.max_children) of child processes;
; dynamic - the number of child processes are set dynamically based on the
; following directives. With this process management, there will be
; always at least 1 children.
; pm.max_children - the maximum number of children that can
; be alive at the same time.
; pm.start_servers - the number of children created on startup.
; pm.min_spare_servers - the minimum number of children in 'idle'
; state (waiting to process). If the number
; of 'idle' processes is less than this
; number then some children will be created.
; pm.max_spare_servers - the maximum number of children in 'idle'
; state (waiting to process). If the number
; of 'idle' processes is greater than this
; number then some children will be killed.
; ondemand - no children are created at startup. Children will be forked when
; new requests will connect. The following parameter are used:
; pm.max_children - the maximum number of children that
; can be alive at the same time.
; pm.process_idle_timeout - The number of seconds after which
; an idle process will be killed.
; Note: This value is mandatory.
pm = dynamic
; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI. The below defaults are based on a server without much resources. Don't
; forget to tweak pm.* to fit your needs.
; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand'
; Note: This value is mandatory.
pm.max_children = 20
; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 2
; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 1
; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 3
;---------------------
; Make specific Docker environment variables available to PHP
env[APP_ENV] = $APP_ENV
env[APP_SECRET] = $APP_SECRET
env[APP_JWT_PUB_KEY] = $APP_JWT_PUB_KEY
env[APP_JWT_PRIV_KEY] = $APP_JWT_PRIV_KEY
env[APP_DEBUG] = $APP_DEBUG
env[INFLUXDB_URL] = $INFLUXDB_URL
env[INFLUXDB_TOKEN] = $INFLUXDB_TOKEN
env[INFLUXDB_ORG] = $INFLUXDB_ORG
env[INFLUXDB_BUCKET] = $INFLUXDB_BUCKET
env[INFLUXDB_SSL] = $INFLUXDB_SSL
env[DATABASE_URL] = $DATABASE_URL
env[REDIS_URL] = $REDIS_URL
env[LDAP_URL] = $LDAP_URL
env[LDAP_PW] = $LDAP_PW
env[CORS_ALLOW_ORIGIN] = $CORS_ALLOW_ORIGIN
; Catch worker output
catch_workers_output = yes
; Increase PHP memory limit (Default: 128M)
; Note: Required for loading large jobs from InfluxDB (>16 Nodes && >12h Duration)
php_admin_value[memory_limit] = 1024M

@@ -1,7 +0,0 @@
zend_extension=xdebug.so
[Xdebug]
xdebug.mode=${PHP_XDEBUG_MODE}
xdebug.client_port=${PHP_XDEBUG_CLIENT_PORT}
xdebug.client_host=${PHP_XDEBUG_CLIENT_HOST}
xdebug.start_with_request=yes

@@ -0,0 +1,100 @@
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use File::Path qw( make_path rmtree );
use Cpanel::JSON::XS qw( decode_json encode_json );
use File::Slurp;
use Data::Dumper;
use Time::Piece;
use Sort::Versions;
use REST::Client;
### INFLUXDB
my $newCheckpoints = './data/cc-metric-store/checkpoints';
my @CheckpClusters;
my $verbose = 1;
my $restClient = REST::Client->new();
$restClient->setHost('http://localhost:8086'); # Adapt port here!
$restClient->addHeader('Authorization', "Token 74008ea2a8dad5e6f856838a90c6392e"); # compare .env file
$restClient->addHeader('Content-Type', 'text/plain; charset=utf-8');
$restClient->addHeader('Accept', 'application/json');
$restClient->getUseragent()->ssl_opts(SSL_verify_mode => 0); # Temporary: Disable Cert Check
$restClient->getUseragent()->ssl_opts(verify_hostname => 0); # Temporary: Disable Cert Check
# Get clusters by cc-metric-store/$subfolder
opendir my $dhc, $newCheckpoints or die "can't open directory: $!";
while ( readdir $dhc ) {
chomp; next if $_ eq '.' or $_ eq '..' or $_ eq 'job-archive';
my $cluster = $_;
push @CheckpClusters, $cluster;
}
# start to read checkpoints for influx
foreach my $cluster ( @CheckpClusters ) {
print "Starting to read updated checkpoint-files into influx for $cluster\n";
opendir my $dhLevel1, "$newCheckpoints/$cluster" or die "can't open directory: $!";
while ( readdir $dhLevel1 ) {
chomp; next if $_ eq '.' or $_ eq '..';
my $level1 = $_;
if ( -d "$newCheckpoints/$cluster/$level1" ) {
my $nodeSource = "$newCheckpoints/$cluster/$level1/";
my @files = read_dir($nodeSource);
my $length = @files;
if (!@files || $length != 14) { # needs 14 files == 7 days worth of data
next;
}
my @sortedFiles = sort { versioncmp($a,$b) } @files; # sort alphanumerically: _Really_ start with index == 0 == 1609459200.json
my $nodeMeasurement;
foreach my $file (@sortedFiles) {
# print "$file\n";
my $rawstr = read_file("$nodeSource/$file");
my $json = decode_json($rawstr);
my $fileMeasurement;
foreach my $metric (keys %{$json->{metrics}}) {
my $start = $json->{metrics}->{$metric}->{start};
my $timestep = $json->{metrics}->{$metric}->{frequency};
my $data = $json->{metrics}->{$metric}->{data};
my $length = @$data;
my $measurement;
while (my ($index, $value) = each(@$data)) {
if ($value) {
my $timestamp = $start + ($timestep * $index);
$measurement .= "$metric,cluster=$cluster,hostname=$level1,type=node value=".$value." $timestamp"."\n";
}
}
# Use v2 API for Influx2
if ($measurement) {
# print "Adding: #VALUES $length KEY $metric"."\n";
$fileMeasurement .= $measurement;
}
}
if ($fileMeasurement) {
$nodeMeasurement .= $fileMeasurement;
}
}
$restClient->POST("/api/v2/write?org=ClusterCockpit&bucket=ClusterCockpit&precision=s", "$nodeMeasurement"); # compare .env for bucket and org
my $responseCode = $restClient->responseCode();
if ( $responseCode eq '204') {
if ( $verbose ) {
print "INFLUX API WRITE: CLUSTER $cluster HOST $level1"."\n";
};
} else {
if ( $responseCode ne '422' ) { # Exclude High Frequency Error 422 - Temporary!
my $response = $restClient->responseContent();
print "INFLUX API WRITE ERROR CODE ".$responseCode.": ".$response."\n";
};
};
}
}
}
print "Done for influx\n";

@@ -0,0 +1,12 @@
#!/bin/bash
echo "Will run prerequisites 'apt install python3-pip' and 'pip install sqlite3-to-mysql'"
sudo apt install python3-pip
pip install sqlite3-to-mysql
echo "'sqlite3mysql' requires running DB container, will fail otherwise."
# -f FILE -d DBNAME -u USER -h HOST -P PORT
~/.local/bin/sqlite3mysql -f job.db -d ClusterCockpit -u root --mysql-password root -h localhost -P 3306

setupDev.sh (executable)

@@ -0,0 +1,82 @@
#!/bin/bash
# Check cc-backend, touch job.db if it does not exist yet
if [ ! -d cc-backend ]; then
    echo "'cc-backend' not yet prepared! Please clone cc-backend repository before starting this script."
    echo -n "Stopped."
    exit
else
    cd cc-backend
    if [ ! -d var ]; then
        mkdir var
        touch var/job.db
    else
        echo "'cc-backend/var' exists. Cautiously exiting."
        echo -n "Stopped."
        exit
    fi
    cd .. # return to the cc-docker root; the data/ paths below are relative to it
fi
# Download unedited job-archive to ./data/job-archive-source
if [ ! -d data/job-archive-source ]; then
    cd data
    wget https://hpc-mover.rrze.uni-erlangen.de/HPC-Data/0x7b58aefb/eig7ahyo6fo2bais0ephuf2aitohv1ai/job-archive.tar.xz
    tar xJf job-archive.tar.xz
    mv ./job-archive ./job-archive-source
    rm ./job-archive.tar.xz
    cd ..
else
    echo "'data/job-archive-source' already exists!"
fi
# Download unedited checkpoint files to ./data/cc-metric-store-source/checkpoints
if [ ! -d data/cc-metric-store-source ]; then
    mkdir -p data/cc-metric-store-source/checkpoints
    cd data/cc-metric-store-source/checkpoints
    wget https://hpc-mover.rrze.uni-erlangen.de/HPC-Data/0x7b58aefb/eig7ahyo6fo2bais0ephuf2aitohv1ai/cc-metric-store-checkpoints.tar.xz
    tar xf cc-metric-store-checkpoints.tar.xz
    rm cc-metric-store-checkpoints.tar.xz
    cd ../../../
else
    echo "'data/cc-metric-store-source' already exists!"
fi
# Update timestamps
perl ./migrateTimestamps.pl
# Create archive folder for rewritten ccms checkpoints
if [ ! -d data/cc-metric-store/archive ]; then
    mkdir -p data/cc-metric-store/archive
fi
# cleanup sources
# rm -r ./data/job-archive-source
# rm -r ./data/cc-metric-store-source
# prepare folders for influxdb2
if [ ! -d data/influxdb ]; then
    mkdir -p data/influxdb/data
    mkdir -p data/influxdb/config/influx-configs
else
    echo "'data/influxdb' already exists!"
fi
# Check dotenv file and docker-compose.yml, copy templates if not present,
# and build docker services (test with -f: both targets are files, not directories)
if [ ! -f .env ]; then
    cp templates/env.default ./.env
fi
if [ ! -f docker-compose.yml ]; then
    cp templates/docker-compose.yml.default ./docker-compose.yml
fi
docker-compose build
./cc-backend/cc-backend --init-db --add-user demo:admin:AdminDev
docker-compose up -d
echo ""
echo "Setup complete, containers are up by default: Shut down with 'docker-compose down'."
echo "Use './cc-backend/cc-backend' to start cc-backend."
echo "Use scripts in /scripts to load data into influx or mariadb."
# ./cc-backend/cc-backend
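Editor's note: a few commands to verify the stack once the script finishes; the ports come from the compose template and `.env` defaults, and `/health` is the standard InfluxDB 2.x readiness route:
```
docker-compose ps                           # all services should be "Up"
curl -s http://localhost:8086/health        # InfluxDB readiness check
curl -s http://localhost:8222/varz | head   # NATS monitoring endpoint
```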

@@ -0,0 +1,75 @@
services:
  nats:
    container_name: cc-nats
    image: nats:alpine
    ports:
      - "4222:4222"
      - "8222:8222"
  cc-metric-store:
    container_name: cc-metric-store
    build:
      context: ./cc-metric-store
    ports:
      - "8084:8084"
    volumes:
      - ${DATADIR}/cc-metric-store:/data
    depends_on:
      - nats
  influxdb:
    container_name: cc-influxdb
    image: influxdb
    command: ["--reporting-disabled"]
    environment:
      DOCKER_INFLUXDB_INIT_MODE: setup
      DOCKER_INFLUXDB_INIT_USERNAME: devel
      DOCKER_INFLUXDB_INIT_PASSWORD: ${INFLUXDB_PASSWORD}
      DOCKER_INFLUXDB_INIT_ORG: ${INFLUXDB_ORG}
      DOCKER_INFLUXDB_INIT_BUCKET: ${INFLUXDB_BUCKET}
      DOCKER_INFLUXDB_INIT_RETENTION: 100w
      DOCKER_INFLUXDB_INIT_ADMIN_TOKEN: ${INFLUXDB_ADMIN_TOKEN}
    ports:
      - "127.0.0.1:${INFLUXDB_PORT}:8086"
    volumes:
      - ${DATADIR}/influxdb/data:/var/lib/influxdb2
      - ${DATADIR}/influxdb/config:/etc/influxdb2
  openldap:
    container_name: cc-ldap
    image: osixia/openldap:1.5.0
    command: --copy-service --loglevel debug
    environment:
      - LDAP_ADMIN_PASSWORD=${LDAP_ADMIN_PASSWORD}
      - LDAP_ORGANISATION=${LDAP_ORGANISATION}
      - LDAP_DOMAIN=${LDAP_DOMAIN}
    volumes:
      - ${DATADIR}/ldap:/container/service/slapd/assets/config/bootstrap/ldif/custom
  db:
    container_name: cc-db
    image: mariadb:latest
    command: ["--default-authentication-plugin=mysql_native_password"]
    environment:
      MARIADB_ROOT_PASSWORD: ${MARIADB_ROOT_PASSWORD}
      MARIADB_DATABASE: ${MARIADB_DATABASE}
      MARIADB_USER: ${MARIADB_USER}
      MARIADB_PASSWORD: ${MARIADB_PASSWORD}
    ports:
      - "127.0.0.1:${MARIADB_PORT}:3306"
    # volumes:
    #   - ${DATADIR}/sql-init:/docker-entrypoint-initdb.d
    cap_add:
      - SYS_NICE
  phpmyadmin:
    container_name: cc-phpmyadmin
    image: phpmyadmin
    environment:
      - PMA_HOST=cc-db
      - PMA_USER=root
      - PMA_PASSWORD=${MARIADB_ROOT_PASSWORD}
    ports:
      - "127.0.0.1:${PHPMYADMIN_PORT}:80"
    depends_on:
      - db
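Editor's note: InfluxDB, MariaDB, and phpMyAdmin above are published only on 127.0.0.1, while nats and cc-metric-store bind all interfaces. When a single service misbehaves, it can be inspected and recreated without touching the rest of the stack:
```
# Tail one service's logs, then recreate it after a config change.
docker-compose logs -f cc-metric-store
docker-compose up -d --force-recreate cc-metric-store
```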

@@ -0,0 +1,29 @@
services:
  db:
    container_name: cc-db
    image: mysql:8.0.22
    command: ["--default-authentication-plugin=mysql_native_password"]
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    ports:
      - "127.0.0.1:${MYSQL_PORT}:3306"
    # volumes:
    #   - ${DATADIR}/sql-init:/docker-entrypoint-initdb.d
    #   - ${DATADIR}/sqldata:/var/lib/mysql
    cap_add:
      - SYS_NICE
  phpmyadmin:
    container_name: cc-phpmyadmin
    image: phpmyadmin
    environment:
      - PMA_HOST=cc-db
      - PMA_USER=root
      - PMA_PASSWORD=${MYSQL_ROOT_PASSWORD}
    ports:
      - "127.0.0.1:${PHPMYADMIN_PORT}:80"
    depends_on:
      - db

templates/env.default

@@ -0,0 +1,40 @@
########################################################################
# CCBACKEND DEVEL DOCKER SETTINGS
########################################################################
########################################################################
# INFLUXDB
########################################################################
INFLUXDB_PORT=8086
INFLUXDB_PASSWORD=1bc8777daad29d2f05eb77b7571fd8a1
INFLUXDB_ADMIN_TOKEN=74008ea2a8dad5e6f856838a90c6392e
INFLUXDB_ORG=ClusterCockpit
INFLUXDB_BUCKET=ClusterCockpit
# Whether or not to check SSL Cert in Symfony Client, Default: false
INFLUXDB_SSL=false
########################################################################
# MARIADB
########################################################################
MARIADB_ROOT_PASSWORD=root
MARIADB_DATABASE=ClusterCockpit
MARIADB_USER=clustercockpit
MARIADB_PASSWORD=clustercockpit
MARIADB_PORT=3306
########################################################################
# LDAP
########################################################################
LDAP_ADMIN_PASSWORD=mashup
LDAP_ORGANISATION=NHR@FAU
LDAP_DOMAIN=rrze.uni-erlangen.de
########################################################################
# PHPMyAdmin
########################################################################
PHPMYADMIN_PORT=8081
########################################################################
# INTERNAL SETTINGS
########################################################################
DATADIR=./data
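Editor's note: if one of these ports is already taken on the host (see Known Issues in the README), change it here rather than in the compose file. A hypothetical example moving phpMyAdmin from 8081 to 8082, assuming the file has been copied to `.env` by `setupDev.sh`:
```
# Adjust the published port in .env, then re-apply the compose config.
sed -i 's/^PHPMYADMIN_PORT=8081/PHPMYADMIN_PORT=8082/' .env
docker-compose up -d   # recreates only containers whose config changed
```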

templates/env.mysql

@@ -0,0 +1,17 @@
########################################################################
# ADDITIONAL ENV VARIABLES FOR MYSQL AND PHPMYADMIN CONTAINERS
########################################################################
########################################################################
# MySQL
########################################################################
MYSQL_ROOT_PASSWORD=root
MYSQL_DATABASE=ClusterCockpit
MYSQL_USER=clustercockpit
MYSQL_PASSWORD=clustercockpit
MYSQL_PORT=3306
########################################################################
# PHPMyAdmin
########################################################################
PHPMYADMIN_PORT=8081
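Editor's note: switching the stack from MariaDB to MySQL combines the two `*.mysql` templates with the defaults; `setupDev.sh` does not automate this. A manual sketch follows, with the caveat that the MySQL compose template defines only the `db` and `phpmyadmin` services, so any other services from the default template would have to be merged in by hand if they are still needed:
```
# Use the MySQL compose template and append the extra MySQL variables.
cp templates/docker-compose.yml.mysql docker-compose.yml
cp templates/env.default .env
cat templates/env.mysql >> .env
docker-compose up -d
```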