Merge branch 'main' into dev

This commit is contained in:
adityauj
2025-07-04 12:34:51 +02:00
committed by GitHub
7 changed files with 188 additions and 68 deletions

README.md

@@ -1,9 +1,12 @@
# cc-docker
This is a `docker-compose` setup which provides a quickly started environment
for ClusterCockpit development and testing, using `cc-backend`. A number of
services are readily available as docker containers (NATS, cc-metric-store,
InfluxDB, LDAP, SLURM) or can easily be added by manual configuration (MariaDB).
It includes the following containers:
|Service full name|docker service name|port|
| --- | --- | --- |
|Slurm Controller service|slurmctld|6818|
@@ -14,11 +17,14 @@ It includes the following containers:
|cc-metric-store service|cc-metric-store|8084|
|OpenLDAP|openldap|389, 636|
The setup comes with fixture data for a Job archive, cc-metric-store checkpoints, and an LDAP user directory.
## Prerequisites
For all the docker services to work correctly, you will need the following tools
installed:
1. `docker` and `docker-compose`
2. `golang` (for compiling cc-metric-store)
@@ -26,7 +32,9 @@ For all the docker services to work correctly, you will need the following tools
4. `npm` (for cc-backend)
5. `make` (for building slurm base image)
It is also recommended to add your user to the docker group, since the
setupDev.sh script assumes sudo permissions for the docker and docker-compose
services.
You can use:
@@ -38,9 +46,11 @@ sudo usermod -aG docker $USER
sudo shutdown -r -t 0
```
Note: You can install all these dependencies via predefined installation steps
in `prerequisite_installation_script.sh`.
If you are using a different Linux distribution, you will have to adapt
`prerequisite_installation_script.sh` as well as `setupDev.sh`.
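On a supported distribution you can let the bundled script handle this (a
sketch; inspect the script first and run it from the repository root):

``` bash
# Install the prerequisites listed above (docker, golang, npm, make, ...).
bash ./prerequisite_installation_script.sh
```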
## Setup Procedure
@@ -48,23 +58,37 @@ If you are using different linux flavors, you will have to adapt `prerequisite_i
2. Run the setup bash script: `$> ./setupDev.sh`. **NOTICE**: The script will download files with a total size of 338 MB (mostly for the cc-metric-store data).
3. The setup-script launches the supporting container stack in the background
automatically if everything went well. Run
``` bash
./cc-backend/cc-backend -server -dev
```
to start `cc-backend`.
4. By default, you can access `cc-backend` in your browser at
`http://localhost:8080`. You can shut down the cc-backend server by pressing
`CTRL-C`; remember to also shut down all containers via `$> docker-compose down`
afterwards.
5. You can restart the containers with: `$> docker-compose up -d`.
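Taken together, a typical first session might look like this (a sketch of the
steps above; all paths relative to the cc-docker checkout):

``` bash
./setupDev.sh                          # step 2: downloads ~338 MB of fixture data
                                       # and starts the container stack
./cc-backend/cc-backend -server -dev   # step 3: start cc-backend
# ... browse http://localhost:8080, then CTRL-C to stop cc-backend
docker-compose down                    # step 4: shut down all containers
docker-compose up -d                   # step 5: bring them back up later
```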
## Credentials for logging into ClusterCockpit
Credentials for the preconfigured demo user are:
* User: `demo`
* Password: `demo`
Credentials for the preconfigured LDAP user are:
* User: `ldapuser`
* Password: `ldapuser`
You can also log in as a regular user using any credentials from the LDAP user
directory at `./data/ldap/users.ldif`.
## Preconfigured setup between docker services and ClusterCockpit components
@@ -74,23 +98,30 @@ The preconfigured config.json attaches to:
#### 2. cc-metric-store docker service on port 8084
#### 3. cc-slurm-adapter running on the slurmctld docker service
cc-metric-store also has a preconfigured `config.json` in
`cc-metric-store/config.json` which attaches to the NATS docker service on port
4222 and subscribes to the topic 'hpc-nats'.

Basically, all the ClusterCockpit components and the docker services attach to
each other like Lego pieces.
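You can quickly check that the NATS service is up via its built-in HTTP
monitoring endpoint (port 8222, see Known Issues below; `/varz` is part of
NATS' standard monitoring API):

``` bash
# A JSON answer here means the NATS docker service is up and reachable.
curl -s http://localhost:8222/varz | head -n 5
```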
## Docker commands to access the services
> Note: You need to be in the cc-docker directory in order to execute any docker command
You can view all docker processes running on any of the VM instances by using
this command:
``` bash
docker ps
```
Now that you can see the docker services, if you want to manually access any of
them, you have to run the **`bash`** command in those running services.
> **`Example`**: If you want to run slurm commands like `sinfo`, `squeue`, or
> `scontrol` on the slurm controller, you cannot directly access it.
You need to open a **`bash`** session in the running service by using the following command:
@@ -104,43 +135,58 @@ $ docker exec -it slurmctld bash
$ docker exec -it cc-metric-store bash
```
Once you start a **`bash`** on any docker service, you may execute any
service-related commands in that **`bash`**.
But for ClusterCockpit development, you only need ports to access these docker
services. You have to use `localhost:<port>` when trying to access any docker
service. You may need to configure `cc-backend/config.json` based on these
docker services and ports.
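A quick way to check that a service is reachable on its mapped port before
touching `cc-backend/config.json` (a sketch, using cc-metric-store's port from
the table above):

``` bash
# Expect an HTTP status code back if cc-metric-store is listening on 8084.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8084/
```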
## Slurm setup in cc-docker
### 1. Slurm controller
Currently the slurm controller is aware of the one node that we have set up in
our mini cluster, i.e. node01.
In order to execute slurm commands, you may need to **`bash`** into the
**`slurmctld`** docker service.
``` bash
docker exec -it slurmctld bash
```
Then you may be able to run slurm controller commands. A few examples without
output are:
``` bash
sinfo
```

or

``` bash
squeue
```

or

``` bash
scontrol show nodes
```
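From the same `slurmctld` shell you can also submit a minimal test job (a
sketch; assumes the one-node cluster from this setup is up and idle):

``` bash
# Run `hostname` on one node through slurm; this should print node01.
srun -N1 hostname
```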
### 2. Slurm rest service
You do not need to **`bash`** into the slurmrestd service but can directly
access the REST API via `localhost:6820`. A simple example of how to curl the
slurm REST API is given in `curl_slurmrestd.sh`.
You can directly use `curl_slurmrestd.sh` with a never-expiring JWT token
(found in `/data/slurm/secret/jwt_token.txt`).
You may also use the never-expiring token directly from that file for any of
your custom curl commands.
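A minimal sketch of such a request (the API version segment and the user name
are assumptions; see `curl_slurmrestd.sh` for the exact invocation used here):

``` bash
# Read the never-expiring token from the fixture file mentioned above.
JWT=$(cat data/slurm/secret/jwt_token.txt)

# Ping the slurm REST API on the mapped port 6820.
curl -s \
  -H "X-SLURM-USER-NAME: root" \
  -H "X-SLURM-USER-TOKEN: ${JWT}" \
  http://localhost:6820/slurm/v0.0.40/ping
```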
## Known Issues
@@ -148,26 +194,28 @@ You may also use the never expiring token directly from the file for any of your
* You need to ensure that no other web server is running on ports 8080 (cc-backend), 8084 (cc-metric-store), or 4222 and 8222 (NATS). If one or more ports are already in use, you have to adapt the related config accordingly.
* Existing VPN connections sometimes cause problems with docker. If `docker-compose` does not start up correctly, try disabling any active VPN connection. Refer to https://stackoverflow.com/questions/45692255/how-make-openvpn-work-with-docker for further information.
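To find out which process occupies a conflicting port, something like the
following helps (port 8080 as an example):

``` bash
# List the process currently listening on port 8080, if any.
sudo lsof -i :8080
```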
## Docker services and restarting the services
You can find all the docker services in `docker-compose.yml`. Feel free to
modify it.
Whenever you modify it, please use
``` bash
docker compose down
```
in order to shut down all the services in all the VMs (maininstance,
nodeinstance, nodeinstance2) and then start all the services by using
``` bash
docker compose up
```
TODO: Update job archive and all other metric data.
The job archive with 1867 jobs originates from the second half of 2020.
Roughly 2700 jobs from the first week of 2021 are loaded with data from InfluxDB.
Some views of ClusterCockpit (e.g. the Users view) show the last week or month.
To show some data there you have to set the filter to time periods with jobs
(August 2020 to January 2021).

docker-compose.yml

@@ -38,6 +38,27 @@ services:
    volumes:
      - ${DATADIR}/ldap:/container/service/slapd/assets/config/bootstrap/ldif/custom

  postgres:
    image: postgres
    container_name: postgres
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password

  keycloak:
    container_name: keycloak
    build:
      context: ./keycloak
      args:
        PG_KC_URL: postgres
        PG_KC_USER: keycloak
        PG_KC_PASS: password
    ports:
      - "0.0.0.0:8080:8080"
    restart: always
    command: --verbose start --optimized

  mariadb:
    container_name: mariadb
    image: mariadb:latest
@@ -127,4 +148,5 @@ services:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "6820:6820"

keycloak/Dockerfile

@@ -0,0 +1,32 @@
FROM quay.io/keycloak/keycloak:latest AS builder

# Enable health and metrics support
ENV KC_METRICS_ENABLED=true
ENV KC_HEALTH_ENABLED=true

# Configure a database vendor
ENV KC_DB=postgres

WORKDIR /opt/keycloak
RUN /opt/keycloak/bin/kc.sh build

FROM quay.io/keycloak/keycloak:latest
COPY --from=builder /opt/keycloak/ /opt/keycloak/

# ENV KC_DB_URL_HOST=${PG_KC_URL}
# ENV KC_DB_USERNAME=${PG_KC_USER}
# ENV KC_DB_PASSWORD=${PG_KC_PASS}
# ENV KEYCLOAK_ADMIN_PASSWORD=${KC_ADMIN_PASS}
ENV KC_DB_URL_HOST=postgres
ENV KC_DB_URL_PORT=5432
ENV KC_DB_URL_DATABASE=keycloak
ENV KC_DB_USERNAME=keycloak
ENV KC_DB_PASSWORD=password
ENV KEYCLOAK_ADMIN=admin
ENV KEYCLOAK_ADMIN_PASSWORD=admin
ENV KC_PROXY=edge
ENV KC_HOSTNAME=
ENV KC_HOSTNAME_STRICT=false
ENV KC_HOSTNAME_STRICT_BACKCHANNEL=false
ENV KC_HTTP_ENABLED=true

ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
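In this setup the image is normally built and started via `docker-compose`,
but it can also be sanity-checked standalone (a sketch; the network name
assumes docker-compose's default `<directory>_default` naming):

``` bash
# Build the Keycloak image from this Dockerfile (run from the cc-docker root).
docker build -t cc-keycloak ./keycloak

# Start it against the postgres service from the compose stack.
docker run --rm --network cc-docker_default -p 8080:8080 cc-keycloak start --optimized
```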

prerequisite_installation_script.sh

@@ -31,15 +31,6 @@ else
echo "Docker installed."
fi
# check if docker-compose is installed and available
if ! docker-compose --version; then
echo "Docker-compose not installed!"
echo -n "Stopped."
exit
else
echo "Docker-compose installed."
fi
# check if npm is installed and available
if ! npm --version; then
echo "NPM not installed!"


@@ -16,17 +16,24 @@ while (<IN>) {
    }
}
close IN;

my $fail = 0;
for my $code (@modules) {
    my ( undef, $library ) = split( / /, $code );    # get the module name
    $library =~ s/;//;                               # clean up the name
    eval $code;
    if ($@) {
        warn "couldn't load $library: $@", "\n";
        $fail = 1;    # remember the failure instead of flagging success
    } else {
        print "$library looks ok\n";
    }
}

# exit non-zero if any module failed to load
if ($fail) {
    exit 1;
} else {
    exit 0;
}
sub help
{
print <<"END";

setupDev.sh

@@ -27,6 +27,26 @@ fi
chmod u+x scripts/checkModules.sh
./scripts/checkModules.sh
# check if docker-compose is installed and available
if ! docker-compose --version; then
  echo "Docker-compose not installed!"
else
  echo "docker-compose available."
  export DOCKER_COMPOSE="docker-compose"
fi

# prefer the docker compose plugin if it is available
if ! docker compose version; then
  echo "Docker compose plugin not installed!"
else
  echo "docker compose available."
  export DOCKER_COMPOSE="docker compose"
fi

# stop if neither variant was found
if [[ -z "${DOCKER_COMPOSE}" ]]; then
  echo -n "Stopped."
  exit
fi
# Creates data directory if it does not exist.
# Contains all the mount points required by all the docker services
# and their static files.
@@ -79,8 +99,8 @@ if [ -d data/cc-metric-store-source ]; then
fi
# Just in case the user forgot to manually shut down the docker services.
$DOCKER_COMPOSE down
$DOCKER_COMPOSE down --remove-orphans
# This automatically builds the base docker image for slurm.
# All the slurm docker services in docker-compose.yml refer to
@@ -90,8 +110,8 @@ make
cd ../..
# Starts all the docker services from docker-compose.yml.
$DOCKER_COMPOSE build
$DOCKER_COMPOSE up -d
echo ""


@@ -5,7 +5,7 @@ ENV SLURM_VERSION=24.05.3
ENV HTTP_PARSER_VERSION=2.8.0
RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
RUN ARCH=$(uname -m) && yum install -y https://rpmfind.net/linux/almalinux/8.10/PowerTools/$ARCH/os/Packages/http-parser-devel-2.8.0-9.el8.$ARCH.rpm
RUN groupadd -g 981 munge \
&& useradd -m -c "MUNGE Uid 'N' Gid Emporium" -d /var/lib/munge -u 981 -g munge -s /sbin/nologin munge \
@@ -15,25 +15,25 @@ RUN groupadd -g 981 munge \
&& useradd -m -c "Workflow user" -d /home/worker -u 982 -g worker -s /bin/bash worker
RUN yum install -y munge munge-libs rng-tools \
    python3 gcc openssl openssl-devel \
    openssh-server openssh-clients dbus-devel \
    pam-devel numactl numactl-devel hwloc sudo \
    lua readline-devel ncurses-devel man2html \
    autoconf automake json-c-devel libjwt-devel \
    libibmad libibumad rpm-build perl-ExtUtils-MakeMaker.noarch make wget
RUN dnf --enablerepo=powertools install -y munge-devel rrdtool-devel lua-devel hwloc-devel mariadb-server mariadb-devel
RUN mkdir -p /usr/local/slurm-tmp \
    && cd /usr/local/slurm-tmp \
    && wget https://download.schedmd.com/slurm/slurm-${SLURM_VERSION}.tar.bz2 \
    && rpmbuild -ta --with slurmrestd --with jwt slurm-${SLURM_VERSION}.tar.bz2
RUN ARCH=$(uname -m) \
    && yum -y --nogpgcheck localinstall \
    /root/rpmbuild/RPMS/$ARCH/slurm-${SLURM_VERSION}*.$ARCH.rpm \
    /root/rpmbuild/RPMS/$ARCH/slurm-perlapi-${SLURM_VERSION}*.$ARCH.rpm \
    /root/rpmbuild/RPMS/$ARCH/slurm-slurmctld-${SLURM_VERSION}*.$ARCH.rpm
VOLUME ["/home", "/.secret"]
# 22: SSH
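Once the base image is built via `make`, a quick sanity check is to ask the
packaged slurm for its version (a sketch; `<image-tag>` stands for whatever tag
the Makefile assigns):

``` bash
# Print the slurm version baked into the image; should report 24.05.3.
docker run --rm --entrypoint slurmctld <image-tag> -V
```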