The SLURM dispatcher can run on any node that can submit requests to both the Arvados API server and the SLURM controller. It is not resource-intensive, so you can run it on the API server node.
First, add the appropriate package repository for your distribution.
On Red Hat-based systems:
~$ sudo yum install crunch-dispatch-slurm
~$ sudo systemctl enable crunch-dispatch-slurm
On Debian-based systems:
~$ sudo apt-get install crunch-dispatch-slurm
Create an Arvados superuser token for use by the dispatcher. If you have multiple dispatch processes, you should give each one a different token.
On the API server, use the following commands:
~$ cd /var/www/arvados-api/current
$ sudo -u webserver-user RAILS_ENV=production bundle exec script/create_superuser_token.rb
zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
Set up crunch-dispatch-slurm’s configuration directory:
~$ sudo mkdir -p /etc/arvados
~$ sudo install -d -o root -g crunch -m 0750 /etc/arvados/crunch-dispatch-slurm
Edit /etc/arvados/crunch-dispatch-slurm/crunch-dispatch-slurm.yml to authenticate to your Arvados API server, using the token you generated in the previous step. Follow this YAML format:
Client:
  APIHost: zzzzz.arvadosapi.com
  AuthToken: zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
This is the only configuration required by crunch-dispatch-slurm. The subsections below describe optional configuration flags you can set inside the main configuration object.
Override Keep service discovery with a predefined list of Keep URIs. This can be useful if the compute nodes run a local keepstore that should handle all Keep traffic. Example:
Client:
  APIHost: zzzzz.arvadosapi.com
  AuthToken: zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
  KeepServiceURIs:
    - http://127.0.0.1:25107
crunch-dispatch-slurm polls the API server periodically for new containers to run. The PollPeriod option controls how often this poll happens. Set it to a duration string: a number suffixed with a time unit such as s, m, or h.
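As an illustrative sketch (the value shown is an example, not a recommendation; tune it for your cluster):

```yaml
# Poll the API server for new containers every 10 seconds (example value)
PollPeriod: 10s
```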
crunch-dispatch-slurm adjusts the “nice” values of its SLURM jobs to ensure containers are prioritized correctly relative to one another. The PrioritySpread option tunes the adjustment mechanism. On older SLURM systems that cap nice values at 10000, a smaller PrioritySpread can help avoid reaching that limit. The smallest usable value is 1. The default value of 10 is used if this option is zero or negative.
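As an illustrative sketch (the value shown is an assumption, not a recommendation):

```yaml
# Example value only; a larger spread leaves more nice-value room between jobs
PrioritySpread: 1000
```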
When crunch-dispatch-slurm invokes sbatch, you can add arguments to the command by specifying SbatchArguments. You can use this to send jobs to specific cluster partitions or to add resource requests. Set SbatchArguments to an array of strings. For example:
SbatchArguments:
  - "--partition=PartitionName"
Note: If an argument is supplied multiple times, slurm uses the value of the last occurrence of the argument on the command line. Arguments specified through Arvados are added after the arguments listed in SbatchArguments. This means, for example, that an Arvados container specifying partitions in its scheduling_parameters will override an occurrence of --partition in SbatchArguments. As a result, for container parameters that can be specified through Arvados, SbatchArguments can be used to set defaults but not to enforce policy.
If your SLURM cluster uses the task/cgroup TaskPlugin, you can configure Crunch’s Docker containers to be dispatched inside SLURM’s cgroups. This provides consistent enforcement of resource constraints. To do this, use a crunch-dispatch-slurm configuration like the following:
CrunchRunCommand:
  - crunch-run
  - "-cgroup-parent-subsystem=memory"
The choice of subsystem (“memory” in this example) must correspond to one of the resource types enabled in SLURM’s cgroup.conf. Limits for other resource types will also be respected: the specified subsystem is singled out only to let Crunch determine the name of the cgroup provided by SLURM. When doing this, you should also set ReserveExtraRAM.
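For reference, a minimal sketch of the corresponding SLURM cgroup configuration might look like this (an assumption about your SLURM setup, not a drop-in file; consult your cluster's existing cgroup.conf):

```ini
# Hypothetical excerpt from SLURM's cgroup.conf enabling memory enforcement
ConstrainRAMSpace=yes
```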
Some versions of Docker (at least 1.9), when run under systemd, require the cgroup parent to be specified as a systemd slice. This causes an error when specifying a cgroup parent created outside systemd, such as those created by SLURM.
You can work around this issue by disabling the Docker daemon’s systemd integration. This makes it more difficult to manage Docker services with systemd, but Crunch does not require that functionality, and it will be able to use SLURM’s cgroups as container parents. To do this, configure the Docker daemon on all compute nodes to run with the option --exec-opt native.cgroupdriver=cgroupfs.
Older Linux kernels (prior to 3.18) have bugs in network namespace handling which can lead to compute node lockups. This is indicated by blocked kernel tasks in “Workqueue: netns cleanup_net”. If you are experiencing this problem, as a workaround you can disable Docker’s use of network namespaces across the cluster. Be aware this reduces container isolation, which may be a security risk.
CrunchRunCommand:
  - crunch-run
  - "-container-enable-networking=always"
  - "-container-network-mode=host"
If SLURM is unable to run a container, the dispatcher will submit it again after the next PollPeriod. If PollPeriod is very short, this can be excessive. If MinRetryPeriod is set, the dispatcher will avoid submitting the same container to SLURM more than once in the given time span.
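As an illustrative sketch (the value shown is an assumption; choose a period suited to your PollPeriod and cluster load):

```yaml
# Don't resubmit the same container to SLURM more than once per 30 seconds (example value)
MinRetryPeriod: 30s
```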
Extra RAM to reserve (in bytes) on each SLURM job submitted by Arvados, added to the amount specified in the container’s runtime_constraints. If not provided, the default value is zero. This is helpful when crunch-run and arv-mount share the control group memory limit with the user process; in that situation, at least 256 MiB is recommended to accommodate each container’s crunch-run and arv-mount processes.
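As an illustrative sketch of the recommendation above (256 MiB expressed in bytes):

```yaml
# 256 MiB = 256 * 1024 * 1024 bytes, reserved beyond the container's runtime_constraints
ReserveExtraRAM: 268435456
```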
The crunch-dispatch-slurm package includes configuration files for systemd. If you’re using a different init system, you’ll need to configure a service to start and stop a crunch-dispatch-slurm process as desired. The process should run from a directory where the crunch user has write permission on all compute nodes, such as its home directory or /tmp. You do not need to specify any additional switches or environment variables.
Restart the dispatcher to run with your new configuration:
~$ sudo systemctl restart crunch-dispatch-slurm
The content of this documentation is licensed under the Creative Commons Attribution-Share Alike 3.0 United States license.
Code samples in this documentation are licensed under the Apache License, Version 2.0.