The dispatcher normally runs on the same host/VM as the API server.
Crunch dispatches work from the job queue on the Arvados API server. Before you start installing the Crunch dispatcher, now’s a good time to check that the API server and Git server can coordinate to create job records. Run these commands on your shell server to create a collection, and a job to calculate the MD5 checksum of every file in it:
~$ echo 'Hello, Crunch!' | arv-put --portable-data-hash -
…
d40c7f35d80da669afb9db1896e760ad+49
~$ read -rd $'\000' newjob <<EOF; arv job create --job "$newjob"
{"script_parameters":{"input":"d40c7f35d80da669afb9db1896e760ad+49"},
"script_version":"0988acb472849dc0",
"script":"hash",
"repository":"arvados"}
EOF
If you get the error

ArgumentError: Specified script_version does not resolve to a commit

it often means that the API server can't read the specified repository: either it doesn't exist, or the user running the API server doesn't have permission to read the repository files. Check the API server's log (/var/www/arvados-api/current/log/production.log) for details, and double-check the instructions in the Git server installation guide.
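For example, to look at the most recent log entries:

~$ tail -n 100 /var/www/arvados-api/current/log/production.log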
If everything goes well, the API server should create a job record, and your arv command will output the JSON for that record. It should have state Queued and script_version 0988acb472849dc08d576ee40493e70bde2132ca. If the job JSON includes those fields, you can proceed to install the Crunch dispatcher and a compute node. This job will remain queued until you install those services.
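If you want to check on the job again later, one option (assuming the arv CLI tools on your shell server are configured as above) is to list the jobs still in the Queued state:

~$ arv job list --filters '[["state","=","Queued"]]'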
Install the Perl SDK on the controller.
Install the Python SDK and CLI tools on controller and all compute nodes.
On the API server, install SLURM and munge, and generate a munge key.
On Debian-based systems:
~$ sudo /usr/bin/apt-get install slurm-llnl munge
~$ sudo /usr/sbin/create-munge-key
On Red Hat-based systems:
~$ sudo yum install slurm munge slurm-munge
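SLURM authenticates nodes with munge, so the same /etc/munge/munge.key must be present on the controller and on every compute node. A quick way to test munge, assuming a worker named compute0 that you can reach over ssh, is to encode a credential locally and decode it remotely:

~$ munge -n | unmunge
~$ munge -n | ssh compute0 unmunge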
Now we need to give SLURM a configuration file. On Debian-based systems, this is installed at /etc/slurm-llnl/slurm.conf. On Red Hat-based systems, this is installed at /etc/slurm/slurm.conf. Here's an example slurm.conf:
ControlMachine=uuid_prefix.your.domain
SlurmctldPort=6817
SlurmdPort=6818
AuthType=auth/munge
StateSaveLocation=/tmp
SlurmdSpoolDir=/tmp/slurmd
SwitchType=switch/none
MpiDefault=none
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmdPidFile=/var/run/slurmd.pid
ProctrackType=proctrack/pgid
CacheGroups=0
ReturnToService=2
TaskPlugin=task/affinity
#
# TIMERS
SlurmctldTimeout=300
SlurmdTimeout=300
InactiveLimit=0
MinJobAge=300
KillWait=30
Waittime=0
#
# SCHEDULING
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
FastSchedule=0
#
# LOGGING
SlurmctldDebug=3
#SlurmctldLogFile=
SlurmdDebug=3
#SlurmdLogFile=
JobCompType=jobcomp/none
#JobCompLoc=
JobAcctGatherType=jobacct_gather/none
#
# COMPUTE NODES
NodeName=DEFAULT
PartitionName=DEFAULT MaxTime=INFINITE State=UP
NodeName=compute[0-255]
PartitionName=compute Nodes=compute[0-255] Default=YES Shared=YES
Whenever you change this file, you will need to update the copy on every compute node as well as the controller node, and then run sudo scontrol reconfigure.
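Once slurmctld is running on the controller and slurmd on the workers, a quick sanity check, assuming a worker named compute0, is:

~$ sinfo --long
~$ scontrol show node compute0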
ControlMachine should be a DNS name that resolves to the SLURM controller (dispatch/API server). This must resolve correctly on all SLURM worker nodes as well as on the controller itself. In general, SLURM is very sensitive about all of the nodes being able to communicate with the controller and with one another, all using the same DNS names.
NodeName=compute[0-255] establishes that the hostnames of the worker nodes will be compute0, compute1, and so on through compute255.

* Sequences of names can be compressed into hostlist expressions like compute[0-9,80,100-110]. See the "hostlist" discussion in the slurm.conf(5) and scontrol(1) man pages for more information.
* Not every node listed here needs to be alive for SLURM to work, but the DNS entries should exist. Defining plenty of hostnames up front, and assigning them to real nodes as they appear, minimizes slurm.conf updates and use of scontrol reconfigure.

Each hostname in slurm.conf must also resolve correctly on all SLURM worker nodes as well as on the controller itself. Furthermore, the hostnames used in the configuration file must match the hostnames reported by hostname or hostname -s on the nodes themselves. This applies to the ControlMachine as well as to the worker nodes.
For example:

* slurm.conf on control and worker nodes: ControlMachine=uuid_prefix.your.domain
* slurm.conf on control and worker nodes: NodeName=compute[0-255]
* /etc/resolv.conf on control and worker nodes: search uuid_prefix.your.domain
* On the control node: hostname reports uuid_prefix.your.domain
* On worker node 123: hostname reports compute123.uuid_prefix.your.domain
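To spot-check this kind of setup from a worker node (hypothetical node compute123; the address shown is made up):

~$ hostname -s
compute123
~$ getent hosts compute123.uuid_prefix.your.domain
10.0.0.123      compute123.uuid_prefix.your.domain compute123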
If your worker node bootstrapping script (see Installing a compute node) does not send the worker's current hostname, the API server will choose an unused hostname from the set given in application.yml, which defaults to compute[0-255].
If it is not feasible to give your compute nodes hostnames like compute0, compute1, etc., you can accommodate other naming schemes with a bit of extra configuration.
If you want Arvados to assign names to your nodes with a different consecutive numeric series like {worker1-0000, worker1-0001, worker1-0002}, add an entry to application.yml; see /var/www/arvados-api/current/config/application.default.yml for details. Example:

* application.yml: assign_node_hostname: worker1-%<slot_number>04d
* slurm.conf: NodeName=worker1-[0000-0255]
If your worker hostnames are already assigned by other means, and the full set of names is known in advance, have your worker node bootstrapping script (see Installing a compute node) send its current hostname, rather than expect Arvados to assign one. Example:

* application.yml: assign_node_hostname: false
* slurm.conf: NodeName=alice,bob,clay,darlene
If your worker hostnames are already assigned by other means, but the full set of names is not known in advance, you can use the slurm.conf and application.yml settings in the previous example, but you must also update slurm.conf (both on the controller and on all worker nodes) and run sudo scontrol reconfigure whenever a new node comes online.
In your API server's application.yml configuration file, add the line crunch_job_wrapper: :slurm_immediate under the appropriate section. (The second colon is not a typo. It denotes a Ruby symbol.)
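For example, assuming your site settings live in the production section (layouts vary; check your existing application.yml):

production:
  crunch_job_wrapper: :slurm_immediate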
Run sudo adduser crunch. The crunch user should have the same UID, GID, and home directory on all compute nodes and on the dispatcher (API server).
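Since adduser alone does not guarantee matching IDs across hosts, one approach is to create the group and user with explicit IDs on every node; the UID/GID 4005 below is an arbitrary example (any ID unused on all nodes will do):

~$ sudo groupadd --gid 4005 crunch
~$ sudo useradd --uid 4005 --gid 4005 --create-home crunch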
To dispatch Arvados jobs:

* crunch-dispatch.rb must be running.
* crunch-job needs the installation path of the Perl SDK in its PERLLIB.
* crunch-job needs the ARVADOS_API_HOST (and, if necessary, ARVADOS_API_HOST_INSECURE) environment variable set.

Install runit to monitor the Crunch dispatch daemon.
On Debian-based systems:
~$ sudo apt-get install runit
On Red Hat-based systems:
~$ sudo yum install runit
Install the script below as the run script for the Crunch dispatch service, modifying it as directed by the comments.
#!/bin/sh
set -e
rvmexec=""
## Uncomment this line if you use RVM:
#rvmexec="/usr/local/rvm/bin/rvm-exec default"
export ARVADOS_API_HOST=uuid_prefix.your.domain
export CRUNCH_DISPATCH_LOCKFILE=/var/lock/crunch-dispatch
export HOME=$(pwd)
export RAILS_ENV=production
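## Uncomment and edit this line so crunch-job can find the Perl SDK;
## the path below is only an example of a possible install location.
#export PERLLIB=/usr/local/arvados/perl/lib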
## Uncomment and edit this line if your compute nodes have cgroup info
## somewhere other than /sys/fs/cgroup (e.g., "/cgroup" for CentOS 7)
#export CRUNCH_CGROUP_ROOT="/sys/fs/cgroup"
## Uncomment this line if your cluster uses self-signed SSL certificates:
#export ARVADOS_API_HOST_INSECURE=yes
# This is the path to docker on your compute nodes. You might need to
# change it to "docker", "/opt/bin/docker", etc.
export CRUNCH_JOB_DOCKER_BIN=docker.io
fuser -TERM -k $CRUNCH_DISPATCH_LOCKFILE || true
cd /var/www/arvados-api/current
exec $rvmexec bundle exec ./script/crunch-dispatch.rb 2>&1
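To wire this up with runit, save the script as a service run file and link it into the directory scanned by runsvdir. Exact paths vary by distribution; the ones below are common defaults, and assume you saved the script as ./run:

~$ sudo mkdir -p /etc/sv/crunch-dispatch
~$ sudo install -m 755 run /etc/sv/crunch-dispatch/run
~$ sudo ln -s /etc/sv/crunch-dispatch /etc/service/
~$ sudo sv status crunch-dispatch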