Multi host Arvados

  1. Introduction
  2. Hosts preparation
    1. Create a compute image
  3. Multi host install using the provision.sh script
  4. Choose the desired configuration
    1. Multiple hosts / multiple hostnames
    2. Further customization of the installation
  5. Installation order
  6. Run the provision.sh script
  7. Initial user and login
  8. Test the installed cluster running a simple workflow
  9. After the installation

Introduction

Arvados components can be installed in a distributed infrastructure, whether it is an “on-prem” environment with physical or virtual hosts, or a cloud environment.

As infrastructures vary a great deal from site to site, these instructions should be considered more as ‘guidelines’ than fixed steps to follow.

We provide an installer script that can help you deploy the different Arvados components. At the time of writing, the provided examples are suitable to install Arvados on AWS.

Hosts preparation

In order to run Arvados on a multi-host installation, there are a few requirements that your infrastructure has to fulfill.

These instructions explain how to set up a multi-host environment that is suitable for production use of Arvados.

We suggest distributing the Arvados components in the following way, creating at least 6 hosts:

  1. Database server:
    1. postgresql server
  2. API node:
    1. arvados api server
    2. arvados controller
    3. arvados websocket
    4. arvados cloud dispatcher
  3. WORKBENCH node:
    1. arvados workbench
    2. arvados workbench2
    3. arvados webshell
  4. KEEPPROXY node:
    1. arvados keepproxy
    2. arvados keepweb
  5. KEEPSTOREs (at least 2)
    1. arvados keepstore
  6. SHELL node (optional):
    1. arvados shell

Note that these hosts can be virtual machines in your infrastructure and they don’t need to be physical machines.

Again, if your infrastructure differs from the setup proposed above (e.g., using RDS or an existing database server), remember that you will need to edit the configuration files for the scripts so they work with your infrastructure.

Multi host install using the provision.sh script

This is a package-based installation method. Start with the provision.sh script, which is available by cloning the 2.3-release branch from https://git.arvados.org/arvados.git . The provision.sh script and its supporting files can be found in the arvados/tools/salt-install directory in the Arvados git repository.
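If you have not cloned the repository yet, the following commands fetch the installer files described above (the branch, URL and directory come straight from this section):

git clone -b 2.3-release https://git.arvados.org/arvados.git
cd arvados/tools/salt-install
# provision.sh, the local.params example files and the config_examples/
# directory used in the next steps all live in this directory.
ls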

This procedure will install all the main Arvados components to get you up and running in a multi-host environment.

The provision.sh script will help you deploy Arvados by preparing your environment to run the installer, and then running it. The actual installer is located in the arvados-formula repository and is cloned while the provision.sh script runs. The installer is built with SaltStack, and provision.sh performs the installation in masterless mode.

After setting up a few variables in a config file (next step), you’ll be ready to run it and get Arvados deployed.

Create a compute image

In a multi-host installation, containers are dispatched to Docker daemons running on the compute instances, which need some special setup. We provide a compute image builder script that you can use to build a template image following these instructions. Once you have created that image, you can use the image ID in the Arvados configuration in the next steps.
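For reference, the compute image builder ships in the same source tree; the path below assumes the checkout from the previous section, and the exact build options are documented in the linked instructions rather than here.

# The compute image builder lives next to the other Arvados tools.
cd arvados/tools/compute-images
ls    # review the builder script and its documentation before running it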

Choose the desired configuration

For the purposes of this documentation, we will use the cluster name arva2 and the domain arv.local. If you don’t change these values as required in the next steps, the installation won’t proceed.

We aim to provide example configurations for Arvados multi-host installations on different infrastructure providers. Currently only an AWS example is available, but it can be adapted to almost any provider with small changes.

You need to copy one of the example configuration files and directory, and edit them to suit your needs.

Multiple hosts / multiple hostnames
cp local.params.example.multiple_hosts local.params
cp -r config_examples/multi_host/aws local_config_dir

Edit the variables in the local.params file. Pay particular attention to the variables ending in INT_IP, TOKEN and KEY. Their values are used in a search-and-replace pass over the pillars/* files, substituting any matching placeholder.
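As an illustration only, an edited local.params might contain entries like the ones below. Apart from the cluster name and domain used throughout this guide, the variable names are assumptions based on the INT_IP / TOKEN / KEY patterns mentioned above, so check local.params.example.multiple_hosts for the authoritative list.

# Illustrative excerpt of an edited local.params (variable names other than
# CLUSTER and DOMAIN are assumptions -- consult the example file).
CLUSTER="arva2"
DOMAIN="arv.local"
CONTROLLER_INT_IP="10.1.1.11"      # internal IP of the API/controller node
DATABASE_INT_IP="10.1.1.10"        # internal IP of the database server
BLOB_SIGNING_KEY="replace-with-a-long-random-string"
SYSTEM_ROOT_TOKEN="replace-with-a-long-random-string"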

The multi_host example includes Let’s Encrypt salt code to automatically request and install the certificates for the public-facing hosts (API/controller, Workbench, Keepproxy/Keepweb) using AWS’ Route53.

If you are providing your own certificates, copy them to the directory specified by the CUSTOM_CERTS_DIR variable, inside the remote directory where you copied the provision.sh script. The provision script will look for the certificates there.

The script expects cert/key files with these basenames (matching the role name, except for keepweb, which is split into download and collections):

  • “controller”
  • “websocket”
  • “workbench”
  • “workbench2”
  • “webshell”
  • “download” # Part of keepweb
  • “collections” # Part of keepweb
  • “keepproxy”

E.g. for ‘keepproxy’, the script will look for

${CUSTOM_CERTS_DIR}/keepproxy.crt
${CUSTOM_CERTS_DIR}/keepproxy.key
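Concretely, assuming CUSTOM_CERTS_DIR points at a certs directory next to provision.sh (as in the scp commands later in this guide), staging your own certificates could look like this; the source paths are placeholders for your real certificate files.

# Stage existing cert/key pairs under the expected basenames.
mkdir -p certs
cp /path/to/your/controller-cert.pem certs/controller.crt
cp /path/to/your/controller-key.pem  certs/controller.key
cp /path/to/your/keepproxy-cert.pem  certs/keepproxy.crt
cp /path/to/your/keepproxy-key.pem   certs/keepproxy.key
# ...repeat for websocket, workbench, workbench2, webshell, download, collections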

Make sure that all the FQDNs that you will use for the public-facing applications (API/controller, Workbench, Keepproxy/Keepweb) are reachable. If you want to use valid certificates provided by Let’s Encrypt instead of providing your own, set the variable SSL_MODE=lets-encrypt; the same reachability requirement applies.
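A quick sanity check along these lines can confirm that the names resolve before you run the installer; the hostname prefixes are assumptions based on the certificate basenames listed above, so adjust them to the FQDNs you actually configured.

# Hypothetical reachability check -- the hostname prefixes are assumptions.
for h in controller workbench workbench2 webshell keepproxy download collections; do
  host "${h}.arva2.arv.local" >/dev/null || echo "WARNING: ${h}.arva2.arv.local does not resolve"
done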

Further customization of the installation (modifying the salt pillars and states)

You will need to customize the installation further to suit your environment, which can be done by editing the SaltStack pillar and state files. Pay particular attention to the pillars/arvados.sls file, where you will need to provide some information that describes your environment.
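One way to spot values that still need attention is to search the copied pillar files for unreplaced placeholders; the double-underscore marker shown here is an assumption, so inspect the files to see the convention they actually use.

# Look for leftover placeholder markers in the pillar files (marker assumed).
grep -Rn "__" local_config_dir/pillars/ | less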

Any extra state file you add under local_config_dir/states will be added to the salt run and applied to the hosts.
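As a sketch of what such an extra state could look like (the file name and package list are arbitrary examples, not part of the standard installation):

# Any .sls file under local_config_dir/states is included in the salt run.
mkdir -p local_config_dir/states
cat > local_config_dir/states/extra_packages.sls <<'EOF'
# Example custom state: install a couple of extra admin packages on the host.
extra_admin_packages:
  pkg.installed:
    - pkgs:
      - htop
      - tmux
EOF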

Installation order

A few Arvados nodes need to be installed in a certain order. The required order is:

  • Database
  • API server
  • The other nodes can be installed in any order after the two above

Run the provision.sh script

When you have finished customizing the configuration, you are ready to copy the files to the hosts and run the provision.sh script. The script allows you to specify the role(s) a node will have, and it will install only the Arvados components required for those roles. The general format of the command is:

scp -r provision.sh local* user@host:
# if you use custom certificates (not Let's Encrypt), make sure to copy those too:
# scp -r certs user@host:
ssh user@host sudo ./provision.sh --config local.params --roles comma,separated,list,of,roles,to,apply

and wait for it to finish.

If everything goes OK, you’ll get some final lines stating something like:

arvados: Succeeded: 109 (changed=9)
arvados: Failed:      0

The distribution of roles described above can be applied by running these commands:

Database
scp -r provision.sh local* user@host:
ssh user@host sudo ./provision.sh --config local.params --roles database

API
scp -r provision.sh local* user@host:
ssh user@host sudo ./provision.sh --config local.params --roles api,controller,websocket,dispatcher

Keepstore(s)
scp -r provision.sh local* user@host:
ssh user@host sudo ./provision.sh --config local.params --roles keepstore

Workbench
scp -r provision.sh local* user@host:
ssh user@host sudo ./provision.sh --config local.params --roles workbench,workbench2,webshell

Keepproxy / Keepweb
scp -r provision.sh local* user@host:
ssh user@host sudo ./provision.sh --config local.params --roles keepproxy,keepweb

Shell (here we copy the CLI test workflow too)
scp -r provision.sh local* tests user@host:
ssh user@host sudo ./provision.sh --config local.params --roles shell

Initial user and login

At this point you should be able to log into the Arvados cluster. The initial URL will be:

  • https://workbench.arva2.arv.local

or, in general, the URL format will be:

  • https://workbench.<cluster>.<domain>

By default, the provision script creates an initial user for testing purposes. This user is configured as administrator of the newly created cluster.

Assuming you didn’t change these values in the local.params file, the initial credentials are:

  • User: ‘admin’
  • Password: ‘password’
  • Email: ‘admin@arva2.arv.local’
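If you prefer a non-interactive check, a minimal sketch with the Arvados CLI from the shell node could look like the following; the API host, port and token are placeholders to replace with your actual cluster settings.

# Placeholder values -- substitute your controller FQDN/port and a valid token.
export ARVADOS_API_HOST=arva2.arv.local:8443
export ARVADOS_API_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
arv user current    # should print the record of the admin user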

Test the installed cluster running a simple workflow

If you followed the instructions above, the provision.sh script saves a simple example test workflow in the /tmp/cluster_tests directory on the shell node. If you want to run it, ssh into the node, change to that directory and run:

cd /tmp/cluster_tests
sudo ./run-test.sh

It will create a test user (by default, the same one as the admin user), upload a small workflow and run it. If everything goes OK, the output should be similar to this (some output was shortened for clarity):

Creating Arvados Standard Docker Images project
Arvados project uuid is 'arva2-j7d0g-0prd8cjlk6kfl7y'
{
 ...
 "uuid":"arva2-o0j2j-n4zu4cak5iifq2a",
 "owner_uuid":"arva2-tpzed-000000000000000",
 ...
}
Uploading arvados/jobs' docker image to the project
2.1.1: Pulling from arvados/jobs
8559a31e96f4: Pulling fs layer
...
Status: Downloaded newer image for arvados/jobs:2.1.1
docker.io/arvados/jobs:2.1.1
2020-11-23 21:43:39 arvados.arv_put[32678] INFO: Creating new cache file at /home/vagrant/.cache/arvados/arv-put/c59256eda1829281424c80f588c7cc4d
2020-11-23 21:43:46 arvados.arv_put[32678] INFO: Collection saved as 'Docker image arvados jobs:2.1.1 sha256:0dd50'
arva2-4zz18-1u5pvbld7cvxuy2
Creating initial user ('admin')
Setting up user ('admin')
{
 "items":[
  {
   ...
   "owner_uuid":"arva2-tpzed-000000000000000",
   ...
   "uuid":"arva2-o0j2j-1ownrdne0ok9iox"
  },
  {
   ...
   "owner_uuid":"arva2-tpzed-000000000000000",
   ...
   "uuid":"arva2-o0j2j-1zbeyhcwxc1tvb7"
  },
  {
   ...
   "email":"admin@arva2.arv.local",
   ...
   "owner_uuid":"arva2-tpzed-000000000000000",
   ...
   "username":"admin",
   "uuid":"arva2-tpzed-3wrm93zmzpshrq2",
   ...
  }
 ],
 "kind":"arvados#HashList"
}
Activating user 'admin'
{
 ...
 "email":"admin@arva2.arv.local",
 ...
 "username":"admin",
 "uuid":"arva2-tpzed-3wrm93zmzpshrq2",
 ...
}
Running test CWL workflow
INFO /usr/bin/cwl-runner 2.1.1, arvados-python-client 2.1.1, cwltool 3.0.20200807132242
INFO Resolved 'hasher-workflow.cwl' to 'file:///tmp/cluster_tests/hasher-workflow.cwl'
...
INFO Using cluster arva2 (https://arva2.arv.local:8443/)
INFO Upload local files: "test.txt"
INFO Uploaded to ea34d971b71d5536b4f6b7d6c69dc7f6+50 (arva2-4zz18-c8uvwqdry4r8jao)
INFO Using collection cache size 256 MiB
INFO [container hasher-workflow.cwl] submitted container_request arva2-xvhdp-v1bkywd58gyocwm
INFO [container hasher-workflow.cwl] arva2-xvhdp-v1bkywd58gyocwm is Final
INFO Overall process status is success
INFO Final output collection d6c69a88147dde9d52a418d50ef788df+123
{
    "hasher_out": {
        "basename": "hasher3.md5sum.txt",
        "class": "File",
        "location": "keep:d6c69a88147dde9d52a418d50ef788df+123/hasher3.md5sum.txt",
        "size": 95
    }
}
INFO Final process status is success

After the installation

Once the installation is complete, it is recommended to keep a copy of your local configuration files. Committing them to version control is a good idea.
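For instance, the deployment-specific files could be tracked like this; the repository name is arbitrary, and since local.params contains tokens and keys, any such repository should be kept private.

# Track the deployment configuration in a private repository.
git init arvados-deploy-config
cp -r local.params local_config_dir arvados-deploy-config/
cd arvados-deploy-config
git add .
git commit -m "Arvados multi-host deployment configuration"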

Re-running the Salt-based installer is not recommended for maintaining and upgrading Arvados; please see Maintenance and upgrading for more information.

