Before attempting installation, review the supported platforms, choose backends for identity, storage, and scheduling, and decide how you will distribute Arvados services across machines. You should also choose an Arvados cluster ID, choose your hostnames, and acquire TLS certificates. It may be helpful to take notes as you go, using one of these worksheets: New cluster checklist for AWS – New cluster checklist for Azure – New cluster checklist for on premises Slurm
The installation guide describes how to set up a basic standalone Arvados instance. Additional configuration for features including federation, collection versioning, managed properties, and storage classes is described in the Admin guide.
The Arvados storage subsystem is called “keep”. The compute subsystem is called “crunch”.
|Distribution||State||Last supported version|
|Debian 10 (“buster”)||Supported||Latest|
|Ubuntu 20.04 (“focal”)||Supported||Latest|
|Ubuntu 18.04 (“bionic”)||Supported||Latest|
|Ubuntu 16.04 (“xenial”)||EOL||2.1.2|
|Debian 9 (“stretch”)||EOL||2.1.2|
|Debian 8 (“jessie”)||EOL||1.4.3|
|Ubuntu 14.04 (“trusty”)||EOL||1.4.3|
|Ubuntu 12.04 (“precise”)||EOL||8ed7b6dd5d4df93a3f37096afe6d6f81c2a7ef6e (2017-05-03)|
|Debian 7 (“wheezy”)||EOL||997479d1408139e96ecdb42a60b4f727f814f6c9 (2016-12-28)|
|CentOS 6||EOL||997479d1408139e96ecdb42a60b4f727f814f6c9 (2016-12-28)|
Arvados packages are published for current Debian releases (until the EOL date), current Ubuntu LTS releases (until the end of standard support), and the latest version of CentOS.
Arvados consists of many components, some of which may be omitted (at the cost of reduced functionality). It may also be helpful to review the Arvados Architecture to understand how these components interact.
|PostgreSQL database||Stores data for the API server.||Required.|
|API server||Core Arvados logic for managing users, groups, collections, containers, and enforcing permissions.||Required.|
|Keepstore||Stores content-addressed blocks in a variety of backends (local filesystem, cloud object storage).||Required.|
|Keepproxy||Gateway service to access keep servers from external networks.||Required to be able to use arv-put, arv-get, or arv-mount outside the private Arvados network.|
|Keep-web||Gateway service providing read/write HTTP and WebDAV support on top of Keep.||Required to access files from Workbench.|
|Keep-balance||Storage cluster maintenance daemon responsible for moving blocks to their optimal server location, adjusting block replication levels, and trashing unreferenced blocks.||Required to free deleted data from underlying storage, and to ensure proper replication and block distribution (including support for storage classes).|
|Workbench, Workbench2||Primary graphical user interface for working with file collections and running containers.||Optional. Depends on API server, keep-web, websockets server.|
|Workflow Composer||Graphical user interface for editing Common Workflow Language workflows.||Optional. Depends on git server (arv-git-httpd).|
|Websockets server||Event distribution server.||Required to view streaming container logs in Workbench.|
|Shell server||Synchronize (create/delete/configure) Unix shell accounts with Arvados users.||Optional.|
|Git server||Arvados-hosted git repositories, with Arvados-token based authentication.||Optional, but required by Workflow Composer.|
|Crunch (running containers)|
|arvados-dispatch-cloud||Allocate and free cloud VM instances on demand based on workload.||Optional, not needed for a static Slurm cluster such as on-premises HPC.|
|crunch-dispatch-slurm||Run analysis workflows using Docker containers distributed across a Slurm cluster.||Optional, not needed for a Cloud installation, or if you wish to use Arvados for data management only.|
Choose which backend you will use to authenticate users.
Arvados works well with a standalone PostgreSQL installation. When deploying on AWS, Aurora RDS also works but Aurora Serverless is not recommended.
Choose which backend you will use for storing and retrieving content-addressed Keep blocks.
You should also determine the desired replication factor for your data. A replication factor of 1 means only a single copy of a given data block is kept. With a conventional file system backend and a replication factor of 1, a hard drive failure is likely to lose data. For this reason the default replication factor is 2 (two copies are kept).
A backend may have its own replication factor (such as durability guarantees of cloud buckets) and Arvados will take this into account when writing a new data block.
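The cluster-wide default replication factor is set in the Arvados cluster configuration file. The following is only an illustrative sketch, assuming the standard `/etc/arvados/config.yml` layout, with `xxxxx` standing in for your cluster ID:

```yaml
Clusters:
  xxxxx:
    Collections:
      # Number of copies Keep maintains for each data block
      DefaultReplication: 2
```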
Choose which backend you will use to schedule computation.
arvados-dispatch-cloud to manage the full lifecycle of cloud compute nodes: starting up nodes sized to the container request, executing containers on those nodes, and shutting nodes down when no longer needed.
crunch-dispatch-slurm to execute containers with Slurm job submissions.
crunch-dispatch-local to execute containers directly.
Choose how to allocate Arvados services to machines. We recommend that each machine start with a clean installation of a supported GNU/Linux distribution.
For a production installation, this is a reasonable starting point:
|Function||Number of nodes||Recommended specs|
|PostgreSQL database, Arvados API server, Arvados controller, Git, Websockets, Container dispatcher||1||16+ GiB RAM, 4+ cores, fast disk for database|
|Workbench, Keepproxy, Keep-web, Keep-balance||1||8 GiB RAM, 2+ cores|
|Keepstore servers 1||2+||4 GiB RAM|
|Compute worker nodes 1||0+||Depends on workload; scaled dynamically in the cloud|
|User shell nodes 2||0+||Depends on workload|
1 Should be scaled up as needed
2 Refers to shell nodes managed by Arvados, that provide ssh access for users to interact with Arvados at the command line. Optional.
For a small demo installation, it is possible to run all the Arvados services on a single node. Special considerations for single-node installs will be noted in boxes like this.
Each Arvados installation is identified by a cluster identifier, which is a unique 5-character lowercase alphanumeric string. There are 36^5 = 60,466,176 possible cluster identifiers.
Here is one way to make a random 5-character string:
~$ tr -dc 0-9a-z </dev/urandom | head -c5; echo
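If you want to double-check the result, a brief bash-specific sketch like the following (illustrative only) confirms that a generated string has the expected length and character set:

```shell
# Generate a candidate cluster ID and validate its format (bash)
id=$(tr -dc 0-9a-z </dev/urandom | head -c5)
if [[ $id =~ ^[a-z0-9]{5}$ ]]; then
  echo "valid cluster ID: $id"
else
  echo "unexpected output: $id" >&2
fi
```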
You may also use a different method to pick the cluster identifier. The cluster identifier will be part of the hostname of the services in your Arvados cluster. The rest of this documentation will refer to it as your ClusterID. Whenever ClusterID appears in a configuration example, replace it with your five-character cluster identifier.
The following services are normally public-facing and require DNS entries and corresponding TLS certificates. Get certificates from your preferred TLS certificate provider. We recommend using Let’s Encrypt. You can run several services on the same node, but each distinct DNS name requires a valid, matching TLS certificate.
This guide uses the following DNS name conventions. A later part of this guide will describe how to set up Nginx virtual hosts.
|Function||DNS name|
|Arvados Git server||git.ClusterID.example.com|
|Arvados Websockets endpoint||ws.ClusterID.example.com|
|Arvados Workbench 2||workbench2.ClusterID.example.com|
|Arvados Keepproxy server||keep.ClusterID.example.com|
|Arvados Keep-web server||download.ClusterID.example.com|
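Given the naming conventions above, you can enumerate the public-facing DNS names with a short script. This is only a sketch; `xxxxx` and `example.com` are placeholders for your own cluster ID and base domain:

```shell
# Build the list of public-facing DNS names (bash)
ClusterID=xxxxx       # placeholder: your 5-character cluster ID
DOMAIN=example.com    # placeholder: your base domain

names=()
for prefix in git. ws. workbench2. keep. download.; do
  names+=("${prefix}${ClusterID}.${DOMAIN}")
done
printf '%s\n' "${names[@]}"
```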
Setting up Arvados is easiest when wildcard TLS and wildcard DNS are available. It is also possible to set up Arvados without them, but not having a wildcard for keep-web (i.e. not having *.collections.ClusterID.example.com) comes with a tradeoff: it disables some features that allow users to view Arvados-hosted data in their browsers. More information on this tradeoff, caused by the CORS rules applied by modern browsers, is available in the keep-web URL pattern guide.
The table below lists the required TLS certificates and DNS names in each scenario.
|Wildcard TLS and DNS available||Wildcard TLS available||Other|
It is also possible to create your own certificate authority, issue server certificates, and install a custom root certificate in the browser. This is out of scope for this guide.
The content of this documentation is licensed under the Creative Commons Attribution-Share Alike 3.0 United States license.
Code samples in this documentation are licensed under the Apache License, Version 2.0.