For Arvados administrators, this page will cover what you need to know and do in order to ensure a smooth upgrade of your Arvados installation. For general release notes covering features added and bugs fixed, see Arvados releases.
Upgrade instructions can be found at Maintenance and upgrading.
Some versions introduce changes that require special attention when upgrading: e.g., there is a new service to install, or there is a change to the default configuration that you might need to override in order to preserve the old behavior. These notes are listed below, organized by release version. Scroll down to the version number you are upgrading to.
Starting from 2.7.4, Arvados no longer supports CentOS. CentOS users should migrate to an Arvados-supported version of Red Hat Enterprise Linux (RHEL), Rocky Linux or AlmaLinux.
There are no other configuration changes requiring administrator attention in this release.
There are no configuration changes requiring administrator attention in this release.
If you use the LSF or Slurm dispatcher, ensure the new API.MaxGatewayTunnels config entry is high enough to support the size of your cluster. See LSF docs or Slurm docs for details.
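For example, a larger cluster might raise the limit with a fragment like this in /etc/arvados/config.yml (the value shown is purely illustrative):
Clusters:
  zzzzz:
    API:
      MaxGatewayTunnels: 1000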
The LSF dispatcher now supports the configuration parameter Containers.LSF.MaxRunTimeDefault, which provides the default value of max_run_time for containers that do not specify a time limit (using CWL ToolTimeLimit).
It also supports the configuration parameter Containers.LSF.MaxRunTimeOverhead: when scheduling_constraints.max_run_time or MaxRunTimeDefault is non-zero, this adds time to account for crunch-run startup/shutdown overhead.
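A rough sketch of these two settings in /etc/arvados/config.yml (the values and the duration syntax are assumptions for illustration; consult the configuration reference for the exact format):
Clusters:
  zzzzz:
    Containers:
      LSF:
        MaxRunTimeDefault: 24h   # assumed example: default limit for containers with no max_run_time
        MaxRunTimeOverhead: 5m   # assumed example: extra time for crunch-run startup/shutdown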
The default configuration value API.MaxConcurrentRequests
(the number of concurrent requests that will be processed by a single instance of an arvados service process) is raised from 8 to 64.
A new configuration key API.MaxConcurrentRailsRequests
(default 8) limits the number of concurrent requests processed by a RailsAPI service process.
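In /etc/arvados/config.yml these defaults correspond to:
Clusters:
  zzzzz:
    API:
      MaxConcurrentRequests: 64        # per arvados service process
      MaxConcurrentRailsRequests: 8    # per RailsAPI service process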
When scheduling a container, Arvados now considers using instance types other than the lowest-cost type consistent with the container’s resource constraints. If a larger instance is already running and idle, or the cloud provider reports that the optimal instance type is not currently available, Arvados will select a larger instance type, provided the cost does not exceed 1.5x the optimal instance type cost.
This will typically reduce overall latency for containers and reduce instance booting/shutdown overhead, but may increase costs depending on workload and instance availability. To avoid this behavior, configure Containers.MaximumPriceFactor: 1.0.
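That setting looks like this in /etc/arvados/config.yml:
Clusters:
  zzzzz:
    Containers:
      MaximumPriceFactor: 1.0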
The internal communication between keepstore and keep-balance about read-only volumes has changed. After keep-balance is upgraded, old versions of keepstore will be treated as read-only. We recommend upgrading and restarting all keepstore services first, then upgrading and restarting keep-balance.
Starting with Arvados 2.7, a new system for fetching live container logs is in place. This system features significantly reduced database load compared to previous releases. When Workbench or another application needs to access the logs of a process (running or completed), they should use the log endpoint of container_requests which forwards requests to the running container. This supersedes the previous system where compute processes would send all of their logs to the database, which produced significant load.
The legacy logging system is now disabled by default for all installations with the setting Containers.Logging.LimitLogBytesForJob: 0. If you have an existing Arvados installation where you have customized this value and do not need the legacy container logging system, we recommend removing LimitLogBytesForJob from your configuration.
If you need to re-enable the legacy logging system, set Containers.Logging.LimitLogBytesForJob to a positive value (the previous default was Containers.Logging.LimitLogBytesForJob: 67108864).
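For example, re-enabling the legacy system with its previous default limit:
Clusters:
  zzzzz:
    Containers:
      Logging:
        LimitLogBytesForJob: 67108864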
The original Arvados Workbench application (referred to as “Workbench 1”) is deprecated and will be removed in a future major version of Arvados. Users are advised to migrate to “Workbench 2”. Starting with this release, new installations of Arvados will only set up Workbench 2 and no longer include Workbench 1 by default.
It is also important to note that Workbench 1 only supports the legacy logging system, which is now disabled by default. If you need to re-enable the legacy logging system, see above.
The domain_name variable at terraform/vpc/terraform.tfvars and the DOMAIN variable at local.params have changed their meaning. In previous versions they were used in combination with cluster_name and CLUSTER to build the cluster’s domain name (e.g.: cluster_name.domain_name). To allow use of an arbitrary cluster domain, the cluster prefix is no longer enforced as part of the domain, so domain_name and DOMAIN now need to hold the entire domain for the given cluster.
For example, if cluster_name is set to "xarv1" and domain_name was previously set to "example.com", it should now be set to "xarv1.example.com" to keep using the same cluster domain.
The reported number of CPUs available in a container is now formatted in crunchstat.txt log files and crunchstat-summary text reports as a floating-point number rather than an integer (2.00 cpus rather than 2 cpus). Programs that parse these files may need to be updated accordingly.
In the Users section of your cluster configuration, there are now several options to control what system resources are or are not managed by arvados-login-sync. These options all have names that begin with Sync.
The defaults for all of these options match the previous behavior of arvados-login-sync except for SyncIgnoredGroups. This list names groups that arvados-login-sync will never modify by adding or removing members. As a security precaution, the default list names security-sensitive system groups on Debian- and Red Hat-based distributions. If you are using Arvados to manage system group membership on shell nodes, especially sudo or wheel, you may want to provide your own list. Set SyncIgnoredGroups: [] to restore the original behavior of ignoring no groups.
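A sketch of a customized list (the group names shown are only examples):
Clusters:
  zzzzz:
    Users:
      # never add or remove members of these groups; use [] to ignore no groups at all
      SyncIgnoredGroups: ["sudo", "wheel"]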
We have introduced a small exception to the previous behavior of Arvados API token scopes in this release. A valid token is now always allowed to issue a request to GET /arvados/v1/api_client_authorizations/current
regardless of its scopes. This allows clients to reliably determine whether a request failed because a token is invalid, or because the token is not permitted to perform a particular request. The API server itself needs to be able to do this to validate tokens issued by other clusters in a federation.
The legacy APIs humans, specimens, traits, jobs, job_tasks, pipeline_instances, pipeline_templates, nodes, repositories, and keep_disks are deprecated and will be removed in a future major version of Arvados.
In addition, the default_owner_uuid
, api_client_id
, and user_id
fields of api_client_authorizations are deprecated and will be removed from api_client_authorization
responses in a future major version of Arvados. This should not affect clients as default_owner_uuid
was never implemented, and api_client_id
and user_id
returned internal ids that were not meaningful or usable with any other API call.
The old “v1” S3 driver for keepstore has been removed. The new “v2” implementation, which has been the default since Arvados 2.5.0, is always used. The Volumes.*.DriverParameters.UseAWSS3v2Driver
configuration key is no longer recognized. If your config file uses it, remove it to avoid warning messages at startup.
The Python SDK has always provided functionality to retry API requests that fail due to temporary problems like network failures, by passing num_retries=N
to a request’s execute()
method. In this release, API client constructor functions like arvados.api
also accept a num_retries
argument. This value is stored on the client object and used as a floor for all API requests made with this client. This allows developers to set their preferred retry strategy once, without having to pass it to each execute()
call.
The default value for num_retries
in API constructor functions is 10. This means that an API request that repeatedly encounters temporary problems may spend up to about 35 minutes retrying in the worst case. We believe this is an appropriate default for most users, where eventual success is a much greater concern than responsiveness. If you have client applications where this is undesirable, update them to pass a lower num_retries
value to the constructor function. You can even pass num_retries=0
to have the API client act as it did before, like this:
import arvados
arv_client = arvados.api('v1', num_retries=0, ...)
The first time the Python SDK fetches an Arvados API discovery document, it will ensure that googleapiclient.http
logs are handled so you have a way to know about early problems that are being retried. If you prefer to handle these logs your own way, just ensure that the googleapiclient.http
logger (or a parent logger) has a handler installed before you call any Arvados API client constructor.
This version introduces a new API feature which is used by Workbench 2 to improve page loading performance. To avoid any errors using the new Workbench with an old API server, be sure to upgrade the API server before upgrading Workbench 2.
The migration which de-duplicates permission links has been optimized. We recommend upgrading from 2.5.0 directly to 2.6.1 in order to avoid the slow permission de-duplication migration in 2.6.0.
You should still plan for the arvados-api-server package upgrade to take longer than usual due to the database schema update changing the integer id column in each table from 32-bit to 64-bit.
Ensure your internal keep-web service addresses are listed in the Services.WebDAV.InternalURLs
section of your configuration file, and reachable from controller processes, as noted on the updated install page.
Important! This upgrade includes a database schema update changing the integer id column in each table from 32-bit to 64-bit. Because it touches every row in the table, on moderate to large sized installations this may be very slow (on the order of hours). Plan for the arvados-api-server package upgrade to take longer than usual.
The configuration value API.MaxConcurrentRequests
(the number of concurrent requests that will be accepted by a single instance of arvados-controller) now has a default value of 64, instead of being unlimited.
New configuration value API.LogCreateRequestFraction
(default 0.50) limits requests that post live container logs to the API server, to avoid situations where log messages crowd out other more important requests.
A new configuration option, CloudVMs.SupervisorFraction (default 0.30), limits the number of concurrent workflow supervisors, to avoid situations where too many workflow runners crowd out actual workers.
There is a new configuration entry CloudVMs.MaxInstances
(default 64) that limits the number of VMs the cloud dispatcher will run at a time. This may need to be adjusted to suit your anticipated workload.
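Taken together, the new entries look like this in /etc/arvados/config.yml (values shown are the defaults; CloudVMs sits under Containers in the standard config layout):
Clusters:
  zzzzz:
    API:
      LogCreateRequestFraction: 0.50
    Containers:
      CloudVMs:
        MaxInstances: 64
        SupervisorFraction: 0.30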
Using the obsolete configuration entry MaxCloudVMs
, which was previously accepted in config files but not obeyed, will now result in a deprecation warning.
The frequency that keep-balance
will run (Collections.BalancePeriod
) has been changed from every 10 minutes to every 6 hours.
All dispatchers (cloud, LSF, and Slurm) now connect directly to the PostgreSQL database. Make sure these connections are supported by your network firewall rules, PostgreSQL connection settings, and PostgreSQL server configuration (in pg_hba.conf
) as shown in the PostgreSQL install instructions.
If you use OpenID Connect or Google login, and your cluster serves as the LoginCluster
in a federation or your users log in from a web application other than the Workbench1 and Workbench2 ExternalURL
addresses in your configuration file, the additional web application URLs (e.g., the other clusters’ Workbench addresses) must be listed explicitly in Login.TrustedClients
, otherwise login will fail. Previously, login would succeed with a less-privileged token.
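A sketch of such an entry, where the URL is a placeholder for another trusted web application (for example, another cluster's Workbench):
Clusters:
  zzzzz:
    Login:
      TrustedClients:
        "https://workbench.cluster2.example.com": {}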
A more actively maintained S3 client library is now enabled by default for keepstore services. The previous driver is still available for use in case of unknown issues. To use the old driver, set DriverParameters.UseAWSS3v2Driver to false on the appropriate Volumes config entries.
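For example, to revert a single volume (the volume UUID shown is a placeholder):
Clusters:
  zzzzz:
    Volumes:
      zzzzz-nyw5e-000000000000000:
        DriverParameters:
          UseAWSS3v2Driver: false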
Cached copies of log entries from containers that finished more than 1 month ago are now deleted automatically (this only affects the “live” logs saved in the PostgreSQL database, not log collections saved in Keep). If you have an existing cron job that runs rake db:delete_old_container_logs
, you can remove it. See configuration options Containers.Logging.MaxAge
and Containers.Logging.SweepInterval
.
If you manage your cluster using the salt installer, you may want to update it to the latest version, use the appropriate config_examples subdirectory, and re-deploy with your custom local.params file so that arvados-controller’s nginx configuration file gets fixed.
If you have arvados-login-sync
running on a satellite cluster, please update the environment variable settings by removing the LOGINCLUSTER_ARVADOS_API_*
variables and setting ARVADOS_API_TOKEN
to a LoginCluster’s admin token, as described on the updated install page.
Metrics previously reported by keep-web (arvados_keepweb_collectioncache_requests
, ..._hits
, ..._pdh_hits
, ..._api_calls
, ..._cached_manifests
, and arvados_keepweb_sessions_cached_collection_bytes
) have been replaced with arvados_keepweb_cached_session_bytes
.
The config entries Collections.WebDAVCache.UUIDTTL
, ...MaxCollectionEntries
, and ...MaxUUIDEntries
are no longer used, and should be removed from your config file.
This update only consists of improvements to arvados-cwl-runner
. There are no changes to backend services.
In Arvados 2.4.2 and earlier, when using PAM authentication, if a user presented valid credentials but the account is disabled or otherwise not allowed to access the host, it would still be accepted for access to Arvados. From 2.4.3 onwards, Arvados now also checks that the account is permitted to access the host before completing the PAM login process.
Other authentication methods (LDAP, OpenID Connect) are not affected by this flaw.
GitHub Security Lab (GHSL) reported a remote code execution (RCE) vulnerability in the Arvados Workbench that allows authenticated attackers to execute arbitrary code via specially crafted JSON payloads.
This vulnerability is fixed in 2.4.2 (#19316).
It is likely that this vulnerability exists in all versions of Arvados up to 2.4.1.
This vulnerability is specific to the Ruby on Rails Workbench application (“Workbench 1”). We do not believe any other Arvados components, including the TypeScript browser-based Workbench application (“Workbench 2”) or API Server, are vulnerable to this attack.
As a precaution, Arvados 2.4.2 includes security updates for Ruby on Rails and the TZInfo Ruby gem. However, there are no known exploits in Arvados based on these CVEs.
There is now a configuration option Workbench.DisableSharingURLsUI for admins to disable the user interface for the “sharing link” feature (URLs which can be sent to users to access the data in a specific collection in Arvados without an Arvados account), for organizations where sharing links violate their data sharing policy.
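The option is a single flag under Workbench:
Clusters:
  zzzzz:
    Workbench:
      DisableSharingURLsUI: true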
If you use the Slurm dispatcher (crunch-dispatch-slurm
) you must add a Services.DispatchSLURM.InternalURLs
section to your configuration file, as shown on the updated install page.
We now recommend disabling nginx proxy caching for arvados-controller, to avoid truncation of large responses.
In your Nginx configuration file (/etc/nginx/conf.d/arvados-api-and-controller.conf
), add the following lines to the location /
block with http://controller
(see Update nginx configuration for an example) and reload/restart Nginx (sudo nginx -s reload
).
proxy_max_temp_file_size 0;
proxy_request_buffering off;
proxy_buffering off;
proxy_http_version 1.1;
The compute image build script now installs Singularity 3.9.9 instead of 3.7.4. The newer version includes a bugfix that should resolve intermittent loopback device errors when running containers.
arvados-cwl-runner --create-workflow and --update-workflow
When using arvados-cwl-runner --create-workflow or --update-workflow, by default it will now make a copy of all collection and Docker image dependencies in the target project. Running workflows retains the old behavior (use the dependencies wherever they are found). This can be controlled explicitly with --copy-deps and --no-copy-deps.
When requesting a list of objects without an explicit order
parameter, the default order has changed from modified_at desc, uuid asc
to modified_at desc, uuid desc
. This means that if two objects have identical modified_at
timestamps, the tiebreaker will now be based on uuid
in descending order where previously it would be ascending order. The practical effect of this should be minor; with microsecond precision it is unusual to have two records with exactly the same timestamp, and order-sensitive queries should already provide an explicit order
parameter.
Ubuntu 18.04 ships with Python 3.6 as the default version of Python 3. Ubuntu also ships a version of Python 3.8, and the Arvados Python packages (python3-arvados-cwl-runner
, python3-arvados-fuse
, python3-arvados-python-client
, python3-arvados-user-activity
and python3-crunchstat-summary
) now depend on the python-3.8
system package.
This means that they are now installed under /usr/share/python3.8
(before, the path was /usr/share/python3
). If you rely on the python3
executable from the packages (e.g. to load a virtualenv), you may need to update the path to that executable.
The minimum supported Ruby version is now 2.6. If you are running Arvados on Debian 10 or Ubuntu 18.04, you may need to switch to using RVM or upgrade your OS. See Install Ruby and Bundler for more information.
The anonymous token configured in Users.AnonymousUserToken
must now be 32 characters or longer. This was already the suggestion in the documentation, now it is enforced. The script/get_anonymous_user_token.rb
script that was needed to register the anonymous user token in the database has been removed. Registration of the anonymous token is no longer necessary.
The Containers.UsePreemptibleInstances option has been renamed to Containers.AlwaysUsePreemptibleInstances and has the same behavior when true and one or more preemptible instances are configured. However, a value of false no longer disables support for preemptible instances; instead, users can now enable use of preemptible instances at the level of an individual workflow or workflow step.
In addition, there is a new configuration option Containers.PreemptiblePriceFactor that will automatically add a preemptible instance type corresponding to each regular instance type. See Using Preemptible instances for details.
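A sketch of both settings (the 0.8 factor is an arbitrary illustrative value):
Clusters:
  zzzzz:
    Containers:
      AlwaysUsePreemptibleInstances: false
      PreemptiblePriceFactor: 0.8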
If you use LSF and your configuration specifies Containers.LSF.BsubArgumentsList
, you should update it to include the new arguments ("-R", "select[mem>=%MMB]", ...
, see configuration reference). Otherwise, containers that are too big to run on any LSF host will remain in the LSF queue instead of being cancelled.
Arvados now supports requesting NVIDIA CUDA GPUs for cloud and LSF (Slurm is currently not supported). To be able to request GPU nodes, some additional configuration is needed:
Including GPU support in cloud compute node image
Configure cloud dispatcher for GPU support
The permission model has changed such that all role groups are visible to all active users. This enables users to share objects with groups they don’t belong to. To preserve the previous behavior, where role groups are only visible to members and admins, add RoleGroupsVisibleToAll: false
to the Users
section of your configuration file.
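In /etc/arvados/config.yml that looks like:
Clusters:
  zzzzz:
    Users:
      RoleGroupsVisibleToAll: false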
Due to a bug in previous versions, the DELETE
operation on a role group caused the group to be flagged as trash in the database, but continue to grant permissions regardless. After upgrading, any role groups that had been trashed this way will be deleted. This might surprise some users if they were relying on permissions that were still in effect due to this bug. Future DELETE
operations on a role group will immediately delete the group and revoke the associated permissions.
When Arvados runs a container via arvados-dispatch-cloud
, the crunch-run
supervisor process now brings up its own keepstore server to handle I/O for mounted collections, outputs, and logs. With the default configuration, the keepstore process allocates one 64 MiB block buffer per VCPU requested by the container. For most workloads this will increase throughput, reduce total network traffic, and make it possible to run more containers at once without provisioning additional keepstore nodes to handle the I/O load.
The buffer allocation can be adjusted via the Containers.LocalKeepBlobBuffersPerVCPU value. Set Containers.LocalKeepBlobBuffersPerVCPU to 0 to disable this feature and preserve the previous behavior of sending container I/O traffic to your separately provisioned keepstore servers. This feature is enabled only if no volumes use AccessViaHosts, and no volumes have underlying Replication less than Collections.DefaultReplication. If the feature is configured but cannot be enabled due to an incompatible volume configuration, this will be noted in the crunch-run.txt file in the container log.
When a new user is set up (either via the AutoSetupNewUsers config or via the Workbench admin interface) the user immediately becomes visible to other users. To revert to the previous behavior, where the administrator must add two users to the same group using the Workbench admin interface in order for the users to see each other, change the new Users.ActivatedUsersAreVisibleToOthers config to false.
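The corresponding entry:
Clusters:
  zzzzz:
    Users:
      ActivatedUsersAreVisibleToOthers: false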
If your installation uses the vocabulary feature on Workbench2, you will need to update the cluster configuration by moving the vocabulary definition file to the node where controller
runs, and set the API.VocabularyPath
configuration parameter to the local path where the file was placed.
This will enable the vocabulary checking cluster-wide, including Workbench2. The Workbench.VocabularyURL
configuration parameter is deprecated and will be removed in a future release.
You can read more about how this feature works on the admin page.
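A minimal sketch, assuming the vocabulary file was copied to an example path on the controller node:
Clusters:
  zzzzz:
    API:
      VocabularyPath: /etc/arvados/vocabulary.json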
Ubuntu 18.04 ships with Bundler version 1.16.1, which is no longer compatible with the Gemfiles in the Arvados packages (made with Bundler 2.2.19). The Ubuntu 18.04 packages for arvados-api-server and arvados-workbench now conflict with the ruby-bundler package to work around this issue. The post-install scripts for arvados-api-server and arvados-workbench install the proper version of Bundler as a gem.
update_uuid endpoint for users
The update_uuid endpoint was superseded by the link accounts feature, so it’s no longer available.
The ‘@@’ full text search operator, previously deprecated, has been removed. To perform a string search across multiple columns, use the ‘ilike’ operator on ‘any’ column as described in the available list method filter section of the API documentation.
If your configuration uses the StorageClasses attribute on any Keep volumes, you must add a new StorageClasses
section that lists all of your storage classes. Refer to the updated documentation about configuring storage classes for details.
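A rough sketch of the new section together with a volume that uses one of the classes (class names and the volume UUID are placeholders; see the storage classes documentation for the authoritative schema):
Clusters:
  zzzzz:
    StorageClasses:
      default:
        Default: true
      archival:
        Default: false
    Volumes:
      zzzzz-nyw5e-000000000000000:
        StorageClasses:
          archival: true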
Make sure the keep-balance process can connect to your PostgreSQL server using the settings in your config file. (In previous versions, keep-balance accessed the database through controller instead of connecting to the database server directly.)
The crunch-dispatch-local
dispatcher now reads the API host and token from the system wide /etc/arvados/config.yml
. It will fail to start if that file is not found or not readable.
Typically a docker image collection contains a single .tar
file at the top level. Handling of atypical cases has changed. If a docker image collection contains files with extensions other than .tar
, they will be ignored (previously they could cause errors). If a docker image collection contains multiple .tar
files, it will cause an error at runtime, “cannot choose from multiple tar files in image collection” (previously one of the .tar
files was selected). Subdirectories are ignored. The arv keep docker
command always creates a collection with a single .tar
file, and never uses subdirectories, so this change will not affect most users.
If you use the S3 driver for Keep volumes and specify credentials in your configuration file (as opposed to using an IAM role), you should change the spelling of the AccessKey
and SecretKey
config keys to AccessKeyID
and SecretAccessKey
. If you don’t update them, the previous spellings will still be accepted, but warnings will be logged at server startup.
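For example (volume UUID and credentials are placeholders):
Clusters:
  zzzzz:
    Volumes:
      zzzzz-nyw5e-000000000000000:
        Driver: S3
        DriverParameters:
          AccessKeyID: "AKIAEXAMPLE"
          SecretAccessKey: "example-secret-key"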
In your Nginx configuration file (/etc/nginx/conf.d/arvados-api-and-controller.conf
), add the following lines to the location /
block with http://controller
(see Update nginx configuration for an example) and reload/restart Nginx (sudo nginx -s reload
).
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
preserve_version attribute semantics
The preserve_version attribute on collections was originally designed to allow clients to persist a preexisting collection version. This forced clients to make 2 requests if the intention is to “make this set of changes in a new version that will be kept”, so we have changed the semantics to do just that: when passing preserve_version=true along with other collection updates, the current version is persisted and the newly created one will also be persisted on the next update.
System services now log a warning at startup if any of the system tokens (ManagementToken
, SystemRootToken
, and Collections.BlobSigningKey
) are less than 32 characters, or contain characters other than a-z, A-Z, and 0-9. After upgrading, run arvados-server config-check
and update your configuration file if needed to resolve any warnings.
The API.RailsSessionSecretToken
configuration key has been removed. Delete this entry from your configuration file after upgrading.
Now that Python 3 is part of the base repository in CentOS 7, the Python 3 dependency for Centos7 Arvados packages was changed from SCL rh-python36 to python3.
The ForceLegacyAPI14 configuration option has been removed. In the unlikely event it is mentioned in your config file, remove it to avoid “deprecated/unknown config” warning logs.
A satellite cluster that delegates its user login to a central user database must only have `Login.LoginCluster` set, or it will return an error. This is a change in behavior, previously it would return an error if another login provider was not configured, even though the provider would never be used.
We no longer publish Python 2 based distribution packages for our Python components. There are equivalent packages based on Python 3, but their names are slightly different. If you were using the Python 2 based packages, you can install the Python 3 based package as a drop-in replacement. On Debian and Ubuntu:
apt remove python-arvados-fuse && apt install python3-arvados-fuse
apt remove python-arvados-python-client && apt install python3-arvados-python-client
apt remove python-arvados-cwl-runner && apt install python3-arvados-cwl-runner
apt remove python-crunchstat-summary && apt install python3-crunchstat-summary
apt remove python-cwltest && apt install python3-cwltest
On CentOS:
yum remove python-arvados-fuse && yum install python3-arvados-fuse
yum remove python-arvados-python-client && yum install python3-arvados-python-client
yum remove python-arvados-cwl-runner && yum install python3-arvados-cwl-runner
yum remove python-crunchstat-summary && yum install python3-crunchstat-summary
yum remove python-cwltest && yum install python3-cwltest
The minimum supported Ruby version is now 2.5. If you are running Arvados on Debian 9 or Ubuntu 16.04, you may need to switch to using RVM or upgrade your OS. See Install Ruby and Bundler for more information.
The Python-based PAM package has been replaced with a version written in Go. See using PAM for authentication for details.
The SSO (single sign-on) component is deprecated and will not be supported in future releases. Existing configurations will continue to work in this release, but you should switch to one of the built-in authentication mechanisms as soon as possible. See setting up web based login for details.
After migrating your configuration, uninstall the arvados-sso-provider
package.
Keepstore now uses V4 signatures by default for S3 requests. If you are using Amazon S3, no action is needed; all regions support V4 signatures. If you are using a different S3-compatible service that does not support V4 signatures, add V2Signature: true
to your volume driver parameters to preserve the old behavior. See configuring S3 object storage.
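A sketch for an S3-compatible service without V4 support (the endpoint and volume UUID are placeholders):
Clusters:
  zzzzz:
    Volumes:
      zzzzz-nyw5e-000000000000000:
        Driver: S3
        DriverParameters:
          Endpoint: "https://s3.example.com"
          V2Signature: true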
Some constraints on the permission system have been added, in particular role
and project
group types now have distinct behavior. These constraints were already de-facto imposed by the Workbench UI, so on most installations the only effect of this migration will be to reassign role
groups to the system user and create a can_manage
permission link for the previous owner.
The group_class field must be either role or project. Invalid group_class values are migrated to role.
A role cannot own things. Anything owned by a role is migrated to a can_manage link and reassigned to the system user.
Only role and user can have outgoing permission links. Permission links originating from projects are deleted by the migration.
A role is always owned by the system_user. When a group is created, it creates a can_manage link for the object that would have been assigned to owner_uuid. Migration adds can_manage links and reassigns roles to the system user. This also has the effect of requiring that all role groups have unique names on the system. If there is a name collision during migration, roles will be renamed to ensure they are unique.
A permission link can have its permission level (name) updated, but not head_uuid, tail_uuid or link_class.
The arvados-sync-groups
tool has been updated to reflect these constraints, so it is important to use the version of arvados-sync-groups
that matches the API server version.
Before upgrading, use the following commands to find out which groups and permissions in your database will be automatically modified or deleted during the upgrade.
To determine which groups have invalid group_class
(these will be migrated to role
groups):
arv group list --filters '[["group_class", "not in", ["project", "role"]]]'
To list all role
groups, which will be reassigned to the system user (unless owner_uuid
is already the system user):
arv group list --filters '[["group_class", "=", "role"]]'
To list which project
groups have outgoing permission links (such links are now invalid and will be deleted by the migration):
for uuid in $(arv link list --filters '[["link_class", "=", "permission"], ["tail_uuid", "like", "%-j7d0g-%"]]' | jq -r .items[].tail_uuid | sort | uniq) ; do
    arv group list --filters '[["group_class", "=", "project"], ["uuid", "=", "'$uuid'"]]' | jq .items
done
As a side effect of new permission system constraints, “star” links (indicating shortcuts in Workbench) that were previously owned by “All users” (which is now a “role” and cannot own things) will be migrated to a new system project called “Public favorites” which is readable by the “Anonymous users” role.
Arvados 2.0 is a major upgrade, with many changes. Please read these upgrade notes carefully before you begin.
See Migrating Configuration for notes on migrating legacy per-component configuration files to the new centralized /etc/arvados/config.yml
.
To ensure a smooth transition, the per-component config files continue to be read, and take precedence over the centralized configuration. Your cluster should continue to function after upgrade but before doing the full configuration migration. However, several services (keepstore, keep-web, keepproxy) require a minimal `/etc/arvados/config.yml` to start:
Clusters:
  zzzzz:
    Services:
      Controller:
        ExternalURL: "https://zzzzz.example.com"
(feature #14714 ) The keep-balance service can now be configured using the centralized configuration file at /etc/arvados/config.yml
. The following command line and configuration options have changed.
You can no longer specify types of keep services to balance via the KeepServiceTypes
config option in the legacy config at /etc/arvados/keep-balance/keep-balance.yml
. If you are still using the legacy config and KeepServiceTypes
has a value other than “disk”, keep-balance will produce an error.
You can no longer specify individual keep services to balance via the config.KeepServiceList
command line option or KeepServiceList
legacy config option. Instead, keep-balance will operate on all keepstore servers with service_type:disk
as reported by the arv keep_service list
command. If you are still using the legacy config, KeepServiceList
should be removed or keep-balance will produce an error.
Please see the config migration guide and keep-balance install guide for more details.
(feature #14712 ) The arv-git-httpd package can now be configured using the centralized configuration file at /etc/arvados/config.yml
. Configuration via individual command line arguments is no longer available. Please see arv-git-httpd’s config migration guide for more details.
keepstore and keep-web no longer support configuration via (previously deprecated) command line configuration flags and environment variables.
keep-web now supports the legacy keep-web.yml
config format (used by Arvados 1.4) and the new cluster config file format. Please check keep-web’s install guide for more details.
keepstore now supports the legacy keepstore.yml
config format (used by Arvados 1.4) and the new cluster config file format. Please check the keepstore config migration notes and keepstore install guide for more details.
(feature #14715 ) Keepproxy can now be configured using the centralized config at /etc/arvados/config.yml
. Configuration via individual command line arguments is no longer available and the DisableGet
, DisablePut
, and PIDFile
configuration options are no longer supported. If you are still using the legacy config and DisableGet
or DisablePut
are set to true or PIDFile
has a value, keepproxy will produce an error and fail to start. Please see keepproxy’s config migration guide for more details.
After all keepproxy and keepstore configurations have been migrated to the centralized configuration file, all keep_services records you added manually during installation should be removed. System logs from keepstore and keepproxy at startup, as well as the output of arvados-server config-check
, will remind you to do this.
$ export ARVADOS_API_HOST=...
$ export ARVADOS_API_TOKEN=...
$ arv --format=uuid keep_service list | xargs -n1 arv keep_service delete --uuid
Once these old records are removed, arv keep_service list
will instead return the services listed under Services/Keepstore/InternalURLs and Services/Keepproxy/ExternalURL in your centralized configuration file.
Feature #15106 improves the speed and functionality of full text search by introducing trigram indexes on text searchable database columns via a migration. Prior to updating, you must first install the postgresql-contrib package on your system and subsequently run the CREATE EXTENSION pg_trgm
SQL command on the arvados_production database as a postgres superuser.
The postgres-contrib package has been supported since PostgreSQL version 9.4. The version of the contrib package should match the version of your PostgreSQL installation. Using 9.5 as an example, the package can be installed and the extension enabled using the following:
Centos 7
~$ sudo yum install -y postgresql95-contrib
~$ su - postgres -c "psql -d 'arvados_production' -c 'CREATE EXTENSION IF NOT EXISTS pg_trgm'"
RHEL 7
~$ sudo yum install -y rh-postgresql95-postgresql-contrib
~$ su - postgres -c "psql -d 'arvados_production' -c 'CREATE EXTENSION IF NOT EXISTS pg_trgm'"
Debian or Ubuntu
~$ sudo apt-get install -y postgresql-contrib-9.5
~$ sudo -u postgres psql -d 'arvados_production' -c 'CREATE EXTENSION IF NOT EXISTS pg_trgm'
Subsequently, the psql -d 'arvados_production' -c '\dx'
command will display the installed extensions for the arvados_production database. This list should now contain pg_trgm
.
Workbench 2 is now ready for regular use. Follow the instructions to install workbench 2
(feature #14151) Workbench2 supports a new vocabulary format and it isn’t compatible with the previous one, please read the metadata vocabulary format admin page for more information.
Node manager is deprecated and replaced by arvados-dispatch-cloud
. No automated config migration is available. Follow the instructions to install the cloud dispatcher
Only one dispatch process should be running at a time. If you are migrating a system that currently runs Node manager and crunch-dispatch-slurm
, it is safest to remove the crunch-dispatch-slurm
service entirely before installing arvados-dispatch-cloud
.
~$ sudo systemctl --now disable crunch-dispatch-slurm
~$ sudo apt-get remove crunch-dispatch-slurm
(task #15133) The legacy ‘jobs’ API is now read-only. It has been superseded since Arvados 1.1 by containers / container_requests (aka crunch v2). Arvados installations since the end of 2017 (v1.1.0) have probably only used containers, and are unaffected by this change.
So that older Arvados sites don’t lose access to legacy records, the API has been converted to read-only. Creating and updating jobs (and related types job_task, pipeline_template and pipeline_instance) is disabled and much of the business logic related has been removed, along with various other code specific to the jobs API. Specifically, the following programs associated with the jobs API have been removed: crunch-dispatch.rb
, crunch-job
, crunchrunner
, arv-run-pipeline-instance
, arv-run
.
(issue #15836) By default, Arvados now rejects new names containing the /
character when creating or renaming collections and projects. Previously, these names were permitted, but the resulting objects were invisible in the WebDAV “home” tree. If you prefer, you can restore the previous behavior, and optionally configure a substitution string to make the affected objects accessible via WebDAV. See ForwardSlashNameSubstitution
in the configuration reference.
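For example, to substitute an underscore so the affected objects remain reachable via WebDAV (the replacement string is arbitrary):
Clusters:
  zzzzz:
    Collections:
      ForwardSlashNameSubstitution: "_"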
(bug #15311 ) Strings read from serialized columns in the database with a leading ‘:’ would have the ‘:’ stripped after loading the record. This behavior existed due to legacy serialization behavior which stored Ruby symbols with a leading ‘:’. Unfortunately this corrupted fields where the leading “:” was intentional. This behavior has been removed.
You can test if any records in your database are affected by going to the API server directory and running bundle exec rake symbols:check
. This will report which records contain fields with a leading ‘:’ that would previously have been stripped. If there are records to be updated, you can update the database using bundle exec rake symbols:stringify
.
The API server accepts both PUT and PATCH for updates, but they will be normalized to PATCH by arvados-controller. Scoped tokens should be updated accordingly.
The Python 3 dependency for Centos7 Arvados packages was upgraded from rh-python35 to rh-python36.
As part of story #14484, two new columns were added to the collections table in a database migration. If your installation has a large collections table, this migration may take some time. We’ve seen it take ~5 minutes on an installation with 250k collections, but your mileage may vary.
The new columns are initialized with a zero value. In order to populate them, it is necessary to run a script called populate-file-info-columns-in-collections.rb
from the scripts directory of the API server. This can be done out of band, ideally directly after the API server has been upgraded to v1.4.0.
As a consequence of #14482, the Ruby SDK does a more rigorous collection manifest validation. Collections created after 2015-05 are unlikely to be invalid, however you may check for invalid manifests using the script below.
You could set up a new rvm gemset and install the specific arvados gem for testing, like so:
~$ rvm gemset create rubysdk-test
~$ rvm gemset use rubysdk-test
~$ gem install arvados -v 1.3.1.20190301212059
Next, you can run the following script using admin credentials, it will scan the whole collection database and report any collection that didn’t pass the check:
require 'arvados'
require 'arvados/keep'

api = Arvados.new
offset = 0
batch_size = 100
invalid = []

while true
  begin
    req = api.collection.index(
      :select => [:uuid, :created_at, :manifest_text],
      :include_trash => true, :include_old_versions => true,
      :limit => batch_size, :offset => offset)
  rescue
    invalid.each {|c| puts "#{c[:uuid]} (Created at #{c[:created_at]}): #{c[:error]}" }
    raise
  end

  req[:items].each do |col|
    begin
      Keep::Manifest.validate! col[:manifest_text]
    rescue Exception => e
      puts "Collection #{col[:uuid]} manifest not valid"
      invalid << {uuid: col[:uuid], error: e, created_at: col[:created_at]}
    end
  end
  puts "Checked #{offset} / #{req[:items_available]} - Invalid: #{invalid.size}"
  offset += req[:limit]
  break if offset > req[:items_available]
end

if invalid.empty?
  puts "No invalid collection manifests found"
else
  invalid.each {|c| puts "#{c[:uuid]} (Created at #{c[:created_at]}): #{c[:error]}" }
end
The script will return a final report enumerating any invalid collection by UUID, with its creation date and error message so you can take the proper correction measures, if needed.
As part of story #9945, the distribution packaging (deb/rpm) of our Python packages has changed. These packages now include a built-in virtualenv to reduce dependencies on system packages. We have also stopped packaging and publishing backports for all the Python dependencies of our packages, as they are no longer needed.
One practical consequence of this change is that the use of the Arvados Python SDK (aka “import arvados”) will require a tweak if the SDK was installed from a distribution package. It now requires the loading of the virtualenv environment from our packages. The Install documentation for the Arvados Python SDK reflects this change. This does not affect the use of the command line tools (e.g. arv-get, etc.).
Python scripts that rely on the distribution Arvados Python SDK packages to import the Arvados SDK will need to be tweaked to load the correct Python environment.
This can be done by activating the virtualenv outside of the script:
~$ source /usr/share/python2.7/dist/python-arvados-python-client/bin/activate
(python-arvados-python-client) ~$ path-to-the-python-script
Or alternatively, by updating the shebang line at the start of the script to:
#!/usr/share/python2.7/dist/python-arvados-python-client/bin/python
As part of story #9945, the distribution packaging (deb/rpm) of our Python packages has changed. The python-arvados-cwl-runner package now includes a version of cwltool. If present, the python-cwltool and cwltool distribution packages will need to be uninstalled before the python-arvados-cwl-runner deb or rpm package can be installed.
As part of story #9945, the Python 3 dependency for Centos7 Arvados packages was upgraded from SCL python33 to rh-python35.
As part of story #9945, it was discovered that the Centos7 package for libpam-arvados was missing a dependency on the python-pam package, which is available from the EPEL repository. The dependency has been added to the libpam-arvados package. This means that going forward, the EPEL repository will need to be enabled to install libpam-arvados on Centos7.
Arvados is migrating to a centralized configuration file for all components. During the migration, legacy configuration files will continue to be loaded. See Migrating Configuration for details.
This release corrects a potential data loss issue, if you are running Arvados 1.3.0 or 1.3.1 we strongly recommended disabling keep-balance
until you can upgrade to 1.3.3 or 1.4.0. With keep-balance disabled, there is no chance of data loss.
We’ve put together a wiki page which outlines how to recover blocks which have been put in the trash, but not yet deleted, as well as how to identify any collections which have missing blocks so that they can be regenerated. The keep-balance component has been enhanced to provide a list of missing blocks and affected collections and we’ve provided a utility script which can be used to identify the workflows that generated those collections and who ran those workflows, so that they can be rerun.
This release includes several database migrations, which will be executed automatically as part of the API server upgrade. On large Arvados installations, these migrations will take a while. We’ve seen the upgrade take 30 minutes or more on installations with a lot of collections.
The arvados-controller
component now requires the /etc/arvados/config.yml file to be present.
Support for the deprecated “jobs” API is broken in this release. Users who rely on it should not upgrade. This will be fixed in an upcoming 1.3.1 patch release, however users are encouraged to migrate as support for the “jobs” API will be dropped in an upcoming release. Users who are already using the “containers” API are not affected.
There are no special upgrade notes for this release.
It is recommended to regenerate the table statistics for Postgres after upgrading to v1.2.0. If autovacuum is enabled on your installation, this script would do the trick:
#!/bin/bash
set -e
set -u
tables=`echo "\dt" | psql arvados_production | grep public|awk -e '{print $3}'`
for t in $tables; do
    echo "echo 'analyze $t' | psql arvados_production"
    time echo "analyze $t" | psql arvados_production
done
If you also need to do the vacuum, you could adapt the script to run ‘vacuum analyze’ instead of ‘analyze’.
Commit db5107dca adds a new system service, arvados-controller. More detail is available in story #13496.
To add the Arvados Controller to your system please refer to the installation instructions after upgrading your system to 1.2.0.
Verify your setup by confirming that API calls appear in the controller’s logs (e.g., journalctl -fu arvados-controller
) while loading a workbench page.
Secondary files missing from toplevel workflow inputs
This only affects workflows that rely on implicit discovery of secondaryFiles.
If a workflow input does not declare secondaryFiles
corresponding to the secondaryFiles
of workflow steps which use the input, the workflow would inconsistently succeed or fail depending on whether the input values were specified as local files or referenced an existing collection (and whether the existing collection contained the secondary files or not). To ensure consistent behavior, the workflow is now required to declare in the top level workflow inputs any secondaryFiles that are expected by workflow steps.
As an example, the following workflow will fail because the toplevel_input
does not declare the secondaryFiles
that are expected by step_input
:
class: Workflow
cwlVersion: v1.0
inputs:
  toplevel_input: File
outputs: []
steps:
  step1:
    in:
      step_input: toplevel_input
    out: []
    run:
      id: sub
      class: CommandLineTool
      inputs:
        step_input:
          type: File
          secondaryFiles:
            - .idx
      outputs: []
      baseCommand: echo
When run, this produces an error like this:
cwltool ERROR: [step step1] Cannot make job: Missing required secondary file 'hello.txt.idx' from file object: {
    "basename": "hello.txt",
    "class": "File",
    "location": "keep:ade9d0e032044bd7f58daaecc0d06bc6+51/hello.txt",
    "size": 0,
    "nameroot": "hello",
    "nameext": ".txt",
    "secondaryFiles": []
}
To fix this error, add the appropriate secondaryFiles section to toplevel_input:
class: Workflow
cwlVersion: v1.0
inputs:
  toplevel_input:
    type: File
    secondaryFiles:
      - .idx
outputs: []
steps:
  step1:
    in:
      step_input: toplevel_input
    out: []
    run:
      id: sub
      class: CommandLineTool
      inputs:
        step_input:
          type: File
          secondaryFiles:
            - .idx
      outputs: []
      baseCommand: echo
This bug has been fixed in Arvados release v1.2.0.
Secondary files on default file inputs
File inputs that have default values and also expect secondaryFiles will fail to upload the default secondaryFiles. As an example, the following case will fail:
class: CommandLineTool
inputs:
  step_input:
    type: File
    secondaryFiles:
      - .idx
    default:
      class: File
      location: hello.txt
outputs: []
baseCommand: echo
When run, this produces an error like this:
2018-05-03 10:58:47 cwltool ERROR: Unhandled error, try again with --debug for more information: [Errno 2] File not found: u'hello.txt.idx'
To fix this, manually upload the primary and secondary files to keep and explicitly declare secondaryFiles
on the default primary file:
class: CommandLineTool
inputs:
  step_input:
    type: File
    secondaryFiles:
      - .idx
    default:
      class: File
      location: keep:4d8a70b1e63b2aad6984e40e338e2373+69/hello.txt
      secondaryFiles:
        - class: File
          location: keep:4d8a70b1e63b2aad6984e40e338e2373+69/hello.txt.idx
outputs: []
baseCommand: echo
This bug has been fixed in Arvados release v1.2.0.
There are no special upgrade notes for this release.
As part of story #11908, commit 8f987a9271 introduces a dependency on Postgres 9.4. Previously, Arvados required Postgres 9.3.
Upgrading the database may require using pg_dump and psql. CentOS 7 users can obtain the rh-postgresql94 backport package from either Software Collections: http://doc.arvados.org/install/install-postgresql.html or the Postgres developers: https://www.postgresql.org/download/linux/redhat/
There are no special upgrade notes for this release.
As part of story #12032, commit 68bdf4cbb1 introduces a dependency on Postgres 9.3. Previously, Arvados required Postgres 9.1.
Upgrading the database may require using pg_dump and psql. CentOS 7 users can obtain the rh-postgresql94 backport package from either Software Collections: http://doc.arvados.org/install/install-postgresql.html or the Postgres developers: https://www.postgresql.org/download/linux/redhat/
As part of story #11807, commit 55aafbb converts old “jobs” database records from YAML to JSON, making the upgrade process slower than usual.
As part of story #9005, commit cb230b0 reduces service discovery overhead in keep-web requests.
As part of story #11349, commit 2c094e2 adds a “management” http server to nodemanager. To enable it, add a [Manage] section to the node manager configuration, for example:
[Manage]
address = 127.0.0.1
port = 8989
The management server reports status at http://{address}:{port}/status.json with a summary of how many nodes are in each state (booting, busy, shutdown, etc.)
As part of story #10766, commit e8cc0d7 replaces puma with arvados-ws as the recommended websocket server.
For example, to remove the old puma service with runit:
$ sudo sv down /etc/sv/puma
$ sudo rm -r /etc/sv/puma
Or with systemd:
$ systemctl disable puma
$ systemctl stop puma
As part of story #11168, commit 660a614 uses JSON instead of YAML to encode hashes and arrays in the database.
As part of story #10969, commit 74a9dec introduces a Docker image format compatibility check: the arv keep docker
command prevents users from inadvertently saving docker images that compute nodes won’t be able to run.
If your compute nodes can only run v1-format Docker images, add docker_image_formats: ["v1"] to the API server configuration file (/etc/arvados/api/application.yml). Refer to the comments surrounding docker_image_formats in /var/www/arvados-api/current/config/application.default.yml or source:services/api/config/application.default.yml, or issue #10969, for more detail.
Several Debian and RPM packages — keep-balance (d9eec0b), keep-web (3399e63), keepproxy (6de67b6), and arvados-git-httpd (9e27ddf) — now enable their respective components using systemd. These components prefer YAML configuration files over command line flags (3bbe1cd).
After installing or upgrading these packages, enable and start each service, e.g. "sudo systemctl enable keep-web; sudo systemctl start keep-web".
If a service starts before its YAML configuration file is in place, it will log an error and exit, e.g. "Sep 26 18:23:55 62751f5bb946 keep-web[74]: 2016/09/26 18:23:55 open /etc/arvados/keep-web/keep-web.yml: no such file or directory"
Commits ae72b172c8 and 3aae316c25 change the filesystem location where Python modules and scripts are installed.
Previously, Python modules and scripts were installed under /usr/local (or the equivalent location in a Software Collection). Now they get installed to a path under /usr. This improves compatibility with other Python packages provided by the distribution. See #9242 for more background.
Commit eebcb5e requires the crunchrunner package to be installed on compute nodes and shell nodes in order to run CWL workflows.
Debian/Ubuntu: sudo apt-get install crunchrunner
CentOS: sudo yum install crunchrunner
Commit 3c88abd changes the Keep permission signature algorithm.
Commit e1276d6e disables Workbench’s “Getting Started” popup by default.
To re-enable it, set enable_getting_started_popup: true in Workbench’s application.yml configuration.
Commit 5590c9ac makes a Keep-backed writable scratch directory available in crunch jobs (see #7751)
Commit 1e2ace5 changes recommended config for keep-web (see #5824). The recommended configuration runs keep-web with the -attachment-only-host download.ClusterID.example.com option and sets the corresponding keep_web_download_url in the Workbench configuration.
Commit 1d1c6de removes stopped containers (see #7444). This makes docker run default to --rm. If you run arvados-docker-cleaner on a host that does anything other than run crunch-jobs, and you still want to be able to use docker start, read the new doc page to learn how to turn this off before upgrading.
Commit 21006cf adds a new keep-web service (see #5824).
The content of this documentation is licensed under the Creative Commons Attribution-Share Alike 3.0 United States licence. Code samples in this documentation are licensed under the Apache License, Version 2.0.