Keepstore provides access to underlying storage for reading and writing content-addressed blocks, with enforcement of Arvados permissions. Keepstore supports a variety of cloud object storage and POSIX filesystems for its backing store.
In the steps below, you will configure a number of backend storage volumes (like local filesystems and S3 buckets) and specify which keepstore servers have read-only and read-write access to which volumes.
It is possible to configure arbitrary server/volume layouts. However, in order to provide good performance and efficient use of storage resources, we strongly recommend keeping the layout simple, along the lines described below.
We recommend starting off with two Keepstore servers. Exact server specifications will be site and workload specific, but in general keepstore will be I/O bound and should be set up to maximize aggregate bandwidth with compute nodes. To increase capacity (either space or throughput) it is straightforward to add additional servers, or (in cloud environments) to increase the machine size of the existing servers.
By convention, we use the following hostname pattern:
| Hostname |
|---|
| keep0.ClusterID.example.com |
| keep1.ClusterID.example.com |
Keepstore servers should not be directly accessible from the Internet (they are accessed via keepproxy), so the hostnames only need to resolve on the private network.
Fill in the Volumes section of config.yml for each storage volume. Available storage volume types include POSIX filesystems and cloud object storage. It is possible to have different volume types in the same cluster.
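For example, a single local filesystem volume served by keep0 might look like the sketch below. The volume identifier, mount point, and hostname are placeholders; see the configuration reference for the full set of supported drivers and their parameters.

Clusters:
  ClusterID:
    Volumes:
      ClusterID-nyw5e-000000000000000:
        # Local POSIX filesystem volume; cloud object storage uses a different driver (e.g. S3).
        Driver: Directory
        DriverParameters:
          Root: /mnt/keep0-data
        AccessViaHosts:
          "http://keep0.ClusterID.example.com:25107":
            ReadOnly: false
        # Replication provided by the underlying storage (1 for a plain local disk).
        Replication: 1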
There are a number of general configuration parameters for Keepstore. They are described in the configuration reference. In particular, you probably want to change API/MaxKeepBlobBuffers to align Keepstore’s memory usage with the available memory on the machine that hosts it.
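For example, to allow roughly 8 GiB of block buffer memory per keepstore process (each buffer holds one 64 MiB block), a sketch of the relevant setting might be:

Clusters:
  ClusterID:
    API:
      # 128 buffers x 64 MiB per block is roughly 8 GiB of buffer memory,
      # in addition to Keepstore's other overhead.
      MaxKeepBlobBuffers: 128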
Add each keepstore server to the Services.Keepstore section of /etc/arvados/config.yml.
Services:
  Keepstore:
    # No ExternalURL because they are only accessed by the internal subnet.
    InternalURLs:
      "http://keep0.ClusterID.example.com:25107": {}
      "http://keep1.ClusterID.example.com:25107": {}
      # and so forth
Install the keepstore package using your distribution's package manager:

# yum install keepstore

or

# apt-get install keepstore

Then enable and start the service, and confirm that it is running:

# systemctl enable --now keepstore
# systemctl status keepstore
[...]
If systemctl status indicates it is not running, use journalctl to check logs for errors:
# journalctl -n12 --unit keepstore
Make sure the cluster config file is up to date on the API server host, then restart the API server and controller processes to ensure the configuration changes are visible to the whole cluster.
# systemctl restart nginx arvados-controller
Log into a host that is on your private Arvados network. The host should be able to contact your keepstore servers (e.g., keep[0-9].ClusterID.example.com).
ARVADOS_API_HOST and ARVADOS_API_TOKEN must be set in the environment. ARVADOS_API_HOST should be the hostname of the API server, and ARVADOS_API_TOKEN should be the system root token.
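For example, in a Bourne-compatible shell (the hostname and token below are placeholders for your cluster's actual values):

$ export ARVADOS_API_HOST=ClusterID.example.com
$ export ARVADOS_API_TOKEN=xxxxxxxxxx-your-system-root-token-xxxxxxxxxx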
Install the Command line SDK
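The command line SDK is distributed as a Ruby gem; assuming Ruby and its build tools are already present, installing it typically amounts to the following (see the linked install page for the authoritative instructions):

# gem install arvados-cli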
Check that the keepstore server is in the keep_service “accessible” list:
$ arv keep_service accessible
[...]
If keepstore does not show up in the “accessible” list, and you are accessing it from within the private network, check that you have properly configured the geo block for the API server.
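A minimal sketch of such a geo block in the API server's nginx configuration, assuming a private subnet of 10.20.30.0/24 (both the subnet and the variable name are illustrative; follow the API server install page for the exact configuration):

geo $external_client {
  # Clients from the private subnet are treated as internal and directed to
  # keepstore; everyone else is directed to keepproxy.
  default        1;
  10.20.30.0/24  0;
}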
Next, install the Python SDK.
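If you are not using distribution packages, the Python SDK and its command line tools (including arv-put and arv-get) can typically be installed from PyPI; the package name here is an assumption, so check the linked page if it does not match:

$ pip install arvados-python-client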
You should now be able to use arv-put to upload collections and arv-get to fetch collections. Be sure to execute these commands from inside the cluster's private network. You will be able to access Keep from outside the private network after setting up keepproxy.
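A minimal round trip through Keep might look like the following sketch; the file name is arbitrary, and the collection UUID printed by arv-put (shown here as a placeholder) will differ on your cluster:

$ echo hello >hello.txt
$ arv-put hello.txt
[...]
ClusterID-4zz18-xxxxxxxxxxxxxxx
$ arv-get ClusterID-4zz18-xxxxxxxxxxxxxxx/hello.txt
hello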
On its own, a keepstore server never deletes data. Instead, the keep-balance service determines which blocks are candidates for deletion and instructs keepstore to move those blocks to the trash. See the Balancing Keep servers documentation for more details.