This is a legacy API. This endpoint is deprecated, disabled by default in new installations, and slated to be removed entirely in a future major release of Arvados. It is replaced by container requests.
API endpoint base: https://pirca.arvadosapi.com/arvados/v1/jobs
Object type: 8i9sb
Example UUID: zzzzz-8i9sb-0123456789abcde
A job describes a work order to be executed by the Arvados cluster.
Each job has, in addition to the Common resource fields:
Attribute | Type | Description | Notes |
---|---|---|---|
script | string | The filename of the job script. | This program will be invoked by Crunch for each job task. It is given as a path to an executable file, relative to the /crunch_scripts directory in the Git tree specified by the repository and script_version attributes. |
script_parameters | hash | The input parameters for the job. | Conventionally, one of the parameters is called "input" . Typically, some parameter values are collection UUIDs. Ultimately, though, the significance of parameters is left entirely up to the script itself. |
repository | string | Git repository name or URL. | Source of the repository where the given script_version is to be found. This can be given as the name of a locally hosted repository, or as a publicly accessible URL starting with git://, http://, or https://. Examples: yourusername/yourrepo, https://github.com/arvados/arvados.git |
script_version | string | Git commit | During a create transaction, this is the Git branch, tag, or hash supplied by the client. Before the job starts, Arvados updates it to the full 40-character SHA-1 hash of the commit used by the job. See Specifying Git versions below for more detail about acceptable ways to specify a commit. |
cancelled_by_client_uuid | string | API client ID | Is null if job has not been cancelled |
cancelled_by_user_uuid | string | Authenticated user ID | Is null if job has not been cancelled |
cancelled_at | datetime | When job was cancelled | Is null if job has not been cancelled |
started_at | datetime | When job started running | Is null if job has not [yet] started |
finished_at | datetime | When job finished running | Is null if job has not [yet] finished |
running | boolean | Whether the job is running | |
success | boolean | Whether the job indicated successful completion | Is null if job has not finished |
is_locked_by_uuid | string | UUID of the user who has locked this job | Is null if job is not locked. The system user locks the job when starting the job, in order to prevent job attributes from being altered. |
node_uuids | array | List of UUID strings for node objects that have been assigned to this job | |
log | string | Collection UUID | Is null if the job has not finished. After the job runs, the given collection contains a text file with log messages provided by the arv-crunch-job task scheduler as well as the standard error streams provided by the task processes. |
tasks_summary | hash | Summary of task completion states. | Example: {"done":0,"running":4,"todo":2,"failed":0} |
output | string | Collection UUID | Is null if the job has not finished. |
nondeterministic | boolean | The job is expected to produce different results if run more than once. | If true, this job will not be considered as a candidate for automatic re-use when submitting subsequent identical jobs. |
submit_id | string | Unique ID provided by client when job was submitted | Optional. This can be used by a client to make the jobs.create method idempotent. |
priority | string | | |
arvados_sdk_version | string | Git commit hash that specifies the SDK version to use from the Arvados repository | This is set by searching the Arvados repository for a match for the arvados_sdk_version runtime constraint. |
docker_image_locator | string | Portable data hash of the collection that contains the Docker image to use | This is set by searching readable collections for a match for the docker_image runtime constraint. |
runtime_constraints | hash | Constraints that must be satisfied by the job/task scheduler in order to run the job. | See below. |
components | hash | Name and uuid pairs representing the child work units of this job. The uuids can be of different object types. | Example components hash: {"name1": "zzzzz-8i9sb-xyz...", "name2": "zzzzz-d1hrv-xyz..."} |
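As a non-normative illustration of how these attributes are typically consumed, the Python sketch below summarizes a job record's progress from the running, success, cancelled_at, and tasks_summary fields. The record shown is a hypothetical, abbreviated example, not real cluster output.
```python
# Minimal sketch: summarize a job record using the attributes documented above.
# The record below is a hypothetical, abbreviated example.
job = {
    "uuid": "zzzzz-8i9sb-0123456789abcde",
    "running": True,
    "success": None,            # null until the job finishes
    "cancelled_at": None,
    "tasks_summary": {"done": 3, "running": 4, "todo": 2, "failed": 0},
}

def describe(job):
    if job.get("cancelled_at"):
        return "cancelled"
    if job.get("success") is True:
        return "completed successfully; output collection: %s" % job.get("output")
    if job.get("success") is False:
        return "failed; see log collection: %s" % job.get("log")
    tasks = job.get("tasks_summary") or {}
    return "in progress: %d of %d tasks done" % (tasks.get("done", 0), sum(tasks.values()))

print(describe(job))   # e.g. "in progress: 3 of 9 tasks done"
```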
The script_version attribute and arvados_sdk_version runtime constraint are typically given as a branch, tag, or commit hash, but there are many more ways to specify a Git commit. The “specifying revisions” section of the gitrevisions manual page has a definitive list. Arvados accepts Git versions in any format listed there that names a single commit (not a tree, a blob, or a range of commits). However, some kinds of names can be expected to resolve differently in Arvados than they do in your local repository. For example, HEAD@{1} refers to the local reflog, and origin/main typically refers to a remote branch: neither is likely to work as desired if given as a Git version.
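If you want to check locally whether a name resolves to a single commit before submitting it, a sketch like the following can help; it shells out to git rev-parse in a clone of the repository, and the ref and repository path are placeholders.
```python
# Sketch: verify locally that a ref names a single commit (not a tree, blob, or range).
# Run inside a clone of the repository you intend to submit; "main" is a placeholder ref.
import subprocess

def resolves_to_commit(ref, repo_dir="."):
    # "<ref>^{commit}" asks git to peel the ref to a commit; rev-parse fails otherwise.
    result = subprocess.run(
        ["git", "-C", repo_dir, "rev-parse", "--verify", "--quiet", ref + "^{commit}"],
        capture_output=True, text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else None

sha = resolves_to_commit("main")
print(sha or "not a single commit")   # full 40-character SHA-1 on success
```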
The runtime_constraints hash accepts the following keys:
Key | Type | Description | Implemented |
---|---|---|---|
arvados_sdk_version | string | The Git version of the SDKs to use from the Arvados git repository. See Specifying Git versions for more detail about acceptable ways to specify a commit. If you use this, you must also specify a docker_image constraint (see below). In order to install the Python SDK successfully, Crunch must be able to find and run virtualenv inside the container. | ✓ |
docker_image | string | The Docker image that this Job needs to run. If specified, Crunch will create a Docker container from this image, and run the Job’s script inside that. The Keep mount and work directories will be available as volumes inside this container. The image must be uploaded to Arvados using arv keep docker. You may specify the image in any format that Docker accepts, such as arvados/jobs, debian:latest, or the Docker image id. Alternatively, you may specify the portable data hash of the image Collection. | ✓ |
min_nodes | integer | | ✓ |
max_nodes | integer | | |
min_cores_per_node | integer | Require that each node assigned to this Job have the specified number of CPU cores | ✓ |
min_ram_mb_per_node | integer | Require that each node assigned to this Job have the specified amount of real memory (in MiB) | ✓ |
min_scratch_mb_per_node | integer | Require that each node assigned to this Job have the specified amount of scratch storage available (in MiB) | ✓ |
max_tasks_per_node | integer | Maximum simultaneous tasks on a single node | ✓ |
keep_cache_mb_per_task | integer | Size of file data buffer for per-task Keep directory ($TASK_KEEPMOUNT), in MiB. Default is 256 MiB. Increase this to reduce cache thrashing in situations such as accessing multiple large (64+ MiB) files at the same time, or accessing different parts of a large file at the same time. | ✓ |
min_ram_per_task | integer | Minimum real memory (KiB) per task | |
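For illustration, a runtime_constraints hash using several of the keys above might look like the following sketch; all values are arbitrary placeholders and the Docker image name is only an example.
```python
# Sketch: a runtime_constraints hash built from the keys documented above.
# Values are arbitrary placeholders; adjust them to your workload.
runtime_constraints = {
    "docker_image": "arvados/jobs",       # an image previously uploaded with `arv keep docker`
    "min_nodes": 1,
    "min_cores_per_node": 4,
    "min_ram_mb_per_node": 8192,          # MiB of real memory per node
    "min_scratch_mb_per_node": 20000,     # MiB of scratch storage per node
    "max_tasks_per_node": 2,
    "keep_cache_mb_per_task": 512,        # larger Keep cache to reduce thrashing
}
```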
See Common resource methods for more information about create, delete, get, list, and update.
Cancel a job that is queued or running.
Arguments:
Argument | Type | Description | Location | Example |
---|---|---|---|---|
uuid | string | | path | |
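A minimal sketch of calling cancel from Python is shown below. It assumes the cancel action is exposed as POST .../jobs/{uuid}/cancel and that the cluster accepts a bearer-style Authorization header; adjust both for your installation, and remember this legacy endpoint may be disabled.
```python
# Sketch: cancel a queued or running job. Assumes the conventional
# POST /arvados/v1/jobs/{uuid}/cancel route and a bearer-style token header;
# both may differ on your cluster, and this legacy endpoint may be disabled.
import os
import requests

API_BASE = "https://pirca.arvadosapi.com/arvados/v1"
TOKEN = os.environ["ARVADOS_API_TOKEN"]

def cancel_job(uuid):
    resp = requests.post(
        f"{API_BASE}/jobs/{uuid}/cancel",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()     # the updated job record, with cancelled_at set

# cancel_job("zzzzz-8i9sb-0123456789abcde")
```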
Create a new Job.
Arguments:
Argument | Type | Description | Location | Example |
---|---|---|---|---|
job | object | Job resource | request body | |
minimum_script_version | string | Git branch, tag, or commit hash specifying the minimum acceptable script version (earliest ancestor) to consider when deciding whether to re-use a past job.1 | query | "c3e86c9" |
exclude_script_versions | array of strings | Git commit branches, tags, or hashes to exclude when deciding whether to re-use a past job. | query | ["8f03c71","8f03c71"] ["badtag1","badtag2"] |
filters | array of arrays | Conditions to find Jobs to reuse. | query | |
find_or_create | boolean | Before creating, look for an existing job that has identical script, script_version, and script_parameters to those in the present job, has nondeterministic=false, and did not fail (it could be queued, running, or completed). If such a job exists, respond with the existing job instead of submitting a new one. | query | false |
When a job is submitted to the queue using the create method, the script_version attribute is updated to a full 40-character Git commit hash based on the current content of the specified repository. If script_version cannot be resolved, the job submission is rejected.
1 See the note about specifying Git commits for more detail.
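The sketch below shows one way to call create from Python, passing the job object together with the reuse-related arguments in a single JSON request body, mirroring the worked examples later on this page. The token header scheme and the availability of this legacy endpoint are assumptions.
```python
# Sketch: submit a job with the create method, asking Arvados to re-use a
# matching past job if one exists. Mirrors the JSON request bodies shown below.
# Assumes a bearer-style token header; this legacy endpoint may be disabled.
import os
import requests

API_BASE = "https://pirca.arvadosapi.com/arvados/v1"
TOKEN = os.environ["ARVADOS_API_TOKEN"]

payload = {
    "job": {
        "script": "hash.py",
        "repository": "you/you",
        "script_version": "main",
        "script_parameters": {"input": "c1bad4b39ca5a924e481008009d94e32+210"},
    },
    "minimum_script_version": "earlier_version_tag",          # optional
    "exclude_script_versions": ["blacklisted_version_tag"],   # optional
    "find_or_create": True,
}

resp = requests.post(
    f"{API_BASE}/jobs",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
job = resp.json()
print(job["uuid"], job["script_version"])   # script_version is now a full SHA-1
```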
Special filter operations are available for specific Job columns.
Filter | Description |
---|---|
script_version in git REFSPEC, arvados_sdk_version in git REFSPEC | Resolve REFSPEC to a list of Git commits, and match jobs with a script_version or arvados_sdk_version in that list. When creating a job and filtering script_version, the search will find commits between REFSPEC and the submitted job’s script_version; all other searches will find commits between REFSPEC and HEAD. This list may include parallel branches if there is more than one path between REFSPEC and the end commit in the graph. Use not in or not in git filters (below) to blacklist specific commits. |
script_version not in git REFSPEC, arvados_sdk_version not in git REFSPEC | Resolve REFSPEC to a list of Git commits, and match jobs with a script_version or arvados_sdk_version not in that list. |
docker_image_locator in docker SEARCH | SEARCH can be a Docker image hash, a repository name, or a repository name and tag separated by a colon (:). The server will find collections that contain a Docker image that match the search criteria, then match jobs with a docker_image_locator in that list. |
docker_image_locator not in docker SEARCH | Negate the in docker filter. |
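For example, a filters argument that combines these specialized operators might look like the sketch below; the script name, refs, and image name are placeholders.
```python
# Sketch: a filters array combining the specialized Job filter operators above.
# The script name, refs, and image name are placeholders.
filters = [
    ["script", "=", "hash.py"],
    ["repository", "=", "you/you"],
    ["script_version", "in git", "earlier_version_tag"],          # commits reachable from this ref
    ["script_version", "not in git", "blacklisted_version_tag"],  # ...excluding these commits
    ["docker_image_locator", "in docker", "arvados/jobs:latest"],
]
```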
Because Arvados records the exact version of the script, input parameters, and runtime environment that was used to run the job, if the script is deterministic (meaning that the same code version is guaranteed to produce the same outputs from the same inputs) then it is possible to re-use the results of past jobs, and avoid re-running the computation to save time. Arvados uses the following algorithm to determine if a past job can be re-used:
1. If find_or_create is false or omitted, create a new job and skip the rest of these steps.
2. If filters are specified, find jobs that match those filters. If any filters are given, there must be at least one filter on the repository attribute and one on the script attribute; otherwise an error is returned.
3. If filters are not specified, find jobs with the same repository and script, with a script_version between minimum_script_version and script_version inclusively (excluding exclude_script_versions), and a docker_image_locator with the latest Collection that matches the submitted job’s docker_image constraint. If the submitted job includes an arvados_sdk_version constraint, jobs must have an arvados_sdk_version between that refspec and HEAD to be found. This form is deprecated: use filters instead.
Examples:
Run the script “crunch_scripts/hash.py” in the repository “you/you” using the “main” commit. Arvados should re-use a previous job if the script_version of the previous job is the same as the current “main” commit. This works irrespective of whether the previous job was submitted using the name “main”, a different branch name or tag indicating the same commit, a SHA-1 commit hash, etc.
{ "job": { "script": "hash.py", "repository": "you/you", "script_version": "main", "script_parameters": { "input": "c1bad4b39ca5a924e481008009d94e32+210" } }, "find_or_create": true }
Run using exactly the version “d00220fb38d4b85ca8fc28a8151702a2b9d1dec5”. Arvados should re-use a previous job if the “script_version” of that job is also “d00220fb38d4b85ca8fc28a8151702a2b9d1dec5”.
{ "job": { "script": "hash.py", "repository": "you/you", "script_version": "d00220fb38d4b85ca8fc28a8151702a2b9d1dec5", "script_parameters": { "input": "c1bad4b39ca5a924e481008009d94e32+210" } }, "find_or_create": true }
Arvados should re-use a previous job if the “script_version” of the previous job is between “earlier_version_tag” and the “main” commit (inclusive), but not the commit indicated by “blacklisted_version_tag”. If there are no previous jobs matching these criteria, run the job using the “main” commit.
{ "job": { "script": "hash.py", "repository": "you/you", "script_version": "main", "script_parameters": { "input": "c1bad4b39ca5a924e481008009d94e32+210" } }, "minimum_script_version": "earlier_version_tag", "exclude_script_versions": ["blacklisted_version_tag"], "find_or_create": true }
The same behavior, using filters:
{ "job": { "script": "hash.py", "repository": "you/you", "script_version": "main", "script_parameters": { "input": "c1bad4b39ca5a924e481008009d94e32+210" } }, "filters": [["script", "=", "hash.py"], ["repository", "=", "you/you"], ["script_version", "in git", "earlier_version_tag"], ["script_version", "not in git", "blacklisted_version_tag"]], "find_or_create": true }
Run the script “crunch_scripts/monte-carlo.py” in the repository “you/you” using the current “main” commit. Because it is marked as “nondeterministic”, this job will not be considered as a suitable candidate for future job submissions that use the “find_or_create” feature.
{ "job": { "script": "monte-carlo.py", "repository": "you/you", "script_version": "main", "nondeterministic": true, "script_parameters": { "input": "c1bad4b39ca5a924e481008009d94e32+210" } } }
Delete an existing Job.
Arguments:
Argument | Type | Description | Location | Example |
---|---|---|---|---|
uuid | string | The UUID of the Job in question. | path |
Gets a Job’s metadata by UUID.
Arguments:
Argument | Type | Description | Location | Example |
---|---|---|---|---|
uuid | string | The UUID of the Job in question. | path |
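A minimal sketch of the get call from Python, assuming a bearer-style Authorization header and using the example UUID from the top of this page as a placeholder:
```python
# Sketch: fetch a single Job record by UUID. Assumes a bearer-style token
# header; this legacy endpoint may be disabled on newer clusters.
import os
import requests

API_BASE = "https://pirca.arvadosapi.com/arvados/v1"
TOKEN = os.environ["ARVADOS_API_TOKEN"]

uuid = "zzzzz-8i9sb-0123456789abcde"   # placeholder UUID
resp = requests.get(
    f"{API_BASE}/jobs/{uuid}",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
job = resp.json()
print(job.get("running"), job.get("success"), job.get("output"))
```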
List jobs.
See common resource list method.
See the create method documentation for more information about Job-specific filters.
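A sketch of a list call that applies a Job-specific filter is shown below. It passes the filters as a JSON-encoded query parameter, which is a common convention for Arvados list calls; treat the exact encoding and the token header scheme as assumptions.
```python
# Sketch: list successfully completed jobs for a given script, passing filters
# as a JSON-encoded query parameter. Encoding and token header are assumptions.
import json
import os
import requests

API_BASE = "https://pirca.arvadosapi.com/arvados/v1"
TOKEN = os.environ["ARVADOS_API_TOKEN"]

filters = [["script", "=", "hash.py"], ["success", "=", True]]
resp = requests.get(
    f"{API_BASE}/jobs",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"filters": json.dumps(filters), "limit": 10},
)
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item["uuid"], item.get("output"))
```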
log_tail_follow: fetch the tail of a Job’s log.
Arguments:
Argument | Type | Description | Location | Example |
---|---|---|---|---|
uuid | string | | path | |
buffer_size | integer (default 8192) | | query | |
Get the current job queue.
Arguments:
Argument | Type | Description | Location | Example |
---|---|---|---|---|
order | string | | query | |
filters | array | | query | |
This method is equivalent to the list method, except that the results are restricted to queued jobs (i.e., jobs that have not yet been started or cancelled) and order defaults to queue priority.
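A sketch of fetching the queue from Python follows. It assumes the queue method is exposed at GET .../jobs/queue and that a bearer-style Authorization header is accepted; both are assumptions to verify against your cluster.
```python
# Sketch: fetch the current job queue. Assumes the queue method is exposed at
# GET /arvados/v1/jobs/queue and a bearer-style token header; the legacy
# endpoint may be disabled on newer clusters.
import os
import requests

API_BASE = "https://pirca.arvadosapi.com/arvados/v1"
TOKEN = os.environ["ARVADOS_API_TOKEN"]

resp = requests.get(
    f"{API_BASE}/jobs/queue",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item["uuid"], item.get("priority"))
```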
Update attributes of an existing Job.
Arguments:
Argument | Type | Description | Location | Example |
---|---|---|---|---|
uuid | string | The UUID of the Job in question. | path | |
job | object | | query | |
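A sketch of an update call from Python, assuming the usual REST convention of PUT .../jobs/{uuid} with the changed attributes under a job key; the attribute updated here is only an example.
```python
# Sketch: update attributes of an existing Job. Assumes the conventional
# PUT /arvados/v1/jobs/{uuid} route with changed attributes under "job",
# and a bearer-style token header; verify both against your cluster.
import os
import requests

API_BASE = "https://pirca.arvadosapi.com/arvados/v1"
TOKEN = os.environ["ARVADOS_API_TOKEN"]

uuid = "zzzzz-8i9sb-0123456789abcde"   # placeholder UUID
resp = requests.put(
    f"{API_BASE}/jobs/{uuid}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    # Mark the job as nondeterministic so it is not considered for re-use.
    json={"job": {"nondeterministic": True}},
)
resp.raise_for_status()
print(resp.json().get("nondeterministic"))
```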