See the building workflows page if you have not already. If you are looking for the documentation for a specific action, find the name of the action from the sidebar on the right.
Certain operations that are used regularly in workflows have been abstracted into actions, which can be executed in a workflow using the following syntax:

```yaml
jobs:
  job-name:
    steps:
      - uses: parallelworks/<name of action>
        with:
          action_input_1: value_1
          action_input_2: value_2
```

Note that the valid action inputs passed to the action through the `with` property depend on the action. Below is a list of actions and their input fields.
## checkout

Checks out a git repo. The example below is the simplest possible usage:

```yaml
jobs:
  job-name:
    steps:
      - uses: parallelworks/checkout
        with:
          repo: https://github.com/parallelworks/interactive_session.git
          branch: main
```

If you want to check out only part of a repo, you can define `sparse_checkout`:
```yaml
jobs:
  job-name:
    steps:
      - uses: parallelworks/checkout
        with:
          repo: https://github.com/parallelworks/interactive_session.git
          branch: main
          sparse_checkout:
            - utils
            - platforms
```

To clone the repo to the file system of a cluster, use the `ssh` field at the job or step level to specify the cluster:
```yaml
jobs:
  job-name:
    steps:
      - uses: parallelworks/checkout
        with:
          repo: https://github.com/parallelworks/interactive_session.git
          branch: main
        ssh:
          remoteHost: ${{ inputs.cluster.ip }}

on:
  execute:
    inputs:
      cluster:
        type: compute-clusters
        optional: false
```

Inputs:
- `repo` string: Specifies the git repo to be cloned. Required.
- `branch` string: Specifies the branch to be cloned. Required.
- `sparse_checkout` string[]: Only clone files that match the patterns defined in the array.
- `path` string: Where to clone the git repo to. Defaults to the workflow run directory.

## cancel-jobs

Cancels a workflow run, or a set of jobs in a workflow run. Here is an example:
```yaml
permissions:
  - '*'

jobs:
  main:
    steps:
      - run: sleep 2
      - uses: parallelworks/cancel-jobs
        with:
          jobs:
            - j1
  j1:
    steps:
      - run: sleep 200
    early-cancel: any-job-failed
  j2:
    steps:
      - run: sleep 200
    early-cancel: any-job-failed
```

Note that the running workflow does not have to be the same as the workflow to be canceled.
Inputs:

- `jobs` string[]: Specifies the jobs to be canceled. Defaults to all jobs.
- `workflow` string: Specifies the workflow to be canceled. Defaults to the current workflow. Should be used only if you want to cancel a run of another workflow (not the workflow in which the cancel action is defined).
- `run` int: Specifies the run to be canceled. Defaults to the current run if `workflow` is undefined; required if `workflow` is defined.
- `slug` string: Specifies the slug of the run to be canceled; an alternative to `workflow` and `run`. Defaults to the current slug. Should be used only if you want to cancel a different run than the one running the cancel action.

Note that the behavior differs slightly depending on whether `slug` or `workflow` is passed. This is deliberate, to ensure support for subworkflows. If you want to cancel jobs in a workflow that is to be used as a subworkflow in another workflow, make sure that you do not pass `slug` or `workflow` as an argument using `with`. This ensures that the action correctly cancels the job or jobs in the subworkflow rather than attempting to cancel the jobs in the superworkflow. In general, the `slug` or `workflow` field should only be used when the desired value is not the default.
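For instance, canceling specific jobs in a run of a different workflow might look like the following sketch. The workflow name `other-workflow`, the run number, and the job name `j1` are hypothetical placeholders, not defaults:

```yaml
permissions:
  - '*'

jobs:
  main:
    steps:
      - uses: parallelworks/cancel-jobs
        with:
          # Hypothetical target: run 3 of a workflow named "other-workflow".
          workflow: other-workflow
          run: 3
          jobs:
            - j1
```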
## update-session

Updates a session with information. If you want to use sessions, this action is always necessary, because sessions have no initial information about what to display. See Building Sessions for example usage.
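As a minimal sketch, a link-type session could be updated like this. The session name `session` must exist in the workflow's `sessions` property, and the url shown is a placeholder for illustration:

```yaml
sessions:
  session:

jobs:
  main:
    steps:
      - uses: parallelworks/update-session
        with:
          name: ${{ sessions.session }}
          type: link
          # Placeholder url; point this at the service the session should open.
          url: https://example.com
```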
Shared Inputs:

- `name` string: Name of the session to update. Must be present in the `sessions` property of the YAML. Required.
- `type` string: Type of session; must be either 'link' or 'tunnel'. Default is tunnel.
- `target` string: Id of the target compute cluster (get it using an expression like `${{ inputs.cluster_input_name.id }}`). Defaults to the user workspace.
- `status` string: Status of the session. For tunnel sessions, defaults to 'creating' (or 'running' if the target is the user workspace). For link sessions, defaults to 'running'. Can be set to 'running' to indicate the session is ready immediately, but this is optional because the platform detects readiness automatically.

Tunnel Inputs:

- `remotePort` int: Port to read on the target. Required for tunnel sessions.
- `remoteHost` string: Host of the target. Default is localhost.
- `slug` string: Added at the end of the session url. Defaults to the empty string.
- `localPort` int: Port on the local machine. Defaults to a random open port.
- `openAI` boolean: Whether the session is an OpenAI session. Default is false.

Link Inputs:

- `url` string: The url of the link for the session.

## scheduler-agent

You can use the scheduler-agent action to provision a node and start an ssh server so that you can run commands on it through ssh. Note that you must use a cluster with a partition for this to work properly. See Building Sessions for example usage.
Inputs:

- `wait` boolean: Whether to wait for the node to provision. Defaults to true.
- `scheduler-type` string: Use this scheduler. Defaults to slurm; options are slurm and pbs.
- `scheduler-flags` object: Use these flags for the sbatch command. Defaults to none. You can find a list of flags here under job submission for slurm and here for pbs.
- `script-headers` string: Additional lines to add to the top of the bash script being submitted via sbatch or qsub (depending on your scheduler). Useful if you want to allow users to add their own #SBATCH/PBS comment flags at will.

## wait-for-agent

This action should only be used if you already ran scheduler-agent with `wait: false` passed as an input to the `with` property, which can be a good idea if you want the workflow job to perform other operations while the scheduler-agent node is configuring.
```yaml
permissions:
  - '*'

sessions:
  session:

jobs:
  main:
    ssh:
      remoteHost: ${{ inputs.resource.ip }}
    steps:
      - uses: parallelworks/scheduler-agent
        id: slurmstep
        with:
          wait: false
      - run: echo "Perform some other operations here, like a checkout action or second scheduler-agent"
      - uses: parallelworks/wait-for-agent
        id: waitstep
        with:
          agentId: ${{ needs.main.steps.slurmstep.outputs.agentId }}
      - name: Get open port
        ssh:
          jumpNodeHost: ${{ inputs.resource.ip }}
          remoteHost: ${{ needs.main.steps.waitstep.outputs.remoteHost }}:${{ needs.main.steps.waitstep.outputs.sshPort }}
        run: |
          echo sessionPort="$(pw agent open-port)" >> $OUTPUTS
          cat $OUTPUTS
      - uses: parallelworks/update-session
        with:
          status: running
          name: ${{ sessions.session }}
          target: ${{ inputs.resource.id }}
          remoteHost: ${{ needs.main.steps.waitstep.outputs.remoteHost }}
          remotePort: ${{ needs.main.outputs.sessionPort }}
      - name: Serve port
        run: |
          cat << EOF > myServer.go
          package main

          import (
              "fmt"
              "net/http"
          )

          func hello(w http.ResponseWriter, req *http.Request) {
              fmt.Fprintf(w, "Hello from slurm agent server!\n")
          }

          func main() {
              http.HandleFunc("/", hello)
              http.ListenAndServe(":${{ needs.main.outputs.sessionPort }}", nil)
          }
          EOF
          go run myServer.go
        ssh:
          jumpNodeHost: ${{ inputs.resource.ip }}
          remoteHost: ${{ needs.main.steps.waitstep.outputs.remoteHost }}:${{ needs.main.steps.waitstep.outputs.sshPort }}

'on':
  execute:
    inputs:
      resource:
        label: Resource Target
        type: compute-clusters
        autoselect: true
        optional: false
```

Inputs:
- `agentId` string: The id of the agent to wait for. The output from the scheduler-agent action includes this field (see usage in the example above).
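To illustrate the scheduler-agent inputs described earlier, a step that sets the scheduler type and passes submission flags might look like the following sketch. The flag names and values here (`partition`, `time`, the job name) are hypothetical placeholders; valid keys depend on your scheduler's job-submission options:

```yaml
jobs:
  main:
    ssh:
      remoteHost: ${{ inputs.resource.ip }}
    steps:
      - uses: parallelworks/scheduler-agent
        with:
          scheduler-type: slurm
          # Hypothetical flags; consult your scheduler's job-submission
          # documentation (sbatch for slurm, qsub for pbs) for valid options.
          scheduler-flags:
            partition: compute
            time: "01:00:00"
          # Extra lines prepended to the submitted batch script.
          script-headers: |
            #SBATCH --job-name=agent-demo
```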