Most users on the PW platform will work exclusively with elastic clusters. These clusters are made up of a controller node and compute nodes, with the controller delegating tasks to the compute nodes.
Clusters have several adjustable parameters for both controller and compute nodes, such as compute instance types and node count. Additionally, compute nodes are grouped together in partitions, which have their own settings. For more information, see Partition Settings below.
The PW platform also supports an optional parallel filesystem, Lustre. For more information on setting up Lustre for your account, see Configuring Storage.
Accessing Configuration Settings
You can access a resource’s configuration settings from the Home page. Navigate to the Computing Resources module and click the gear icon for the resource you want to configure.
Alternatively, you can navigate to the Compute page and click the name of the resource you want to configure.
About the Resource Configuration Page
When you navigate to a cluster's configuration settings, there are four tabs for customization.
By default, you’ll see the Sessions tab when you navigate to configuration settings. This tab shows your previous cluster sessions as well as provisioning and deletion logs.
In the Sessions module of this tab, you’ll also be able to see sessions for any attached ephemeral storage options. If multiple ephemeral storage options are attached to the cluster, you’ll see a dropdown to select which ephemeral storage logs you’d like to see. The deletion logs for ephemeral storage options are combined with the cluster deletion logs.
For more information, please see About Storage Types.
Here, you can adjust your cluster's parameters. For more information, see General Settings below.
This tab shows the code version of your resource’s configuration settings. Here, you can manually adjust the parameters seen in the Definition tab.
This tab shows the resource name, description, display name, and tags that were entered when the resource was created. Here, you can adjust those settings and upload a new thumbnail for the cluster.
You can also enable automated alert emails from firstname.lastname@example.org by clicking the Enable run time alert toggle button. The field for Alert Interval (Hours) will appear, and the value you enter here determines how often you'll receive run time alerts.
This tab lets you share your resource with other users in your organization who are assigned to the same group.
Please note that your group name(s) will be specific to your organization. For more information, see Group below.
Clusters will typically have these settings in the Definition tab of the configuration page. If you're using an existing on-premise cluster, see Existing Cluster Settings below.
Use this dropdown menu to select the account that your organization uses for a specific cloud service provider.
By default, this menu will show the resource account that your organization has selected for that type of cluster (for example, the screenshot above shows a Google cluster, and Resource Account was automatically populated with the Pworks GCP account).
Use this dropdown menu to select the group name that your organization uses to allocate costs. This menu is especially important if your organization is running multiple groups simultaneously.
If you’re not sure which group to select, you can contact us or your organization’s PW platform administrator.
Use this toggle button to automatically create a home directory for all users in the selected group.
This option is different from the Sharing tab, which allows sharing the resource definition with other users in your organization.
Access Public Key
Use this text box to add an SSH key so you can access the cluster from a remote device, like your local laptop.
Please note that keys must be in OpenSSH format and you should only enter a public key, not a private key.
For more information on how to use a public key, see Logging In to the Controller.
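If you don’t already have a key pair, you can generate one on your local machine. This is a sketch; the file name pw_cluster is an arbitrary example, and your organization may have its own key policy.

```shell
# Sketch: generate an OpenSSH key pair locally. The file name
# "pw_cluster" is an arbitrary example, not a platform requirement.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/pw_cluster -N "" -q
# Paste the contents of the .pub file (and only the .pub file)
# into the Access Public Key text box.
cat ~/.ssh/pw_cluster.pub
```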
Optionally, you can set scripts to execute when you start a cluster.
Use this text box to set a script that executes once a controller node has started. For example, you can set files to automatically move into a specific folder.
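As a sketch of the file-moving example above, a startup script might look like the following. The directory names and the .dat extension are placeholders, not platform defaults.

```shell
#!/bin/sh
# Hypothetical startup script: move staged input files into a project
# folder once the controller is up. All paths here are placeholders.
STAGE_DIR="${STAGE_DIR:-$HOME/staging}"
DEST_DIR="${DEST_DIR:-$HOME/project/inputs}"
mkdir -p "$DEST_DIR"
# Move any staged .dat files; do nothing if there are none.
if [ -d "$STAGE_DIR" ]; then
  find "$STAGE_DIR" -maxdepth 1 -name '*.dat' -exec mv {} "$DEST_DIR"/ \;
fi
```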
Use this text box to set a script that runs a health check on a controller node. When the script is done running, you’ll see any error codes in red or an exit code of 0 in green if there are no errors.
For more information, see Health Checks (coming soon).
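For illustration, a minimal health check script might verify disk capacity and report a nonzero code on failure. The 90% threshold and the check itself are arbitrary assumptions; adapt them to your cluster.

```shell
#!/bin/sh
# Hypothetical health check: flag the node if the root filesystem is
# more than 90% full. The threshold is an arbitrary example value.
usage=$(df -P / | awk 'NR==2 { gsub("%", "", $5); print $5 }')
status=0
if [ "$usage" -ge 90 ]; then
  echo "WARN: root filesystem at ${usage}% capacity" >&2
  status=1
fi
# A real health check script would end with: exit "$status"
echo "health check status: $status"
```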
These settings define the configuration for the controller node, such as region, instance type, and OS image. Some settings will differ depending on which type of resource you’re using. For more information, see CSP-Specific Settings below.
Use this dropdown menu to select the region that your cluster will deploy computing resources into.
A region represents a geographic area.
Use this dropdown menu to select the zone to use for the controller.
A zone refers to an isolated location inside a region.
Azure clusters do not have a Zone menu.
Use this dropdown menu to select the instance type of the controller. The instance type determines the CPUs and amount of memory available on the machine. Certain instance types may also have specialty hardware, such as GPUs or low-latency networking options.
For more information about instance types and what they mean, please see Choosing Instance Types.
Use this dropdown menu to select the operating system (OS) image for the cluster's controller node. We recommend using the latest version because this will ensure you have the most up-to-date software on your cluster; the latest image version includes OS updates and software required to connect to the PW platform.
Image Disk Name (Fixed)
If your organization uses a snapshot of a disk, this field will identify that snapshot.
For example, your organization may have specific applications that users need to complete their work. Your administrator may create a snapshot of a disk to make those apps available to users whenever a cluster starts, and the name of that snapshot would be in Image Disk Name.
Please note that if you make any changes to this directory while on the cluster, those changes will be lost when you turn the cluster off. Your changes will not affect the snapshot, or other users’ work.
Use this field to enter the number of image disks you’ll need for the cluster. Typically, you’ll either enter 1 if you need the directory from Image Disk Name or 0 if you would like to disable the image disk.
Image Disk Size GB
Use this field to enter the amount of storage on your image disk. The size depends on the size of the snapshot, and should be provided by your organization's administrator.
You can create partitions in clusters to send your work to differently configured sets of worker nodes. Partitions are especially useful if you’re working on a project that needs more or fewer nodes for specific tasks (for example, if you were running a simulation model and only a small dataset required twice the amount of GPU power to render properly).
You must have at least one partition in your cluster.
If you click + Add Partition, a list of new settings will expand. Typically, a partition will have the following configuration options. Some settings will differ depending on which type of resource you’re using. For more information, see CSP-Specific Settings below.
Use this field to name your partition. Be sure to use a unique name for each partition you create. Your partition should never be named
Use this dropdown menu to select the configuration of the partition. These options work in the same way that the controller instance types do.
Use this field to enter the maximum number of nodes in a partition.
Use this toggle button to specify whether a partition is the default location for running jobs. For more information on running jobs on specific partitions, see Submitting Jobs.
This feature is important if you create multiple partitions. If you only create one partition, it will automatically be set to Default and cannot be changed, as shown in the screenshot above.
Use this toggle button to specify whether a partition is a spot instance. Spot instances can be cost effective because they make use of resources that are already available but currently unused.
However, spot instances can be disrupted because another user can take over that available resource at any time. For this reason, use spot instances at your own risk.
Use this dropdown menu to select the operating system image for the partition. We recommend using the latest version.
Use this dropdown menu to select the zone within your selected region.
You can configure your partition to run in a different zone than your controller node. Selecting different zones on multiple partitions increases the chance provisioned resources will be available from the cloud provider. There is a performance penalty if compute nodes need to communicate across zones.
The PW platform uses Slurm to manage jobs on controller and compute nodes. The settings below determine how Slurm behaves for your cluster's nodes.
Please note that numerical values you enter in these fields are measured in seconds.
Use this field to set how long Slurm will wait before shutting down idle nodes. This field is set to 300 by default.
Use this field to set the maximum amount of time Slurm will try to start nodes. If the nodes don’t start by the end of the set time, Slurm will end the initialization attempt. This field is set to 1200 by default.
Use this field to set how long Slurm will wait to make nodes available again after shutting them down. This field is set to 300 by default.
Return To Service
Use this dropdown menu to select when down nodes are returned to service.
The Non Responsive option means that down nodes will become available only if they were set to down because they were non-responsive.
The Any Reason option means that down nodes will become available if they were set to down for any reason, including low memory, an unexpected reboot, or being non-responsive.
This field is set to Non Responsive by default.
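Because the platform uses Slurm, these fields likely correspond to standard slurm.conf parameters, as sketched below. This mapping is an assumption for illustration; the platform manages the configuration file for you.

```
SuspendTime=300      # seconds a node sits idle before Slurm shuts it down
ResumeTimeout=1200   # max seconds Slurm waits for a node to start
SuspendTimeout=300   # seconds before a shut-down node can be made available again
ReturnToService=1    # 1 = non-responsive only; 2 = any reason
```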
Attached Storage Settings
Use this section to attach any of your configured storage options. For more information, please see Attaching Storage.
Each cloud service provider (CSP) builds and configures their resources differently. Clusters on the PW platform have settings that correspond to each CSP’s model of cloud services. The CSP-specific parameters are outlined below.
Please note that these CSP-specific settings will also appear as options inside the partition settings on clusters.
Use this toggle button to enable Elastic Fabric Adapter (EFA), which improves inter-instance network performance. EFA is useful if you need to scale HPC or machine-learning applications to thousands of CPUs or GPUs.
Please note that EFA is not supported on all instance types.
For more information and a list of supported instance types, see the AWS documentation on EFA.
Use this field to enter the name of a Network File System (NFS), which is an existing file system on an external device that’s available for read and/or write access on your cluster.
If you want to set up an NFS, please contact us or your PW platform administrator.
Use this field to enter the size of your NFS.
Please note that the values for NFS Size and Image Disk Size must be the same.
Use this toggle button to enable accelerated networking, which improves networking performance for large workloads on multiple cloud clusters.
For more information, see the Azure documentation on accelerated networking.
Use this toggle button to enable Google Virtual Network Interface Card (gVNIC), which supports higher network bandwidths of 50–100 Gbps.
Please note that gVNIC is not supported on all instance types.
For more information and a list of supported instance types, see the Google documentation on gVNIC.
Use this toggle button to enable Tier_1, which increases maximum egress bandwidth (upload speed) to 50–100 Gbps, depending on the size of the instance. If Tier_1 is off, the egress bandwidth will range from 10–32 Gbps.
Please note that Tier_1 is only supported if gVNIC is also active. If you try to start Tier_1 by itself, the PW platform will display the error message Tier_1 is only supported if gVNIC is on.
For more information, see the Google documentation on Tier_1.
Migrate On Maintenance
This toggle button enables live migration whenever the virtual machine’s host undergoes maintenance, meaning that Google will migrate the virtual machine to another host without any downtime.
Please note that GPU and spot instances cannot be live migrated. When supported, we recommend turning this feature on.
For more information, see the Google documentation on live migration.
Existing Cluster Settings
Typically when you create an existing cluster, you’ll be connecting to an on-premise cluster associated with your organization. The settings that are specific to this type of cluster are outlined below. If you’re unsure what to choose for these options, contact your organization’s PW platform administrator.
General Settings for Existing Clusters
Use this dropdown menu to select how the PW platform will connect to the existing cluster.
The Default option means that the platform will try to SSH to the cluster by using only your PW account’s SSH key, which is stored in ~/.ssh/pw_id_rsa. For more information about your PW SSH key, see our documentation.
The PIN or Password Only option creates a dialog box when you start the cluster, where you can enter the password for your user account (the account you define in Username). This option means that the platform will connect to the cluster using only these credentials.
Use this toggle button if you’re connecting to a cluster that has MFA enabled. When you turn on the resource and MFA is enabled, a dialog box will appear, prompting you to enter your MFA code.
This button is different from the options in Resource Account; if you toggle MFA on, the platform will connect to the existing cluster using both the SSH key in your PW account and your MFA credentials.
Use this toggle button if you’re connecting to a cluster that has a jump node enabled. A jump node—also called a host node, bastion node, or login node—is a high-security server that allows a user to access a private machine or network.
If you enable this feature, two new fields will appear for Jump Node User and Jump Node Host. Your organization will have these credentials if you need them.
Use this dropdown menu to select the group name that your organization uses to allocate costs. This menu is especially important if your organization uses multiple groups.
If you’re not sure which group to select, you can contact us or your organization’s PW platform administrator.
Cluster Configuration Settings for Existing Clusters
Use this field to enter the username assigned to you for this cluster.
On existing clusters, you can enter __USER__ into any box and the PW platform will automatically substitute your username for that field. For example, if your username is jdoe, the PW platform will automatically substitute jdoe in the Working Directory field.
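The substitution behaves like a simple string replacement, which the sketch below mimics with sed. The template path is a made-up example, not a platform default.

```shell
# Mimic the platform's __USER__ substitution with a plain string
# replacement. The template path is a hypothetical example.
template='/home/__USER__/work'
resolved=$(printf '%s\n' "$template" | sed 's/__USER__/jdoe/')
echo "$resolved"   # prints /home/jdoe/work
```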
Cluster Login Node
Use this field to enter the IP address or host name of the cluster.
Use this field to enter the directory you’ll be accessing while completing work on this cluster. By default, this field is set to
Use this field to enter the maximum number of compute nodes you need to complete your work.
Internal IP/Network Name (Optional)
Use this field to specify the internal IP address or network name that the compute nodes use to communicate with the controller node(s).
You only have to use this feature if your organization has configured the cluster’s compute nodes to send information to an IP address other than the controller’s default IP address. You can run the command ifconfig on a cluster after logging in to the controller to see all of the available IP addresses.
Use this dropdown menu to select the type of job scheduler the cluster uses. Currently, the Existing Cluster resource type supports Slurm and PBS.
If you’re simply testing resources or if your organization has not provided specific configuration settings for your group, we recommend using the default configuration settings because they allow resources to run most projects with optimal performance.
To set or reset a resource to the default pool configuration, navigate to the Compute page and click the name of the resource you want to edit. The configuration page will open. Click the Restore Configuration button.
After you click Restore Configuration, a dialog box will appear with the message Restore configuration to default values?
From the dropdown menu in the dialog box, click the configuration labeled Benchmark. Click Restore. Click Save Resource.
Existing clusters do not have a Restore Configuration button.
When you use the default configuration settings, the cluster will automatically be equipped with a Lustre filesystem. Lustre is powerful, but generally increases costs significantly on Google and Azure clusters. Please feel free to turn Lustre off if you don’t need it, particularly if you’re transferring small files or simply testing a resource.