Every organization on Civis Platform has dedicated compute partitions, or collections of Elastic Compute Cloud (EC2) instances, that execute jobs, notebooks, and services. You can view the status of your organization’s partitions in the Platform Usage Overview.
Default Configuration
The standard configuration for an organization is two compute partitions:
- Jobs Partition: runs all Python, R, and Container scripts, as well as all Custom scripts built on Python, R, and Container backing scripts.
- Notebooks/Services Partition: runs all notebooks (interactive coding environments in Python or R) and services (user-defined applications, such as an R Shiny application).
Custom Configuration
In some cases, Admins may want to consider a custom configuration for their organization’s compute partitions. For example:
- Providing dedicated compute for production pipelines, separate from ad-hoc work, to ensure mission-critical workflows remain available for the organization’s downstream work.
- Providing separate compute resources for different teams to better organize their work, triage conflicting priorities, and ensure no one team can consume all of an organization’s resources.
- Adding an additional instance size for more RAM- or CPU-intensive tasks.
Civis Platform supports custom partition configurations by setting default partitions for each job type at the group level, so different user groups on Platform can be configured to use different partitions. For further control, default partitions may be overridden on individual jobs. For more information, or to configure custom partitions for your organization, have your organization administrator reach out to Client Success.
Custom Instance Sizes
Civis’s default configuration includes instances with 4 CPUs and 16 GB of RAM. These instances can handle many standard tasks, but some workloads require additional memory or compute. Civis offers several instance size options that can be used in addition to your standard compute nodes; they are documented below, along with their corresponding normalized billing hours.
For GPU-enabled instances, only one workload can access the GPU at a time, so you may want to set your job’s resource options to the maximum. This ensures the entire instance is allocated to your workload and prevents conflicts.
| Instance Size | Normalized Hrs | CPU Cores (CPU units) | RAM (MB) | GPU |
| --- | --- | --- | --- | --- |
| xlarge | 1 | 4 (4096) | 15320 | |
| 2xlarge | 2 | 8 (8192) | 31641 | |
| 4xlarge | 4 | 16 (16384) | 64283 | |
| 12xlarge | 11.52 | 48 (49152) | 194850 | |
| 18xlarge | 15.3 | 72 (73728) | 145887 | |
| GPU 2xlarge | 6.06 | 8 (8192) | 31641 | 1 (24 GB VRAM) |
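If you configure jobs through the API, the snippet below is a minimal sketch using the Civis Python API client (civis) that creates a container script whose resource request matches a full GPU 2xlarge node from the table above. The script name, Docker command, and disk size are placeholder assumptions, and whether the job actually lands on a GPU partition depends on your organization’s partition configuration (group defaults or a per-job override, as described above).

```python
import civis

client = civis.APIClient()

# Request the full resources of a "GPU 2xlarge" node (see table above):
# 8 cores = 8192 CPU units and ~31641 MB of RAM, so no other workload
# shares the instance (and its single GPU) with this job.
script = client.scripts.post_containers(
    name="gpu-example-job",                              # hypothetical name
    docker_image_name="civisanalytics/datascience-python",
    docker_command="python train.py",                    # hypothetical command
    required_resources={
        "cpu": 8192,       # CPU units (1024 units = 1 core)
        "memory": 31641,   # MB
        "disk_space": 30,  # GB (placeholder)
    },
)

# Start a run of the container script.
run = client.scripts.post_containers_runs(script.id)
print(script.id, run.id)
```

Per the table above, each wall-clock hour of such a run would accrue roughly 6.06 normalized billing hours.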