# HPC Cluster Contribute Nodes

The Tufts HPC cluster operates under a hybrid model with both public and contribute nodes. The university operates a public partition where the Slurm fair-share algorithm balances usage among all researchers. Additionally, researchers can purchase contribute nodes to which they receive priority access (see the example job script at the end of this page). This page lists the node types available for purchase as contribute nodes.

TTS Research Technology works to offer a selection of options that match faculty needs while keeping enough commonality across the cluster that the research community can use free resources and maximize utilization; a cluster of entirely unique nodes does not make a good HPC environment.

Contribute nodes are available for purchase four times per year. This helps us achieve the best price for equipment and allows us to manage cluster growth in a planned fashion. If you are contemplating purchasing equipment, please reach out to RT early in your planning process.

Please review the [access and lifecycle information for faculty purchased nodes](index.md#hpc-researcher-contribution-node) before submitting a purchase request.

Most current faculty contribute nodes are the **CPU, Standard** or **GPU, Standard** nodes.

**CPU Nodes:**

| **Name**      | **System Specifications** | **Approximate Cost [^1]** |
| ------------- | ------------------------- | ------------------------- |
| CPU, Small    | 2x 32 Cores<br>256GB RAM  | \$17,000 [^2]             |
| CPU, Standard | 2x 32 Cores<br>512GB RAM  | \$21,000 [^2]             |
| CPU, Large    | 2x 32 Cores<br>1TB RAM    | \$28,000                  |

**GPU Nodes:**

| **Name**      | **System Specifications**                        | **GPU Specifications** | **Approximate Cost [^1]** |
| ------------- | ------------------------------------------------ | ---------------------- | ------------------------- |
| GPU, Standard | 2x 32 Cores<br>512GB RAM                         | 4x L40S 48GB PCIe      | \$60,000                  |
| GPU, Large    | 2x 48 Cores<br>1.5TB RAM<br>24TB Local NVMe Disk | 8x H200 SXM 141GB      | \$350,000                 |

- TTS provides all the additional infrastructure, including data center space, networking, power/cooling, and operations.
- We use Intel CPUs and NVIDIA GPUs across the cluster to maintain code portability between nodes.
- All nodes are connected at 100 Gbps Ethernet.
- If you have specialized needs, such as InfiniBand networking or an alternate CPU architecture like ARM, please reach out to RT.

[^1]: Approximate prices are estimated from previous orders; server costs are driven by commodity prices and are volatile. However, prices can be expected to remain roughly proportional between node types.
[^2]: Price based on buying four nodes.
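
For reference, priority access to a contribute node is typically granted through a dedicated Slurm partition, so a job script targeting a contribute node differs from a public-partition job mainly in its `--partition` line. The sketch below illustrates this; the partition name `mylab` and the GPU type label `l40s` are placeholders, as actual names are assigned by RT when a node is deployed.

```bash
#!/bin/bash
# Illustrative Slurm job script. The partition name "mylab" and the GPU
# type label "l40s" are placeholders -- RT assigns the actual names.

#SBATCH --job-name=example
#SBATCH --partition=mylab      # contribute partition; use the public partition otherwise
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --mem=64G
#SBATCH --gres=gpu:l40s:1      # e.g., one L40S GPU on a "GPU, Standard" node
#SBATCH --time=04:00:00

srun ./my_program
```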