Our strength lies in our platform diversity. Whether your work requires large memory, many CPU cores, Linux, Microsoft Windows, or even auto-parallelization, we likely have a system to match your needs. Our resources also comply with FISMA standards.
Our first supercomputer came online in May 2009. Argon is a traditional Rocks 5.4 cluster running CentOS 5.5, with a theoretical peak performance of 5.45 Tflops (trillion floating-point operations per second). Argon is appropriate when a problem can be split across many processors and the researcher is comfortable writing or modifying code to take advantage of parallel processing.
- 512 computing cores (2.66 GHz) on 64 nodes
- 1 TB compute-node memory (16 GB per node)
- 64 TB local scratch (1 TB per node)
- 40 TB Lustre filesystem
- 20 Gbps DDR InfiniBand
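The "split the problem across many processors" pattern Argon favors can be sketched in miniature with Python's standard `multiprocessing` module. This is only an illustration of the idea; the work function and chunking scheme are hypothetical, and real Argon jobs would more likely use MPI to span nodes.

```python
# Minimal sketch of splitting one problem across independent workers.
# The task (sum of squares) is a hypothetical stand-in for real work.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker computes its share of the problem independently.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Carve the input into one strided slice per worker, fan the
    # slices out to a process pool, then combine the partial results.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(range(1000)))  # prints 332833500
```

The essential property, as the Argon description notes, is that the researcher must decide how to decompose the problem; the system does not do it automatically.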
Based on ScaleMP’s vSMP software, Boron runs CentOS 6.2, has a high memory-to-CPU ratio, and is ideal for researchers who are unfamiliar with writing parallel code but still need parallelization. The vSMP platform itself handles parallelization of conventional code, letting researchers spend more time on their actual research and less time debugging.
- 192 computing cores (3.2 GHz) on 16 nodes
- 768 GB memory (48 GB per node)
- 60 TB local scratch (4 TB per node)
- 40 Gbps QDR InfiniBand
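The appeal of a vSMP system like Boron is that ordinary shared-memory code sees one large machine, with no explicit message passing. A minimal sketch of that conventional style, using Python's standard thread pool (the work function is a hypothetical stand-in):

```python
# Conventional shared-memory code: threads divide the work while
# sharing a single address space -- on a vSMP platform that address
# space can transparently span many physical nodes.
from concurrent.futures import ThreadPoolExecutor

def count_matches(text, pattern):
    # An ordinary, serial-looking helper; nothing cluster-aware here.
    return text.count(pattern)

def threaded_counts(texts, pattern):
    # The pool distributes calls across threads; the programmer never
    # partitions data between nodes or passes messages.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda t: count_matches(t, pattern), texts))
```

For example, `threaded_counts(["abab", "bb", "aaa"], "a")` returns `[2, 0, 3]`, exactly as the serial version would.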
Our Microsoft Windows HPC Server 2008 R2 cluster addresses the needs of the many users who are bound to Microsoft platforms and applications. It is the first cluster of its kind within the Arkansas research community and gives the CRC a flexibility that sets us apart from other regional HPC facilities.
- 180 computing cores (2.66 GHz) on 15 nodes
- 720 GB memory (48 GB per node)
- 60 TB local scratch (4 TB per node)
- 20 Gbps DDR InfiniBand
Our X9000 storage array is available over both Ethernet and 40 Gbps QDR InfiniBand.
- 192 TB raw disk space
- 125 TB usable storage
Erbium, our big-memory machine from HP, was the most tightly integrated stand-alone computer in the world as of November 2012. It is the perfect solution for problems involving very large matrices, databases, or datasets. Loading an entire genome into memory at once, for example, enables an entirely new way of working in that field: researchers can focus on the subject at hand without the tedium normally associated with breaking up sequences, analyzing the partial sequences, and dovetailing the results. Likewise, entity resolution can be performed on truly large datasets without artificially splitting them, without the performance penalty of working in swap space, and without worrying about missing information that crosses dataset boundaries.
- 80 processors (160 with hyper-threading)
- 4 TB memory (in a single node)
- Intel MKL
- High-Performance Linpack (HPL)
- Gaussian G03 and G09
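The entity-resolution workflow described above can be sketched with Python's standard library. The records and the matching key below are hypothetical; the point is that when the entire dataset fits in RAM, as Erbium's 4 TB allows, one pass groups everything, with no chunking and no matches lost at chunk boundaries.

```python
# Sketch of whole-dataset, in-memory entity resolution: group all
# records sharing a key in a single pass. Feasible only when the
# full dataset fits in memory, which is Erbium's design point.
from collections import defaultdict

def resolve_entities(records, key=lambda r: r["email"].lower()):
    # One dictionary over the entire dataset; records that would land
    # in different chunks of a partitioned run still end up together.
    groups = defaultdict(list)
    for rec in records:
        groups[key(rec)].append(rec)
    return dict(groups)
```

For instance, records with emails `"A@x.com"` and `"a@x.com"` resolve to the same entity under this (hypothetical) case-insensitive key, even if they sit at opposite ends of the dataset.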