Our strength lies in our platform diversity. Whether you need large memory, many CPU cores, Linux, Microsoft Windows, or even auto-parallelization, we probably have a system to meet your needs. Our resources also comply with the FISMA standard.
Our first supercomputer was brought online in May 2009. Argon is a traditional Rocks 5.4 cluster running CentOS 5.5, with a theoretical peak performance of 5.45 Tflops (trillion floating-point operations per second). Argon is appropriate when a problem can be split across numerous processors and the researcher is comfortable writing or modifying code to take advantage of parallel processing.
- 512 compute cores (2.66 GHz) on 64 nodes
- 1 TB total compute-node memory (16 GB per node)
- 64 TB local scratch (1 TB per node)
- 40 TB Lustre filesystem
- 20 Gbps DDR InfiniBand
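The kind of problem decomposition Argon supports can be illustrated with a small, self-contained sketch. Python's standard multiprocessing module stands in here for a real cluster job (e.g. MPI), and the problem itself, summing a range of integers, is purely illustrative:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [lo, hi) -- one chunk of the overall problem."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Split [0, n) into roughly equal chunks and sum them on separate processes."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as sum(range(1_000_000)), but computed in parallel
    print(parallel_sum(1_000_000))
```

The essential step, whatever the language or library, is the same one the researcher must code by hand: divide the data into independent chunks, process each chunk on its own processor, and combine the partial results.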
The X9000 storage array is available over both Ethernet and 40 Gbps QDR InfiniBand:
- 192 TB raw disk space
- 125 TB usable storage
Erbium, our big-memory machine from HP, is the most tightly integrated stand-alone computer in the world as of November 2012! It is the perfect solution for problems involving very large matrices, databases, or datasets. For example, loading an entire genome into memory at once enables an entirely new way of working in that field: researchers can focus on the subject at hand without the tedium of breaking up sequences, analyzing the pieces, and stitching the results back together. Another example is performing entity resolution on truly large datasets without artificially splitting the data, without the performance penalties of working in swap space, and without worrying about matches that fall across the boundaries between chunks.
- 80 processor cores (160 with hyperthreading)
- 4 TB memory (in a single node)
- Intel MKL
- High-Performance Linpack (HPL)
- Gaussian G03 and G09
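A toy sketch of why whole-dataset, in-memory entity resolution helps. The records and the email-matching rule below are hypothetical, chosen only to show that a single in-memory pass never misses a match that would straddle a chunk boundary if the data had to be split:

```python
def resolve_entities(records):
    """Group records that share an email address -- a toy stand-in for
    entity resolution. With every record in memory at once, two records
    for the same entity are always grouped, even if chunked processing
    would have placed them in different pieces of the dataset."""
    groups = {}
    for rec in records:
        groups.setdefault(rec["email"].lower(), []).append(rec["name"])
    # Keep only keys matched by more than one record
    return {k: v for k, v in groups.items() if len(v) > 1}

records = [
    {"name": "A. Smith",    "email": "asmith@example.org"},
    {"name": "Alice Smith", "email": "ASmith@example.org"},
    {"name": "B. Jones",    "email": "bjones@example.org"},
]
duplicates = resolve_entities(records)
# duplicates: {"asmith@example.org": ["A. Smith", "Alice Smith"]}
```

On a 4 TB single-node machine the `records` list can be an entire production dataset rather than a sample, which is exactly the working style Erbium is meant to enable.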