· Raijin, at National Computational Infrastructure (NCI), ANU
Raijin, named after the Shinto god of thunder, lightning and storms, is a Fujitsu Primergy high-performance, distributed-memory cluster. It was procured with funding from the Australian Government, through its Super Science Initiative, under the EIF Climate HPC Centre Funding Agreement between the Commonwealth of Australia and the Australian National University.
The system was installed in 2012 and entered production use in June 2013.
The unit of shared-memory parallelism is the node, which comprises dual 8-core Intel Xeon (Sandy Bridge, 2.6 GHz) processors, i.e., 16 cores per node.
The memory specification is heterogeneous across the nodes, so that the configuration can accommodate the requirements of most applications while also providing for large-memory jobs.
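Because the node is the unit of shared-memory parallelism, resource requests are most naturally sized in whole 16-core nodes. The Python sketch below illustrates that sizing arithmetic; the memory figures passed to it are placeholders for whichever per-node memory tier a job lands on, not Raijin's actual specification.

import math

CORES_PER_NODE = 16  # dual 8-core Sandy Bridge Xeons per Raijin node

def nodes_for(ncores):
    # Round a core request up to whole nodes, since the node is the
    # unit of shared-memory parallelism.
    return math.ceil(ncores / CORES_PER_NODE)

def fits_on_node(per_core_mem_gb, node_mem_gb):
    # Check whether a per-core memory budget fits on a node of the given
    # size (node_mem_gb is a placeholder value, not a Raijin spec).
    return per_core_mem_gb * CORES_PER_NODE <= node_mem_gb

print(nodes_for(100))           # 100 processes -> 7 nodes (112 cores)
print(fits_on_node(2.0, 32.0))  # 2 GB/core on a hypothetical 32 GB node -> True
print(fits_on_node(4.0, 32.0))  # 4 GB/core on a hypothetical 32 GB node -> False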
More information about Raijin: http://nci.org.au/systems-services/national-facility/peak-system/raijin/
· Orange, at Intersect Australia Ltd, Sydney, NSW
Orange was commissioned in early 2013. The SGI 30+ TFLOPS distributed-memory cluster features 103 cluster nodes with 1660 cores powered by the Intel® Xeon® E5-2600 processor series. It has 200 TB of local scratch disk space and 101 TB of usable high-speed shared storage in a parallel file system.
The SGI HPC cluster comprises 13 large-memory (256 GB) compute nodes and 90 normal (64 GB) compute nodes, each with dual Intel Xeon E5-2600 8-core processors. System software includes SGI Management Center, SGI Performance Suite, the PBS Pro scheduler and the SUSE® Linux Enterprise Server operating system.
The cluster nodes are connected with QDR InfiniBand® non-blocking interconnect technology. A single administration node and a system console are also provided. Storage consists of an SGI NAS storage server and a Panasas® ActiveStor™ PAS-12 parallel file system.
More information about Orange: http://www.intersect.org.au/time/orange-handbook
· Artemis, at the University of Sydney
Artemis was commissioned in April 2015. Its compute capacity, built on Intel Xeon E5-2680 v3 processors, is summarised in the table below:
Node Type                | Quantity | Cores Per Node | Total Cores
Standard Compute Node    | 56       | 24             | 1344
High Memory Compute Node | 2        | 24             | 48
GPU Compute Node         | 5        | 24             | 120
Total                    |          |                | 1512
Each standard compute node has 4 GB of RAM per core (96 GB per node) and 2 TB of disk storage. Each high-memory compute node has at least 512 GB of RAM and 4 TB of disk storage. Global data storage is at least 200 TB. All nodes and the data storage system are connected by a 56 Gb/s FDR non-blocking InfiniBand interconnect.
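As a quick cross-check of the figures above, a short Python tally of the node inventory (values copied from the table and the memory notes; the GPU-node RAM is not stated in this document):

# Artemis node inventory, taken from the table above.
nodes = {
    "standard":    {"count": 56, "cores": 24, "ram_gb": 4 * 24},  # 4 GB/core -> 96 GB
    "high_memory": {"count": 2,  "cores": 24, "ram_gb": 512},     # "at least 512 GB"
    "gpu":         {"count": 5,  "cores": 24, "ram_gb": None},    # not stated here
}

total_cores = sum(n["count"] * n["cores"] for n in nodes.values())
print(total_cores)  # 1512, matching the table's total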
Website for Artemis: http://sydney.edu.au/research_support/hpc/index.shtml
To be eligible to use Artemis:
- You must have a UniKey
- You must have an approved RDMP (research data management plan) in place
For more information about RDMPs, please go to:
http://sydney.edu.au/research_support/data/data-management-planning.shtml
The above link also leads to the page where you can create your RDMP. Once your RDMP is approved, ICT will set up an Artemis account for you.
· Silica, at the Faculty of Science, University of Sydney
Silica is an SGI Altix XE1200 cluster system with 74 compute nodes, each containing two quad-core Intel Xeon processors (i.e., 8 cores per node running at 2.66 GHz) and 16 GB of memory (RAM). It has 600 cores in total. Apart from the 250 GB local hard disk on each node, Silica has 4.5 TB of global storage for users to store their data. Silica was commissioned in 2008.
Silica's users come from schools in the Faculty of Science; the School of Physics is the biggest sponsor of Silica. The process for applying for an account is described at this URL:
http://www.physics.usyd.edu.au/~wheat/silica.html
Details about Silica, such as its usage guide and installed software, can be found at this URL:
http://www.hpcf.chem.usyd.edu.au/index.html
· physics cluster, headnode in the School of Physics
The cluster was commissioned in mid-2011, and the system was upgraded to RHEL 7 in August 2015. It runs 21 Dell PowerEdge R410 nodes with 12 × 21 = 252 cores powered by Intel Xeon X5650 processors over a private Gigabit network. Each node has 32 GB of memory and 32 GB of disk space. The cluster is open to all users in the School; if you want an account, lodge a request with IT support. The webpage for more information is
http://www.physics.usyd.edu.au/restricted/itsupport/headnode/
and the user’s manual is linked from there.
If you need further help with these HPCFs, please contact Sue Yang (email: xue.yang@sydney.edu.au, phone: 93516081).