High Performance Computing Facilities (HPCFs) available to school researchers at several levels are briefly summarized below.

 

·       Raijin, at the National Computational Infrastructure (NCI), ANU

 

Raijin, named after the Shinto god of thunder, lightning and storms, is a Fujitsu Primergy high-performance, distributed-memory cluster. It was procured with funding from the Australian Government, through its Super Science Initiative, under the EIF Climate HPC Centre Funding Agreement between the Commonwealth of Australia and the Australian National University.

The system was installed in 2012 and entered production use in June 2013.

 

The unit of shared-memory parallelism is the node, which comprises dual 8-core Intel Xeon (Sandy Bridge, 2.6 GHz) processors, i.e. 16 cores per node.

The memory specification across the nodes is heterogeneous, in order to provide a configuration capable of accommodating the requirements of most applications while also providing for large-memory jobs.
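As an illustration of the node-level shared-memory parallelism described above, the sketch below uses OpenMP to run one thread per core on a single 16-core node. It is only a minimal, hypothetical example (the file name and compiler invocation are assumptions, not taken from the NCI documentation); in practice, work on Raijin is submitted through its batch queueing system as described in the user guide linked below.

/* hello_omp.c -- a minimal, hypothetical sketch of shared-memory
 * parallelism within one 16-core Raijin node (not an official NCI example).
 * Compile with, e.g.:  gcc -fopenmp hello_omp.c -o hello_omp
 */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* One OpenMP thread per core of the dual 8-core Sandy Bridge node. */
    #pragma omp parallel num_threads(16)
    {
        printf("thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}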

More information about Raijin:

 

http://nci.org.au/systems-services/national-facility/peak-system/raijin/

 

·       Orange, at Intersect Australia Ltd, Sydney, NSW

Orange was commissioned in early 2013.  The 30+ TFLOPS SGI distributed-memory cluster features 103 cluster nodes with 1660 cores powered by the Intel® Xeon® E5-2600 processor series. It has 200TB of local scratch disk space and 101TB of usable high-speed shared storage in a parallel file system.

The SGI HPC cluster comprises 13 large-memory (256GB) compute nodes and 90 normal (64GB) compute nodes, each having dual Intel Xeon E5-2600 8-core processors. System software provided includes SGI Management Center, SGI Performance Suite, the PBS Pro scheduler and the SUSE® Linux Enterprise Server operating system.

The cluster nodes are connected with non-blocking QDR InfiniBand® interconnect technology.  A single administration node and a system console are also provided. Storage consists of an SGI NAS storage server and a Panasas® ActiveStor™ PAS-12 parallel file system.
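To complement the shared-memory sketch above, the following hypothetical C/MPI example illustrates the distributed-memory style of parallelism used on clusters such as Orange, where processes on different nodes communicate over the InfiniBand interconnect. The file name, compiler wrapper and launch command are assumptions; the actual module environment and PBS Pro submission procedure are covered in the Orange handbook linked below.

/* hello_mpi.c -- a minimal, hypothetical sketch of distributed-memory
 * parallelism across cluster nodes (not taken from the Orange handbook).
 * Compile with, e.g.:  mpicc hello_mpi.c -o hello_mpi
 * Launch with, e.g.:   mpirun -np 32 ./hello_mpi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes  */
    MPI_Get_processor_name(name, &name_len);  /* node this rank runs on     */

    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}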

More information about Orange: http://www.intersect.org.au/time/orange-handbook

 

·       Silica, at the Faculty of Science, University of Sydney

 

Silica is an SGI Altix XE1200 cluster system with 74 compute nodes, each containing two quad-core Intel Xeon processors (i.e. 8 cores per node running at 2.66 GHz) and 16GB of memory (RAM), for about 600 cores in total.  Apart from the 250GB local hard disc on each node, Silica has 4.5TB of global storage for users to store their data.  Silica was commissioned in 2008.

 

Silica users come from the schools in the Faculty of Science, with the School of Physics being the biggest sponsor of Silica.  The process for applying for an account can be found at this URL:

 

http://www.physics.usyd.edu.au/~wheat/silica.html

 

Details about Silica, such as its usage guide and installed software, can be found at this URL:

 

http://www.hpcf.chem.usyd.edu.au/index.html

 

·       physics cluster (headnode), in the School of Physics

 

The cluster was commissioned in mid-2011 and the system was upgraded to RHEL7 in August 2015.  It runs 21 Dell PowerEdge R410 nodes, giving 12 x 21 = 252 cores powered by Intel Xeon X5650 processors, over a Gigabit private network. Each node has 32GB of memory and 32GB of disk space.  The cluster is open to all users in the school; if you want an account, lodge a request with IT support.  The webpage for more information is

 

http://www.physics.usyd.edu.au/restricted/itsupport/headnode/ 

 

and the user’s manual is linked here.

 

If you need further help with these HPCFs, please contact

Sue Yang (email: xue.yang@sydney.edu.au and phone: 93516081).