Georgia Advanced Computing Resource Center: Useful Information for Proposal Writing

The text below is intended to provide most of the information your proposal will need if it references the GACRC for support of HPC resources or for consulting on the use of HPC resources in your proposed research. We do not expect that every proposal will need all of this information. Please contact GACRC staff (gacrc@uga.edu) for assistance in crafting language for your specific proposal, if required.

If your grant proposal includes the use of computing resources to be under the stewardship of the GACRC, whether you intend to purchase new hardware or software or use what is already available to you, the GACRC can assist you in the following ways (given adequate lead time):

  • Writing cost justifications and Management Plan(s) for IT resources
  • Preparing an IT budget
  • Soliciting budget proposals from prospective hardware, software and/or service providers
  • Conducting final negotiations with IT hardware, software and/or service providers before procurement

Brief GACRC Overview (Typical Grant Proposal Text)

The Georgia Advanced Computing Resource Center's (GACRC) equipment is located in UGA's Boyd Data Center (BDC). The GACRC has a full-time staff of six Systems Administrators and Scientific Computing Consultants, specializing in Linux/UNIX system administration, storage administration, and scientific computing consultation. Two Linux clusters are available, with a total of approximately 10,000 compute cores. In addition to conventional compute nodes, each cluster has several large-memory and GPU nodes. High-performance storage for the Linux clusters is provided for users' home directories and temporary scratch space. Slower storage resources are available for long-term project needs.

The computational and storage resources are available free of charge to UGA researchers and students. A Faculty Buy-In program is also in place, which provides prioritized access to the GACRC-administered computational resources.

The GACRC regularly hosts training sessions on a number of subjects relevant to the use of its computational and storage resources. Prospective UGA users are required to attend an introductory session before being granted access to any of the GACRC compute resources. 

The GACRC manages over 600 software packages, utilities, compilers and libraries across both clusters. Of these, over 450 are bioinformatics-related.

 

Additional Details (As Needed)

Major Equipment: The computational resources at the GACRC comprise two Linux clusters as well as a storage environment that serves both.

The newest Linux cluster managed by the GACRC provides the following resources:

  • 4x management nodes, in a redundant pair configuration, running a cluster management environment. Submission of jobs is managed by Adaptive Computing’s Moab HPC Suite queuing and scheduling environment (an illustrative job-submission sketch follows this cluster description).
  • 112x general-purpose (GP) compute nodes with the following configuration:
    • 4x AMD Opteron processors with 12 cores each, for a total of 48 cores
    • 128 GB of RAM
    • 256GB SSD drive for local scratch
    • QDR InfiniBand HCA & 2x 1GigE links
  • 4x high-memory compute nodes, same configuration as the GP nodes but with 256GB of RAM
  • 6x high-memory compute nodes, same configuration as the GP nodes but with 512GB of RAM
  • 1x high-memory compute node, same configuration as the GP nodes but with 1TB of RAM
  • 4x high-memory compute nodes with the following configuration:
    • 2x Intel Xeon E5-2680v4 processors with 14 cores each, for a total of 28 cores
    • 1TB of RAM
    • 256GB SSD drive for local scratch
    • QDR InfiniBand HCA & 2x 1GigE links
  • 2x multi-GPU compute nodes with the following configuration:
    • 2x Intel Xeon processors with 8 cores each, for a total of 16 cores
    • 128 GB of RAM
    • 256GB SSD drive for local scratch
    • QDR InfiniBand HCA & 2x 1GigE links
    • 8x NVIDIA Tesla K40 GPU cards
  • A QDR InfiniBand inter-node fabric arranged in a 2:1 fat-tree configuration; the switches are 2x Intel TrueScale 12800-120 and 14x Intel TrueScale 12300
  • 8x 48-port Brocade Gigabit Ethernet switches each with 6-port 10GigE uplinks for the management network as well as external access

Additionally, over 60 compute nodes in various configurations have been purchased through the Faculty Buy-In program.
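
For proposals that describe the researcher workflow, it may help to note that work on the cluster is carried out through batch jobs submitted to the scheduler. The short Python sketch below illustrates the general pattern of submitting a job to a Moab/TORQUE-style environment such as the one described above; the queue name, resource requests, application name, and the choice of submission command (msub versus qsub) are illustrative assumptions rather than GACRC-specific settings.

    #!/usr/bin/env python3
    """Illustrative sketch: generate and submit a batch job to a Moab/TORQUE-style
    scheduler. The queue name, resource requests, and application are placeholders,
    not actual GACRC settings."""

    import subprocess
    import tempfile

    # A minimal PBS-style job script; the #PBS directives describe the
    # resources the job requests from the scheduler.
    job_script = """#!/bin/bash
    #PBS -N example_job
    #PBS -q batch
    #PBS -l nodes=1:ppn=4
    #PBS -l walltime=02:00:00
    #PBS -l mem=8gb

    cd "$PBS_O_WORKDIR"
    ./my_analysis input.dat
    """

    # Write the script to a file and hand it to the scheduler.
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(job_script)
        script_path = f.name

    # 'msub' is Moab's submission command; 'qsub' is the TORQUE equivalent.
    result = subprocess.run(["msub", script_path], capture_output=True, text=True)
    print("Submitted job:", result.stdout.strip())

In day-to-day use most researchers write the job script directly and submit it from the command line; the wrapper above simply makes the two pieces (resource directives and submission) explicit.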

The previous-generation cluster (named zcluster), deployed by the GACRC over the period 2009-2012, comprises an assembly of components sourced from different manufacturers:

  • 230x compute nodes (2,600 compute cores), typically with 8 cores & 8GB of RAM each
  • 22x high-memory compute nodes: 6x with 32 cores & 64GB of RAM, 4x with 8 cores & 192GB, 10x with 12 cores & 256GB, and 2x with 32 cores & 512GB
  • 4x multi-GPU compute nodes, each with 8x NVIDIA Tesla K20x GPU cards
  • 1x NVIDIA Tesla S1070 unit containing 4x GPUs (4 x 240 = 960 GPU cores)
  • 1x NVIDIA Tesla (Fermi) C2075 GPU processor (448 GPU cores)
  • 9x NVIDIA Tesla (Fermi) M2070 GPU cards (9 x 448 = 4,032 GPU cores). These cards are installed in two hosts, each of which has dual 6-core Intel Xeon CPUs and 48GB of RAM; six GPU cards are in one host and three in the other.
  • All compute resources are interconnected using Brocade switches with 10GigE inter-rack links

Data storage for the GACRC-supported clusters is currently provided by:

  • 1x Panasas ActiveStor 12 storage cluster with 156TB usable capacity, running PanFS parallel file system, used for $HOME and scratch on zcluster. 
  • 1x Seagate (Xyratex) ClusterStor 1500 Lustre appliance with 480TB usable capacity, used for high-performance scratch on the new cluster.
  • 3x Penguin IceBreakers storage chains running ZFS mounted through NFS for a total of 99TB usable capacity, provisioned with 600GB 15krpm drives. Used for $HOME on the new cluster.
  • 2x Penguin IceBreakers storage chains running ZFS mounted through NFS for a total of 560TB usable capacity. This storage is used as an active project repository on both clusters.
  • A 720TB disk-based backup environment. The backup environment will serve the above-mentioned storage devices, except for the scratch space provisioned on both clusters.
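
As a rough illustration of how these storage tiers are typically used together, the Python sketch below stages input data from the slower project repository onto the high-performance scratch space, runs a job there, and copies the results back afterward. All paths, directory names, and the application name are hypothetical placeholders rather than actual GACRC mount points.

    #!/usr/bin/env python3
    """Illustrative sketch of a scratch-staging workflow. All paths below are
    hypothetical placeholders, not actual GACRC mount points."""

    import getpass
    import shutil
    import subprocess
    from pathlib import Path

    user = getpass.getuser()
    project_dir = Path("/project/mylab/experiment42")       # placeholder project path
    scratch_dir = Path("/scratch") / user / "experiment42"  # placeholder scratch path

    # Stage inputs onto the high-performance scratch file system.
    scratch_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(project_dir / "input.dat", scratch_dir / "input.dat")

    # Run the (hypothetical) analysis where I/O is fastest.
    subprocess.run(["./my_analysis", "input.dat"], cwd=scratch_dir, check=True)

    # Copy results back to the project repository; scratch space is temporary
    # and is not covered by the backup environment.
    shutil.copy2(scratch_dir / "results.out", project_dir / "results.out")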

Operation and Maintenance

The GACRC has a full-time technical staff of seven, specializing in Linux/UNIX system administration, storage administration, and scientific computing, in support of researchers using GACRC-managed resources. The GACRC serves over 250 principal investigators and over 1,200 total users.

 

The GACRC has access to the following expertise:

  • HPC cluster computing system administration, including cluster design, operating systems, job scheduling software, network design and administration, operating system security;
  • Storage administration, including user data management, hardware troubleshooting, performance optimization, optimal availability, data security and subsystem design/configuration;
  • System integration and administration using programming and scripting for data conversions, data analysis, and data migration;
  • Software selection, installation, maintenance and troubleshooting based on researchers' needs, drawing on both open-source solutions and commercial offerings;
  • Assistance in debugging HPC parallel computing programs, with consultation offered to researchers and their staff;
  • Consultation and training in the use of computational science tools and reference databases.