
Georgia Advanced Computing Resource Center: Useful Information for Proposal Writing

The text below is intended to provide most of the information your proposal will need if it references the GACRC for support of HPC resources or for consulting on the use of HPC resources in your proposed research. We do not envision that every proposal will need all of this information. Please contact GACRC staff (gacrc@uga.edu) for assistance in crafting language for your specific proposal, if required.

If your grant proposal includes the use of computing resources to be under the stewardship of the GACRC, whether you intend to purchase new hardware or software or use what is already available to you, the GACRC can assist you in the following ways (given adequate lead time):

  • Writing cost justifications and Management Plan(s) for IT resources
  • Preparing an IT budget
  • Soliciting budget proposals from prospective hardware, software and/or service providers
  • Conducting final negotiations with IT hardware, software and/or service providers before procurement

Brief GACRC Overview (Typical Grant Proposal Text)

The Georgia Advanced Computing Resource Center's (GACRC) equipment is located in UGA's Boyd Data Center (BDC). The GACRC has a full-time staff of nine Systems Administrators and Scientific Computing Consultants, specializing in Linux/UNIX system administration, storage administration, and scientific computing consultation. One Linux cluster is available, with approximately 10,000 compute cores. In addition to conventional compute nodes, the cluster has several large-memory and GPU nodes. High-performance storage for the Linux cluster is provided for users' home directories and temporary scratch space. Slower storage resources are available for long-term project needs. The home directories, as well as the long-term project storage, are backed up to separate storage devices.

The computational and storage resources are available free of charge to UGA researchers and students. A Faculty Buy-In program is also in place, providing prioritized access to GACRC-administered computational resources.

The GACRC regularly hosts training sessions on a number of subjects relevant to the use of its computational and storage resources. Prospective UGA users are required to attend an introductory session before being granted access to any of the GACRC compute resources. 

The GACRC manages over 600 software packages, utilities, compilers, and libraries. Of these, over 450 are bioinformatics-related.

Additional Details (As Needed)

Major Equipment: The computational resources at the GACRC comprise a Linux cluster and the storage environment that serves it.

The GACRC's latest Linux cluster provides the following resources:

  • 4x management nodes, in a redundant pair configuration, running a cluster management environment. Submission of jobs is managed by Adaptive Computing's Moab HPC Suite queuing and scheduling environment (an illustrative submission sketch appears after this list).
  • 112x general-purpose (GP) compute nodes with the following configuration:
    • 4x AMD Opteron processors with 12 cores each, for a total of 48 cores
    • 128 GB of RAM
    • 256GB SSD drive for local scratch
    • QDR InfiniBand HCA & 2x 1GigE links
  • 4x high-memory compute nodes, same configuration as the GP nodes but with 256GB of RAM
  • 6x high-memory compute nodes, same configuration as the GP nodes but with 512GB of RAM
  • 1x high-memory compute node, same configuration as the GP nodes but with 1TB of RAM
  • 4x high-memory compute nodes with the following configuration:
    • 2x Intel Xeon processors with 14 cores each, for a total of 28 cores
    • 1TB of RAM
    • 256GB SSD drive for local scratch
    • QDR InfiniBand HCA & 2x 1GigE links
  • 2x multi-GPU compute nodes with the following configuration:
    • 2x Intel Xeon processors with 8 cores each, for a total of 16 cores
    • 128 GB of RAM
    • 256GB SSD drive for local scratch
    • QDR InfiniBand HCA & 2x 1GigE links
    • 8x NVIDIA Tesla K40 GPU cards
  • 36x general-purpose (GP) compute nodes with the following configuration:
    • 2x Intel Xeon processors with 14 cores each, for a total of 28 cores
    • 64 GB of RAM
    • 256GB SSD drive for local scratch
    • QDR InfiniBand HCA & 2x 1GigE links
  • A QDR InfiniBand inter-node fabric arranged in a 2:1 fat-tree configuration; switches are 2x Intel TrueScale 12800-120 and 14x Intel TrueScale 12300
  • 8x 48-port Brocade Gigabit Ethernet switches, each with 6-port 10GigE uplinks, for the management network as well as external access

Additionally, over 100 compute nodes in various configurations have been purchased through the Faculty Buy-In program.
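
For illustration, the sketch below shows how a researcher might prepare and submit a batch job through a Moab/Torque-style queuing environment such as the one described above. This is a minimal sketch, not GACRC's documented procedure: the queue name, memory request, module name, and executable are hypothetical placeholders, and the actual submit command and directive syntax depend on the site configuration (consult GACRC documentation or staff).

    import subprocess
    from pathlib import Path

    # Minimal, illustrative batch script in PBS/Torque directive style.
    # The resource request mirrors one 48-core AMD node described above;
    # the queue name, module, and executable are hypothetical placeholders.
    job_script = """#!/bin/bash
    #PBS -N example_job
    #PBS -q batch
    #PBS -l nodes=1:ppn=48
    #PBS -l mem=120gb
    #PBS -l walltime=12:00:00

    cd $PBS_O_WORKDIR
    module load example_package   # hypothetical package from the software stack
    ./run_analysis                # researcher-provided executable
    """

    Path("example_job.sh").write_text(job_script)

    # Torque sites typically submit with "qsub"; Moab also provides "msub".
    subprocess.run(["qsub", "example_job.sh"], check=True)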

Data storage for the GACRC-supported cluster is currently provided by the following (an illustrative usage sketch appears after this list):

  • 1x Seagate (Xyratex) ClusterStor 1500 Lustre appliance with 480TB usable capacity, used for high-performance scratch on the cluster.
  • 3x Penguin IceBreaker storage chains running ZFS, mounted via NFS, for a total of 99TB usable capacity, provisioned with 600GB 15k RPM drives. Used for $HOME on the cluster.
  • 2x Penguin IceBreaker storage chains running ZFS, mounted via NFS, for a total of 560TB usable capacity. This storage is used as an active project repository on the cluster.
  • A 720TB disk-based backup environment. The backup environment serves the above-mentioned storage devices, except for the scratch space provisioned on the cluster.
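
As an illustration of how these storage tiers are typically used together, the sketch below stages input data from the long-term project repository onto the high-performance scratch file system, runs an analysis there, and copies results back for safekeeping. The /project and /scratch paths and the run_analysis command are hypothetical placeholders, not confirmed GACRC mount points or tools.

    import shutil
    import subprocess
    from pathlib import Path

    # Hypothetical mount points for the storage tiers described above.
    project = Path("/project/mylab/dataset")   # long-term project repository (backed up)
    scratch = Path("/scratch/myuser/run01")    # high-performance Lustre scratch (not backed up)

    # Stage input data onto scratch before computation.
    scratch.mkdir(parents=True, exist_ok=True)
    shutil.copytree(project / "inputs", scratch / "inputs", dirs_exist_ok=True)

    # Run the analysis against the scratch copy (placeholder command).
    subprocess.run(["./run_analysis", str(scratch / "inputs")], check=True)

    # Copy results back to project storage once the job completes.
    shutil.copytree(scratch / "results", project / "results", dirs_exist_ok=True)

Because the scratch space is temporary and excluded from backups, results should be copied back to project or home storage promptly after a run completes.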

Operation and Maintenance

The GACRC has a full-time technical staff of seven, specializing in Linux/UNIX system administration, storage administration, and scientific computing, in support of researchers using GACRC-managed resources. The GACRC serves over 250 principal investigators and over 1,200 total users.

 

The GACRC staff have expertise in the following areas:

  • HPC cluster computing system administration, including cluster design, operating systems, job scheduling software, network design and administration, operating system security;
  • Storage administration, including user data management, hardware troubleshooting, performance optimization, optimal availability, data security and subsystem design/configuration;
  • System integration and administration using programming and scripting for data conversions, data analysis, and data migration;
  • Software selection, installation, maintenance, and troubleshooting based on researchers' needs, drawing on both open-source solutions and commercial offerings;
  • Assistance in debugging HPC parallel computing programs, with consultation offered to researchers and their staff;
  • Consultation and training in the use of computational science tools and reference databases.