

Georgia Advanced Computing Resource Center: Useful Information for Proposal Writing

The text below is intended to provide most of the information your proposal will need if it references the GACRC for support of HPC resources, or for consulting in the use of HPC resources, in your proposed research. We do not envision that every proposal will need all of this information. Please contact GACRC staff (gacrc@uga.edu) for assistance in crafting language for your specific proposal.

If your grant proposal includes the use of computing resources to be under the stewardship of the GACRC, whether you intend to purchase new hardware or software or use what is already available to you, the GACRC can assist you in the following ways (given adequate lead time):

  • Writing cost justifications and Management Plan(s) for IT resources
  • Preparing an IT budget
  • Soliciting budget proposals from prospective hardware, software and/or service providers
  • Conducting final negotiations with IT hardware, software and/or service providers before procurement

Brief GACRC Overview (Typical Grant Proposal Text)

The Georgia Advanced Computing Resource Center's (GACRC) equipment is located in UGA's Boyd Data Center (BDC). The GACRC has a full-time staff of systems administrators and scientific computing consultants, specializing in Linux/UNIX system administration, storage administration, and scientific computing consultation. One Linux cluster is available, with a total core count of approximately 30,700 compute cores. In addition to conventional compute nodes, the cluster has several large-memory and GPU nodes. High-performance storage for the cluster is provided for users' home directories and temporary scratch space. Slower storage resources are available for long-term project needs. The home directories, as well as the long-term project storage, are backed up to separate storage devices.

The computational and storage resources are available free of charge to UGA researchers and students. A Faculty Buy-In program is also in place, which provides participating faculty with prioritized access to the GACRC-administered computational resources.

The GACRC regularly hosts training sessions on a number of subjects relevant to the use of its computational and storage resources. Prospective UGA users are required to attend an introductory session before being granted access to any of the GACRC compute resources. 

The GACRC manages over 900 software packages, utilities, compilers, and libraries. Of these, over 650 are bioinformatics-related.

Additional Details (As Needed)

Major Equipment: The computational resources at the GACRC encompass a Linux cluster as well as a multi-tiered storage environment that serves it.

The GACRC's current research-dedicated Linux cluster, Sapelo2, comprises the following resources, purchased over a number of years since 2014. Included are over 150 compute nodes in various configurations purchased through the GACRC's Faculty Buy-In program. As of May 2022, the cluster's overall core count is 30,700, with a total memory footprint of 122.6 TB.

The following tables present a detailed overview of the various compute node configurations on Sapelo2; a short sketch after the second table shows how the cluster-wide totals follow from these rows.

Two inter-node InfiniBand fabrics coexist within the Sapelo2 cluster. The older is a QDR InfiniBand fabric arranged in a 2:1 fat-tree configuration, built from two Intel TrueScale 12800-120 switches and fourteen Intel TrueScale 12300 switches. The newer fabric is based on Mellanox EDR SB7800/SB7890 switches, also arranged in a 2:1 fat-tree configuration.
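
For readers unfamiliar with the term, a 2:1 fat tree means each leaf switch presents twice as much node-facing bandwidth as it has uplink bandwidth toward the rest of the fabric. The short Python sketch below illustrates the ratio with hypothetical port counts; it is not a description of the actual TrueScale or Mellanox switch wiring.

```python
# A minimal sketch of what a 2:1 oversubscribed fat-tree fabric means.
# The port counts below are hypothetical and chosen only to illustrate
# the ratio; they do not describe the actual Sapelo2 switches.

downlinks_per_leaf = 24  # leaf-switch ports facing compute nodes
uplinks_per_leaf = 12    # leaf-switch ports facing the core switches

oversubscription = downlinks_per_leaf / uplinks_per_leaf
print(f"oversubscription ratio: {oversubscription:.0f}:1")

# A 2:1 ratio means that if every attached node transmits at full rate,
# the aggregate traffic entering a leaf switch can be twice the bandwidth
# available on its uplinks.
```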

 

QDR Section of Sapelo2

processor             | node count | cores | memory (GB) | GPU type | number of GPUs
AMD Opteron 6344      | 132        | 48    | 128         | -        | -
AMD Opteron 6344      | 20         | 48    | 256         | -        | -
AMD Opteron 6344      |            | 48    | 512         | -        | -
AMD Opteron 6344      |            | 48    | 1,024       | -        | -
Intel Xeon X7560      |            | 32    | 512         | -        | -
Intel Xeon E5-2650 v2 |            | 16    | 128         | K40      | 8 each
Intel Xeon E5-2637 v3 |            |       | 128         | -        | -
Intel Xeon E5-2660 v3 | 1          | 20    | 256         | -        | -
Intel Xeon E5-2680 v3 |            | 24    | 128         | K80      | 1 each
Intel Xeon E5-2680 v3 |            | 24    | 128         | -        | -
Intel Xeon E5-2680 v3 | 1          | 24    | 512         | -        | -
Intel Xeon E5-2640 v4 | 1          | 20    | 256         | -        | -
Intel Xeon E5-2680 v4 | 32         | 28    | 64          | -        | -
Intel Xeon E5-2680 v4 |            | 28    | 128         | -        | -
Intel Xeon E5-2680 v4 | 15         | 28    | 256         | -        | -
Intel Xeon E5-2680 v4 |            | 28    | 256         | P100     | 1 each
Intel Xeon E5-2680 v4 |            | 28    | 512         | -        | -
Intel Xeon E5-2680 v4 | 1          | 28    | 1,024       | -        | -
Intel Xeon Gold 5120  |            | 28    | 192         | V100     | 1 each
Intel Xeon Gold 5118  |            | 24    | 192         | -        | -
Intel Xeon Gold 6130  | 12         | 32    | 96          | -        | -

 

 

EDR Section of Sapelo2

 

processor             | node count | cores | memory (GB) | GPU type | number of GPUs
Intel Xeon E5-2680 v4 | 34         | 28    | 64          | -        | -
Intel Xeon E5-2680 v4 | 1          | 28    | 1,024       | -        | -
Intel Xeon Gold 6130  | 42         | 32    | 192         | -        | -
Intel Xeon Gold 6130  | 4          | 32    | 192         | P100     | 1 each
Intel Xeon Gold 6130  | 2          | 32    | 384         | V100     | 1 each
Intel Xeon Gold 5120  | 2          | 28    | 192         | V100     | 1 each
AMD EPYC 7551P        | 66         | 32    | 128         | -        | -
AMD EPYC 7551         | 2          | 64    | 128         | -        | -
AMD EPYC 7551         | 1          | 64    | 128         | V100     | 1 each
AMD EPYC 7551         | 2          | 64    | 128         | V100     | 2 each
AMD EPYC 7551P        | 8          | 32    | 256         | -        | -
AMD EPYC 7551P        | 16         | 32    | 512         | -        | -
AMD EPYC 7551         | 4          | 64    | 1,024       | -        | -
AMD EPYC 7502P        | 7          | 32    | 128         | -        | -
AMD EPYC 7702         | 4          | 128   | 512         | -        | -
AMD EPYC 7702P        | 133        | 64    | 128         | -        | -
AMD EPYC 7702P        | 4          | 64    | 128         | V100S    | 1 each
AMD EPYC 7702P        | 4          | 64    | 256         | -        | -
AMD EPYC 7702P        | 2          | 64    | 512         | -        | -
AMD EPYC 7543         | 1          | 64    | 1,024       | A100     | 4 each
AMD EPYC 7F32         | 2          | 16    | 128         | -        | -
AMD EPYC 7713P        | 14         | 64    | 128         | -        | -
AMD EPYC 7713P        | 4          | 64    | 256         | -        | -
AMD EPYC 7713         | 24         | 128   | 512         | -        | -
AMD EPYC 7763P        | 2          | 64    | 128         | -        | -
AMD EPYC 7F52         | 2          | 32    | 2,048       | -        | -
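
The cluster-wide figures quoted earlier (roughly 30,700 cores and 122.6 TB of memory as of May 2022) follow from summing node count × cores per node and node count × memory per node over every row of the two tables above. The Python sketch below shows that arithmetic on a small, hypothetical subset of rows; the configurations listed and the totals printed are illustrative only, not the full Sapelo2 inventory.

```python
# Illustrative only: totals cores and memory across node configurations
# given in the same (node count, cores per node, memory in GB per node)
# layout as the tables above. These rows are a hypothetical subset, not
# the complete Sapelo2 inventory.

node_configs = [
    # (nodes, cores_per_node, memory_gb_per_node)
    (132, 48, 128),   # e.g., an AMD Opteron 6344 configuration
    (34, 28, 64),     # e.g., an Intel Xeon E5-2680 v4 configuration
    (133, 64, 128),   # e.g., an AMD EPYC 7702P configuration
]

total_cores = sum(nodes * cores for nodes, cores, _ in node_configs)
total_memory_tb = sum(nodes * mem for nodes, _, mem in node_configs) / 1024

print(f"total cores:  {total_cores:,}")
print(f"total memory: {total_memory_tb:.1f} TB")
```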

 

 

Data storage available at the GACRC includes the following:

  • DDN SFA14KX Lustre appliance with 2.5 PB usable capacity, used for high-performance scratch space and data-intensive activities.
  • Panasas AS100H with 1 PB usable capacity, used as an active project repository.
  • ZFS-based storage chain (server + JBOD) with 300 TB usable capacity, used for $HOME directories on the cluster.
  • ZFS-based storage chains (server + JBOD) with a total of 3 PB usable capacity, used as an active project repository and as a disk-based backup environment.

Operation and Maintenance

The GACRC has a full-time staff of ten, specializing in Linux/UNIX system administration, storage administration, and scientific computing consultation, in support of researchers using GACRC-managed resources. The GACRC serves over 340 principal investigators and over 1,400 total users.

The GACRC has access to the following expertise:

  • HPC cluster computing system administration, including cluster design, operating systems, job scheduling software, network design and administration, and operating system security;
  • Storage administration, including user data management, hardware troubleshooting, performance optimization, optimal availability, data security and subsystem design/configuration;
  • System integration and administration using programming and scripting for data conversions, data analysis, and data migration;
  • Software selection, installation, maintenance, and troubleshooting based on researchers' needs, drawing on both open-source solutions and commercial offerings;
  • Debugging of HPC parallel computing programs, with consultation and assistance offered to researchers and their staff;
  • Consultation and training in the use of computational science tools and reference databases.