Proposal Writing Information

Georgia Advanced Computing Resource Center: Useful Information for Proposal Writing

The text below is intended to provide most of the information your proposal will need if it references the GACRC for support of HPC resources, or for consulting on the use of HPC resources in your proposed research. We do not envision that every proposal will need all of this information. Please contact GACRC staff (gacrc@uga.edu) for assistance in crafting language for your specific proposal, if required.

If your grant proposal includes the use of computing resources to be under the stewardship of the GACRC, whether you intend to purchase new hardware or software or use what is already available to you, the GACRC can assist you in the following ways (given adequate lead time):

  • Writing cost justifications and Management Plan(s) for IT resources
  • Preparing an IT budget
  • Soliciting budget proposals from prospective hardware, software and/or service providers
  • Conducting final negotiations with IT hardware, software and/or service providers before procurement

Brief GACRC Overview (Typical Grant Proposal Text)

The Georgia Advanced Computing Resource Center's (GACRC) equipment is located in UGA's Boyd Data Center (BDC). The GACRC has a full-time staff of systems administrators and scientific computing consultants, specializing in Linux/UNIX system administration, storage administration, and scientific computing consultation. One Linux cluster is available, with approximately 38,600 compute cores in total. In addition to conventional compute nodes, the cluster has several large-memory and GPU-equipped nodes. High-performance storage for the cluster is provided for users' home directories and temporary scratch space. Slower storage resources are available for long-term project needs. The home directories, as well as the long-term project storage, are backed up to separate storage devices.

The computational and storage resources are available free of charge to UGA researchers and students. A Faculty Buy-In program is also in place, which provides prioritized access to GACRC-administered computational resources.

The GACRC regularly hosts training sessions on a number of subjects relevant to the use of its computational and storage resources. Prospective UGA users are required to attend an introductory session before being granted access to any of the GACRC compute resources. 

The GACRC manages over 1,200 software packages, utilities, compilers and libraries.

Additional Details (As Needed)

Major Equipment: The computational resources at the GACRC encompass a Linux cluster as well as a multi-tiered storage environment that serves it.

The GACRC's current research-dedicated Linux cluster, called Sapelo2, comprises the following resources, purchased over a number of years since 2018. Included in the table below are over 120 compute nodes in various configurations, purchased through the GACRC's Faculty Buy-In program.

As of September 2023, the overall core count of the cluster is 38,640, with a total memory footprint of 176.2 TB.

The cluster's inter-node fabric is EDR InfiniBand, built on Mellanox SB7800/SB7890 EDR switches arranged in a 2:1 fat-tree configuration.
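
For readers unfamiliar with the term, a 2:1 fat tree means each leaf switch dedicates twice as many ports to compute nodes as to uplinks toward the spine, so worst-case cross-cluster bandwidth is half of node-local bandwidth. The Python sketch below illustrates the arithmetic only; the 36-port figure matches the SB7800/SB7890 switch family, but the port split shown is a hypothetical illustration, not a statement of Sapelo2's actual wiring.

    def leaf_oversubscription(ports_per_leaf: int, uplink_ports: int) -> float:
        """Ratio of node-facing bandwidth to uplink bandwidth on one leaf switch.

        All EDR ports run at the same rate (100 Gb/s), so the ratio of port
        counts equals the bandwidth oversubscription ratio.
        """
        node_ports = ports_per_leaf - uplink_ports
        return node_ports / uplink_ports

    # Hypothetical split for a 36-port EDR leaf switch: 24 node ports, 12 uplinks.
    print(leaf_oversubscription(36, 12))  # prints 2.0, i.e. a 2:1 fat tree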

The following table presents a detailed overview of the compute node configurations on Sapelo2.

processor              nodes  cores/node  memory/node (GB)  GPU type  GPUs/node
Intel Xeon E5-2680 v4      1          28             1,024         -          -
Intel Xeon Gold 5118       1          24                96         -          -
Intel Xeon Gold 5120       4          28               192      V100          1
Intel Xeon Gold 5320       1          52               256         -          -
Intel Xeon Gold 6126       1          24               256         -          -
Intel Xeon Gold 6130      12          32                96         -          -
Intel Xeon Gold 6130      42          32               192         -          -
Intel Xeon Gold 6130       4          32               192      P100          1
Intel Xeon Gold 6130       2          32               384      V100          1
Intel Xeon Gold 6154       1          36               256         -          -
Intel Xeon Gold 6242R      1          40               192         -          -
Intel Xeon Gold 6254       1          36               256         -          -
Intel Xeon Gold 6434       1          16               512         -          -
AMD EPYC 7551P            54          32               128         -          -
AMD EPYC 7551              2          64               128         -          -
AMD EPYC 7551              1          64               128      V100          1
AMD EPYC 7551              2          64               128      V100          2
AMD EPYC 7551P             8          32               256         -          -
AMD EPYC 7551P            18          32               512         -          -
AMD EPYC 7551              2          64             1,024         -          -
AMD EPYC 7502P             7          32               128         -          -
AMD EPYC 7702              4         128               512         -          -
AMD EPYC 7702P           133          64               128         -          -
AMD EPYC 7702P             4          64               128     V100S          1
AMD EPYC 7702P             4          64               256         -          -
AMD EPYC 7702P             2          64               512         -          -
AMD EPYC 7543              9          64             1,024      A100          4
AMD EPYC 7F32              2          16               128         -          -
AMD EPYC 7713P            23          64               128         -          -
AMD EPYC 7713P             6          64               256         -          -
AMD EPYC 7713            121         128               512         -          -
AMD EPYC 7713             25         128             1,024         -          -
AMD EPYC 7753              1          64               256         -          -
AMD EPYC 7763P             2          64               128         -          -
AMD EPYC 7F52              2          32             2,048         -          -
AMD EPYC 9534             16         128               768         -          -
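
As a quick cross-check, the per-row figures in the table above multiply out to the quoted cluster total. Below is a minimal Python tally; the (nodes, cores-per-node) pairs are transcribed row by row from the table, and nothing else is assumed.

    # (nodes, cores per node) for each row of the Sapelo2 table above.
    rows = [
        (1, 28), (1, 24), (4, 28), (1, 52), (1, 24), (12, 32), (42, 32),
        (4, 32), (2, 32), (1, 36), (1, 40), (1, 36), (1, 16),      # Intel rows
        (54, 32), (2, 64), (1, 64), (2, 64), (8, 32), (18, 32), (2, 64),
        (7, 32), (4, 128), (133, 64), (4, 64), (4, 64), (2, 64), (9, 64),
        (2, 16), (23, 64), (6, 64), (121, 128), (25, 128), (1, 64),
        (2, 64), (2, 32), (16, 128),                                # AMD rows
    ]
    print(sum(n * c for n, c in rows))  # prints 38640, the September 2023 total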

Data storage available at the GACRC includes the following:

  • Hybrid Lustre appliance with 2.40 PB of usable flash capacity and 9.60 PB of usable HDD capacity, used for high-performance scratch space and data-intensive workloads.
  • Panasas AS100H with 1 PB of usable capacity, used as an active project repository.
  • ZFS-based storage chain (server + JBOD) with 300 TB of usable capacity, used for home directories ($HOME) on the cluster.
  • ZFS-based storage chains (server + JBOD) with a total of 8 PB of usable capacity, used as an active project repository and as a disk-based backup environment.
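
Taken together, the tiers above amount to roughly 21.3 PB of usable capacity. A minimal Python tally follows; the per-tier figures are transcribed from the list above, and the total is derived arithmetic rather than an official GACRC figure.

    # Usable capacity per storage tier, in PB, from the list above.
    tiers = {
        "Lustre flash (scratch)": 2.40,
        "Lustre HDD (scratch)": 9.60,
        "Panasas AS100H (project)": 1.00,
        "ZFS home directories": 0.30,   # 300 TB
        "ZFS project + backup": 8.00,
    }
    print(round(sum(tiers.values()), 1))  # prints 21.3 (PB usable in total)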

Operation and Maintenance

The GACRC has a full-time staff of nine, specializing in Linux/UNIX system administration, storage administration, and scientific computing, in support of researchers using GACRC-managed resources. The GACRC serves over 450 principal investigators and over 1,400 total users.

The GACRC has access to the following expertise:

  • HPC cluster computing system administration, including cluster design, operating systems, job scheduling software, network design and administration, operating system security;
  • Storage administration, including user data management, hardware troubleshooting, performance optimization, optimal availability, data security and subsystem design/configuration;
  • System integration and administration using programming and scripting for data conversions, data analysis, and data migration;
  • Software selection, installation, maintenance and troubleshooting based on researchers' needs, drawing on open-source solutions and commercial offerings;
  • Assistance in debugging HPC parallel computing programs, with consultation offered to researchers and their staff;
  • Consultation and training in the use of computational science tools and reference databases.