Technology Resources

The CESG has several computing resources available for students, faculty, and authorized users to use for research, projects, and coursework. The primary workhorse is the CESG cluster.


For any and all support issues with the CESG cluster (including requests to add new packages needed for research), please send an email explaining the problem to the helpdesk address. It will be converted into a trackable ticket for resolution.

Access to the cluster

The CESG cluster’s logins use NetID and password (_not_ the username/password used on the old dropzone cluster).  If you need a login on the CESG cluster, please email the helpdesk, and be sure to include the name of the professor in the CESG group you are working with.  You should also cc your professor on the email to the helpdesk to streamline the process, and let the professor know that they are expected to send a confirmation email.

The domain names for the new machines are:


You can find the utilization statistics here. If you are off campus, you need to have the TAMU VPN active to view the statistics page and/or connect to the cluster.

(note: more machines will be coming online in the next few months)

Scheduled Maintenance

The cluster will be shut down completely three times a year for scheduled maintenance.  Note that all running jobs will be killed during these shutdowns, so please plan your long-running jobs accordingly.  The shutdowns will occur on the following dates over the next 12 months:

May 25th, 2018
Aug 24th, 2018
Jan 11th, 2019

(and repeating yearly on approximately those dates)

Cluster Hardware

Here are the specifications of the new cluster:

4 High-Performance Compute nodes (nodes: to ); each node contains:

Processors: 2× Intel Xeon E5-2697A v4 @ 2.6 GHz
Cores/node: 32, Threads/node: 64
Memory per node: 512 GB
OS: CentOS Linux release 7.3.1611 (Core)

4 Mid-Performance Compute nodes (nodes: to ); each node contains:

Processors: 4× AMD Opteron 6176 @ 2.3 GHz
Cores/node: 48, Threads/node: 48
Memory per node: 256 GB
OS: CentOS Linux release 7.3.1611 (Core)

Shared among all nodes:

Shared cluster disk: 159 TB

(Be sure to check Ganglia before starting a new job, so you can pick the node with the least load.)
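If you prefer the command line, you can also poll each node's 1-minute load average over ssh. The sketch below is illustrative only: the hostnames are placeholders, not the actual CESG node names, and querying real nodes requires a cluster login.

```shell
#!/bin/sh
# Hypothetical hostnames -- substitute the actual CESG node names.
NODES="node1.cesg.tamu.edu node2.cesg.tamu.edu"

# Extract the 1-minute load average from an `uptime` line.
load1() {
  printf '%s\n' "$1" | awk -F'load average: ' '{print $2}' | cut -d, -f1
}

# Gated behind a flag, since contacting real nodes needs a cluster login.
if [ "${QUERY_NODES:-0}" = "1" ]; then
  for n in $NODES; do
    echo "$n: $(load1 "$(ssh "$n" uptime)")"
  done
fi
```

Picking the node whose 1-minute load is lowest gives roughly the same answer as eyeballing the Ganglia graphs.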

Interacting with the cluster

These machines are “headless”, i.e., they are not meant for interactive or GUI use, but rather for batch jobs that are started remotely.  To this end, it is recommended that you use “ssh”, potentially in combination with “screen” or similar tools, to launch your batch jobs.  Note that long-running full-desktop GUI environments will be killed automatically, likely losing your work, since they are resource hogs.  You can, however, use ssh to forward an X window or two, to display output data for instance, as needed.
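As a concrete sketch of that workflow, a long job can be detached from your ssh session with nohup, or run inside a named screen session you can reattach to later. The hostname and job name below are placeholders, not real CESG values.

```shell
#!/bin/sh
# Build the command to run on a node: start a job under nohup, logging to a
# file, so it keeps running after your ssh session ends.
batch_cmd() {
  printf 'nohup %s > %s 2>&1 &\n' "$1" "$2"
}

# Typical usage (hostname and job are hypothetical):
#   ssh netid@node1.cesg.tamu.edu "$(batch_cmd './simulate --long' run.log)"
#
# Or use a named screen session instead:
#   ssh netid@node1.cesg.tamu.edu
#   screen -S myjob      # start a named session
#   ./simulate --long    # run the job inside it
#   # press Ctrl-a d to detach; the job keeps running
#   screen -r myjob      # reattach later to check on it
```

Either way, the job survives your laptop going to sleep or your ssh connection dropping, which is the point of the headless setup.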

Here are some pages that may help you get started:

How To Use SSH to Connect to a Remote Server

Using GNU Screen to Manage Persistent Terminal Sessions

PuTTY – a popular Windows SSH client

X Window server for Windows – Xming; there is also some good info here on forwarding your cluster X Window display to your personal machine

(also, google is your friend…)
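Putting the X-forwarding pieces together: a single GUI window from the cluster can be drawn on your local screen as sketched below. The hostname is a placeholder, and you need an X server running locally (native on Linux, XQuartz on macOS, Xming on Windows).

```shell
#!/bin/sh
# Hypothetical node name -- substitute a real CESG hostname.
NODE="${CESG_NODE:-node1.cesg.tamu.edu}"

# -X enables X11 forwarding, so GUI programs started on the node draw on
# your local display. Gated behind a flag, since it needs a real login.
if [ "${RUN_SSH:-0}" = "1" ]; then
  ssh -X "netid@$NODE"
  # then, on the node, e.g.:  xterm &
fi
echo "ssh -X netid@$NODE"
```

This is suitable for displaying a plot or two; it is not a substitute for a full remote desktop, which (as noted above) will be killed.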

The CESG Cluster Administrators