The CESG has several computing resources available to students, faculty, and other authorized users for research, projects, and coursework. The primary workhorse is the CESG cluster.
Support
For any and all support issues with the CESG cluster (including adding new packages needed for research), please send an email explaining the problem to linux-engr-helpdesk@tamu.edu. This will be converted into a trackable ticket for resolution.
Access to the Cluster
The CESG cluster’s logins use NetID and password (not the username/password used on the old dropzone cluster). If you need a login on the CESG cluster, please email linux-engr-helpdesk@tamu.edu. Be sure to include the name of the professor in the CESG group you are working with, and cc that professor on the email to streamline the process. Let the professor know that they are expected to send a confirmation email.
The domain names for the new machines are:
ecesvj10101.ece.tamu.edu
ecesvj10102.ece.tamu.edu
ecesvj10103.ece.tamu.edu
ecesvj10104.ece.tamu.edu
ecesvj10105.ece.tamu.edu
ecesvj10106.ece.tamu.edu
ecesvj10107.ece.tamu.edu
ecesvj10108.ece.tamu.edu
You can find the utilization statistics here. If you are off campus, you need to have the TAMU VPN active to view the statistics page and/or connect to the cluster.
(Note: More machines will be coming online in the next few months.)
Scheduled Maintenance
The cluster will be shut down completely three times a year for scheduled maintenance. All running jobs will be killed during these shutdowns, so please plan your long-running jobs accordingly.
Cluster Hardware
Here are some stats on the new cluster:
4 High Performance Compute nodes (nodes: ecesvj10101.ece.tamu.edu to ecesvj10104.ece.tamu.edu), each node contains:
Processors: 2X – Intel Xeon E5-2697A V4 @ 2.6GHz
Cores/node: 32, Threads/node: 64
Memory per node: 512GB
OS: CentOS Linux release 7.3.1611 (Core)
4 Mid-Performance Compute nodes (nodes: ecesvj10105.ece.tamu.edu to ecesvj10108.ece.tamu.edu), each of these nodes contains:
Processors: 4X – AMD Opteron(tm) Processor 6176 @ 2.3GHz
Cores/node: 48, Threads/node: 48
Memory per node: 256GB
OS: CentOS Linux release 7.3.1611 (Core)
Shared among all nodes:
Shared cluster disk: 159TB
(Be sure to check Ganglia before starting new jobs to pick the node with the least load.)
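As an alternative to the Ganglia page, you can also check the load from the command line. The following is just a sketch: it assumes you already have SSH access to each node, and the hostnames are the ones listed above.

```shell
#!/bin/bash
# Print the current load average of every cluster node via SSH.
# Requires a working login on each node (and the TAMU VPN if off campus).
for n in ecesvj101{01..08}; do
    printf '%s: ' "$n"
    ssh "$n.ece.tamu.edu" uptime
done
```

Pick the node whose load averages are lowest relative to its core count before starting a new job.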
Interacting with the Cluster
These machines are “headless”, i.e. they are not meant for interactive or GUI use but rather for batch jobs that are started remotely. To this end, it is recommended that you use “ssh”, potentially in combination with “screen” or similar tools, to launch your jobs. Note that long-running full desktop environment GUIs will be killed automatically, likely losing your work, since they are resource hogs. You can, however, use ssh to forward an X window or two, to display output data for instance, as needed.
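As a concrete sketch of that workflow (the NetID, node choice, and job script here are placeholders; substitute your own):

```shell
#!/bin/bash
# Log in to one of the compute nodes (replace "netid" with your NetID;
# off campus, connect to the TAMU VPN first).
ssh netid@ecesvj10101.ece.tamu.edu

# On the node, start a named screen session so the job survives a
# dropped SSH connection:
screen -S myjob

# Inside the session, launch the job (placeholder command), capturing
# both stdout and stderr to a log file:
./run_job.sh > job.log 2>&1

# Detach with Ctrl-a d, log out, and reattach later with:
screen -r myjob
```

Tools such as tmux or nohup work equally well; the point is that the job must not depend on your SSH session staying alive.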
Here are some pages that may help you get started:
How To Use SSH to Connect to a Remote Server
Using GNU Screen to Manage Persistent Terminal Sessions
Windows SSH client – PuTTY, a popular choice
X Window server for Windows – Xming; there is also some good info here on forwarding your cluster X display to your personal machine
(also, google is your friend…)
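For the X-forwarding case mentioned above, a minimal sketch (the NetID and node are placeholders, and an X server such as Xming must already be running on your local machine):

```shell
#!/bin/bash
# Forward individual X windows over SSH instead of running a full
# desktop environment on the node (full desktops are killed
# automatically). -X enables X11 forwarding; add -C for compression
# on slow links.
ssh -X netid@ecesvj10102.ece.tamu.edu

# Once logged in, any X client you start (a plotting tool, an EDA
# waveform viewer, etc.) will display on your local machine.
```

Keep the forwarded windows to a minimum; the nodes are intended for batch work, not interactive desktops.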
The CESG Cluster Administrators