
Department of Computer Science

Technical Services and Support

Computing Facilities and Services: Summary

A variety of facilities are described throughout these pages. This page is a summary, with pointers to the pages describing specifics. See the New Users section for introductory documentation.

Here is a summary of our facilities:

Linux Systems

For specific hostnames and locations of the systems listed below, see Configurations of all generally-available Linux systems run by LCSR. Many of these systems use Ubuntu 20.04 as of August 2021.

  • Generally available (within CS) systems: available to grad students, researchers, undergraduate majors and undergraduates taking CS courses. For historical reasons this collection of systems is referred to as the “ilab.”
    • 73 desktop systems in Hill 248, 252, and 254, as well as in graduate student offices in Hill and CORE. Each has 16 or 32 GB of memory, and most have CUDA-compatible GPUs. Details and availability of ilab desktops. (The systems in graduate offices used to be a separate cluster. As of summer 2019 they were merged into the ilab and are accessible by all ilab users.)
    • A virtual address pointing to a set of 4 multi-user systems, each with 1 TB of memory, 80 cores, and 8 Nvidia A4000 GPUs.
    • A virtual address pointing to a set of 4 multi-user systems, each with 256 GB of memory, 32 cores, and 2 Nvidia 2080 Ti GPUs. These systems are upgraded to the new Ubuntu LTS during the January break.
    • Data science tools: Spark and map-reduce are available on all of our systems. Jupyter is supported as a notebook interface.
    • Researchers should avoid heavy CPU or GPU usage during times of peak instructional use on the main ilab servers (host names starting with ilab) and on the systems in student public areas.
    • Limits: There are limits on memory usage on these systems. If you’re running long jobs, you need to register them so we know you’re doing so intentionally. See Limits Enforced on CS Machines.
  • Faculty office desktops: 8 systems in offices, with 16 or 32 GB of memory.
  • Faculty servers: 2 VMs, with 16 GB of memory. These are intended primarily so that faculty can do grading and other tasks on systems to which only faculty have access. Connect to
  • General Research system, available to faculty and other researchers. It has 1 TB of memory and 80 cores, with local SSD storage. Note that anyone can log into aurora; however, we ask that you use it only for research projects that require large jobs, whether in amount of memory or number of cores. We need to keep down the number of people using it in order to allow a few large jobs. ilab and ilabU also have large memory and many cores, and are sufficient for most jobs.
  • Private research systems, available to faculty and their own groups. These are additional systems run by researchers and not available to the general community. Most of them run Ubuntu Linux, and many have GPUs.
  • Web hosting: You can put HTML and other files intended for web access in your public_html directory, located in /common/web/$USER. See Publishing web pages for details. We also maintain a WordPress system for project web pages; see the VM section below.
  • General approach: Home directories are enough for many users. They are on a file server that uses SSD. For those who need more space:
    • /common/users can be used. It has larger quotas.
    • Faculty can also request special project directories. These can have quotas of tens of terabytes if necessary.
    • However, /common/users and those project directories are on spinning disk, although the server has 50 disks in mirrored pairs, with metadata on SSD, so it has good performance.
  • If you have a project that needs SSD performance and won’t fit in your home directory, there are several options. They all assume that you’ll keep only the files you’re currently working on on SSD; for long-term storage they’ll need to go to /common/users or a project directory. These directories are not backed up, although we’re willing to back up local directories on faculty-owned systems if necessary.
    • All of our systems have local SSD. Generally the directories are /data/local, but a few have other names. In all cases they are set up so that any user can create a directory. However, these file systems are cleaned out roughly once a semester.
    • If you need fast storage that you can access from more than one system, /filer/tmp1 is available. Like local SSD, this is intended for storage of your working data, not permanent storage. It is cleaned out once a semester.
  • Home Directories: User home directories and other shared storage are on two Linux NFS servers. Home directories are in /common/home. They are on SSD storage, with quotas of 50 GB, or 200 GB for faculty and PhD students.
  • /common/users: For those who need more storage, all generally available systems mount /common/users. User quotas are 100 GB on this system, or 1 TB for faculty and PhD students. This is intended to be used where large capacity is needed, but not as high performance.
  • /common/web: This is used for web pages. See Publishing web pages.
  • /research/archive: For long-term storage of data that funding agencies require to be kept. Please contact if you need this service.
  • We can normally accommodate other needs with special file systems. Please contact
  • Shared Directories: For projects and teams that want to share storage, /common/users/shared is set so that any user can create a directory and set it to be shared by a group. See Making A Directory You Can Share. Note that files in these directories still count against your quota on /common/users.
  • Backups: Home directories and other storage are snapshotted, and backed up both in a separate building (CBIM or Hill) and monthly at a commercial offsite storage facility.
  • Local storage: Most systems have some local storage, often mounted as /local. This storage is NOT BACKED UP. It is intended for jobs that need large work files. Source files and results should be stored on your home directory or /common/users.
  • Access from systems not run by LCSR: Home directories, /common/users, and other file systems can be mounted on research systems run by faculty, as long as those systems use Kerberos authentication. See Integrating Your Systems With LCSR Kerberos.
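As a concrete sketch of the shared-directory setup described above, the commands below demonstrate the idea on a temporary directory, since the real location is site-specific. On the ilab systems the parent directory would be /common/users/shared and the group would be your project’s Unix group; "myproject" is a made-up name.

```shell
# Sketch of setting up a shared project directory. For demonstration we use a
# temp directory and the current user's primary group; on the real systems the
# parent would be /common/users/shared and GROUP your project's Unix group.
BASE=$(mktemp -d)                    # stand-in for /common/users/shared
GROUP=$(id -gn)                      # stand-in for your project group

mkdir "$BASE/myproject"              # any user may create a directory here
chgrp "$GROUP" "$BASE/myproject"     # hand the directory to the group
chmod 2770 "$BASE/myproject"         # group rwx only; setgid keeps new files in the group
stat -c '%a' "$BASE/myproject"       # prints 2770
```

The setgid bit (the leading 2 in 2770) makes files created inside the directory inherit its group, which is what lets the whole team read and write each other’s files.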
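The local-scratch workflow described above (stage working files on local SSD, compute there, copy results back to backed-up storage) might look like the following. To keep the sketch self-contained it uses temporary directories and a trivial tr command as the "job"; on a real ilab machine SCRATCH would be a directory you create under /data/local, and DEST somewhere under /common/users/$USER.

```shell
# Hedged sketch of the scratch workflow. Paths are temp directories here;
# substitute /data/local/$USER/... and /common/users/$USER/... on real systems.
SCRATCH=$(mktemp -d)                 # stand-in for /data/local/$USER/myjob
DEST=$(mktemp -d)                    # stand-in for /common/users/$USER/myjob

echo "raw data" > "$SCRATCH/input.dat"                      # stage the input
tr a-z A-Z < "$SCRATCH/input.dat" > "$SCRATCH/output.dat"   # stand-in for the real job
cp "$SCRATCH/output.dat" "$DEST/"    # copy results back to durable storage
rm -rf "$SCRATCH"                    # local scratch is disposable and not backed up
cat "$DEST/output.dat"               # prints RAW DATA
```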
Virtual Machines

LCSR runs a large number of virtual machines, both for its own internal use, and for a variety of instructional and research needs. VMs can use CentOS, Ubuntu, or Windows. Typically users who request VMs act as their own system administrators, but LCSR is willing to do updates for system software.

  • There are two 1 TB servers hosting VMs for instructional and student needs. These are commonly used in courses that have requirements not met by shared instructional systems. Requests should be sent to Most commonly LCSR staff will work with the instructor to configure appropriate software, and then will duplicate a master copy for each student or student team. There is a web interface that allows students to start, stop, and restore their VM to its initial configuration. Grad students may request personal VMs if needed for their own projects.
  • There are about 5 servers running VMware. These are used for a variety of services, such as web servers and administrative applications. LCSR infrastructure such as the Kerberos servers also run on these VMs. Faculty may request VMs for their use. They’re commonly used for group web servers, and for applications in support of research projects.
  • There is a WordPress system set up to host web sites for research projects. It has a web interface that will allow faculty to create and administer their own sites, using templates that default to Rutgers standards, but can be customized. For more information, see Computer Science Web Hosting. (You can also make web pages available by putting them in public_html on our shared computer systems.)

Hackerspace

Hackerspace has a collection of special-purpose devices intended for courses and student projects. These include:

  • Makerbot 3D printer
  • Systems on a disconnected network, for students to try security attacks
  • Arduinos and related hardware
  • Parts for building electronic equipment
  • Small robotics equipment, such as Lego Mindstorm and iRobot Create
  • ARDrone
  • VR equipment

We have a budget to buy special-purpose devices as needed for courses, but we also try to keep on hand one or two of each device that we think will be useful for future courses and projects.


Network

A large data network interconnects all of LCSR’s facilities. The wireline network contains 64 switches, 166 10 Gbps ports, 1316 1 Gbps ports, and 528 100 Mbps ports, on 222 VLANs. The core is a mix of 100 and 40 Gbps. This network is used by all systems in computer science, even if LCSR doesn’t run the systems.

LCSR, in cooperation with the University, supports a wireless network covering all areas of the department.

Outside connectivity is provided via the University’s access to the Internet, Internet2, and various special-purpose networks. LCSR also maintains an extensive security infrastructure over the network, including firewalls and custom intrusion detection software, and provides post-mortem analysis of compromised machines.

Directory Services and Authentication

LCSR maintains a set of 3 servers running Red Hat’s IPA, a combination of LDAP and Kerberos. All systems maintained by LCSR use this for authentication and user information. We encourage faculty to use these services for systems that they run. By using LCSR Kerberos and directory services, systems will be able to access shared file systems. It also provides an easy way to maintain the list of authorized users for a set of systems. See Integrating Your System with LCSR Kerberos.
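For illustration only, the client side of such an integration usually amounts to a few lines of Kerberos configuration on the faculty-run machine. The realm and server names below are placeholders, not the actual LCSR values; get the real settings from the Integrating Your System with LCSR Kerberos page.

```ini
# Hypothetical excerpt from /etc/krb5.conf -- realm and host names are placeholders
[libdefaults]
    default_realm = CS.EXAMPLE.EDU

[realms]
    CS.EXAMPLE.EDU = {
        kdc = kerberos1.cs.example.edu
        admin_server = kerberos1.cs.example.edu
    }
```

With this in place, users authenticate with kinit and the system can mount the Kerberized NFS file systems described above.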

In addition to these systems, LCSR maintains a variety of data about faculty, staff, and students in a set of Oracle databases used for administrative applications. Please contact if you need this data for an application.

Facilities Outside Computer Science

OARC is a University group that provides high-performance computing. Computer science in general doesn’t have a conventional HPC cluster; we concentrate on GPUs and more specialized hardware. For large-scale HPC and data science, OARC is the best source. They have a large cluster, Amarel, which is intended as a “condo” cluster: grants buy nodes and are guaranteed at least as much capacity as they purchased, with the cost matched by the University. However, some capacity is available to those who haven’t bought into the system, particularly for course work and student use.

For more information see the OARC web site.

LCSR Services

In addition to running computing systems, LCSR provides support for faculty and students in computer science. Some commonly used services are:

  • Planning. This includes helping to identify the best resources for both instructional and research use, and help in configuring systems for purchase. LCSR encourages faculty to talk with us about courses or projects that will have special requirements.
  • Hardware installation, network configuration, and support for hardware purchased by faculty and not administered by LCSR.
  • Support. LCSR provides help for users of the facilities it runs, but it also assists in setting up and solving problems on systems run by researchers.
  • Programming. LCSR can provide staff to do programming for research projects. We have both full-time and student programmers available.

For help in using systems, please contact For planning and programming support, contact the LCSR director,