Department of Computer Science

Technical Services and Support

Computing Facilities and Services: Summary

A variety of facilities are described throughout these pages. This page is a summary, with pointers to the pages describing specifics. See the New Users section for documentation.

Here is a summary of our facilities:

Linux Systems

For specific hostnames and locations of the systems listed below, see Configurations of all generally-available Linux systems run by LCSR. Unless noted otherwise, our systems run the current Ubuntu LTS (Long Term Support) release.

  • Generally available (within CS) systems (iLab Cluster): available to grad students, researchers, undergraduate majors and undergraduates taking CS courses. For historical reasons this collection of systems is referred to as the “ilab.”
    • Over 50 desktop systems in Hill 248, 252, and 254, as well as in graduate student offices in Hill and CoRE, each with 16 or 32 GB of memory and a CUDA-compatible GPU. See Details and availability of ilab desktops.
    • Note for grad student offices: We find that most students prefer to use their own laptops in TA and RA offices, so by default we provide displays and other support for laptops. Desktop systems are available if you want them; let us know if you need a different setup.
    • A virtual address pointing to a set of 4 multi-user systems, each with 1 TB of memory, up to 96 vcores, and 8 Nvidia A4000 GPUs.
    • iLabU, a system with 256 GB of memory, 48 vcores, and 8 Nvidia 2080 Ti GPUs, is used to test new LTS versions of the Ubuntu operating system before they are deployed on our other systems.
    • Data science tools: Spark and MapReduce are available on all of our systems, and Jupyter notebooks are supported.
    • Researchers should avoid heavy CPU usage on the iLab servers (host names starting with ilab) and the systems in student public areas during times of peak instructional use.
    • Limits: There are limits on memory and GPU usage on the iLab cluster. If you’re running long or GPU-intensive jobs, please see Limits Enforced on CS Machines and Scheduler for GPU Jobs.
  • Faculty office desktops: systems in faculty offices, with up to 32 GB of memory; they are not part of the iLab cluster.
  • Faculty servers: two Ubuntu Linux VMs, not part of the iLab cluster. These are intended primarily so that faculty can do grading and other tasks on systems to which only faculty have access.
  • General research systems, also part of the iLab cluster, available to faculty, grad students, and other researchers. These systems are intended for jobs using GPUs and/or large memory; each has 512 GB of memory (except rlab1, which has 1.5 TB), 95 cores, and 8 Nvidia 1080 Ti GPUs. Jobs on these systems use the Slurm scheduler to allocate GPUs and memory.
  • Private research systems, available to faculty and their own groups. These are additional systems run by researchers; they are not available to the general community and are not part of the iLab cluster. Most run Ubuntu Linux, and many have GPUs.
  • Web hosting: You can put HTML and other files intended for web access in your public_html directory, located in /common/web/$USER; they will then be visible on the web. See Publishing web pages or CS Homepage Manager for more info. We also maintain a WordPress system for project web pages.
  • General approach: Home directories are enough for many users. They are on a file server that uses SSD. For those who need more space:
    • /common/users can be used. It has larger quotas.
    • Faculty can also request special project directories. These can have quotas of tens of terabytes if necessary.
    • However, /common/users and those project directories are on spinning disk. The server has 50 disks in mirrored pairs, with metadata on SSD, so it has good performance if you can use multiple processes for I/O.
  • If you have a project that needs SSD performance and won’t fit in your home directory, there are several options. All of them assume that you’ll keep only the files you’re currently working on on SSD; for long-term storage, files need to go to /common/users or a project directory. These directories are not backed up, although we’re willing to back up local directories on faculty-owned systems if necessary.
    • All of our systems have local SSD. Generally the directory is /data/local, but a few have other names. In all cases they are set up so that any user can create a directory. However, these file systems are cleaned out roughly once a semester.
    • If you need fast storage that you can access from more than one system, /filer/tmp1 is available. Like local SSD, this is intended for storage of your working data, not permanent storage. It is cleaned out once a semester.
  • Home Directories: User home directories and other shared storage are on two Linux NFS servers. Home directories are in /common/home. They are on SSD storage, with quotas of 50 GB, or 200 GB for faculty and PhD students.
  • /common/users: For those who need more storage, all generally available systems mount /common/users. User quotas are 100 GB on this system, or 1 TB for faculty and PhD students. It is intended for use where large capacity is needed but high performance is not.
  • /common/web: This is used for web pages. See Publishing web pages
  • /research/archive: For long-term storage of data that funding agencies require to be kept. Please let us know if you need this service.
  • We can normally accommodate other needs with special file systems.  
  • Shared Directories: For projects and teams that want to share storage, /common/users/shared is set up so that any user can create a directory and set it to be shared by a group. See Making A Directory You Can Share. Note that files in these directories still count against your quota on /common/users.
  • Backups: Home directories and other storage are snapshotted, and backed up both in a separate building (CBIM or Hill) and monthly at a commercial offsite storage facility.
  • Local storage: Most systems have some local storage, often mounted as /local. This storage is NOT BACKED UP. It is intended for jobs that need large work files. Source files and results should be stored in your home directory or /common/users.
  • Access from systems not run by LCSR: Home directories, /common/users, and other file systems can be mounted on research systems run by faculty, as long as those systems use Kerberos authentication. See Integrating Your Systems With LCSR Kerberos.
  • See Storage Technology Options for more details.
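Jobs on the general research systems go through the Slurm scheduler, as noted above. A minimal batch-script sketch follows; the job name, resource values, and script name are illustrative assumptions, not site defaults — check Scheduler for GPU Jobs for the options our cluster actually requires.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a single-GPU job (values are examples only).
#SBATCH --job-name=train-model
#SBATCH --gres=gpu:1          # request one GPU
#SBATCH --mem=64G             # request 64 GB of memory
#SBATCH --cpus-per-task=8
#SBATCH --time=12:00:00       # wall-clock limit

# Run the actual workload (train.py is a placeholder)
python3 train.py
```

Submit the script with `sbatch job.sh` and check its status with `squeue -u $USER`.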
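As an illustration of the shared-directory setup described above, the following sketch shows the usual Unix mechanics: a directory with the setgid bit set so new files inherit its group. The demo path is a stand-in for a directory under /common/users/shared, and the group name is a placeholder; see Making A Directory You Can Share for the site-specific steps.

```shell
#!/bin/sh
# Sketch: make a directory shareable by a group.
# /tmp/demo_shared stands in for /common/users/shared/<project>.
DIR=/tmp/demo_shared
mkdir -p "$DIR"
# chgrp myproject "$DIR"     # in real use: assign your project's group (placeholder name)
chmod 2770 "$DIR"            # setgid bit: files created inside inherit the directory's group
ls -ld "$DIR"
```

With mode 2770, group members can read and write, other users have no access, and new files automatically belong to the directory's group.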
Virtual Machines

LCSR runs a large number of virtual machines, both for its own internal use and for a variety of instructional and research needs. VMs can run CentOS, Ubuntu, or Windows. Typically faculty or staff request special VMs, and users who request VMs act as their own system administrators, although LCSR is willing to do updates for system software.

  • There are two 1 TB servers hosting VMs for instructional and student needs. These are commonly used in courses that have requirements not met by shared instructional systems. 
  • Most commonly LCSR staff will work with the instructor to configure appropriate software, and then will duplicate a master copy for each student or student team. There is a web interface that allows students to start, stop, and restore their VM to its initial configuration. Grad students may request personal VMs if needed for their own projects.
  • There are servers running VMware ESXi. These are used for a variety of services, such as web servers and administrative applications. LCSR infrastructure such as the Kerberos servers also run on these VMs. Faculty may request VMs for their use. They’re commonly used for special purpose servers, and for applications in support of research projects.
  • There is a WordPress system set up to host web sites for research projects. It has a web interface that will allow faculty to create and administer their own sites, using templates that default to Rutgers standards, but can be customized. For more information, see Computer Science Web Hosting. (You can also make web pages available by putting them in public_html on our shared computer systems. See Publishing web pages or CS Homepage Manager for more info)
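The public_html workflow mentioned above amounts to copying world-readable files into your web directory. A sketch follows; the demo path stands in for /common/web/$USER on the real systems.

```shell
#!/bin/sh
# Sketch: publish a page from your web directory.
# /tmp/webdemo stands in for /common/web/$USER on the shared systems.
WEBROOT=/tmp/webdemo
mkdir -p "$WEBROOT"
printf '<html><body>Hello</body></html>\n' > "$WEBROOT/index.html"
chmod 644 "$WEBROOT/index.html"   # the web server needs world-read access
```

If a page shows a permission error in the browser, a missing world-read bit on the file (or execute bit on the directory) is the usual cause.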

Hackerspace

Hackerspace has a collection of special-purpose devices intended for courses and student projects. These include:

  • Makerbot 3D printer
  • Systems on a disconnected network, for students to try security attacks
  • Arduinos and related hardware
  • Parts for building electronic equipment
  • Small robotics equipment, such as Lego Mindstorms and iRobot Create
  • ARDrone
  • VR equipment

We have a budget to buy special-purpose devices as needed for courses, and we try to keep on hand one or two devices that we think will be useful for future courses and projects.


Network

A large data network interconnects all of LCSR’s facilities. The wireline network contains 64 switches, 166 10 Gbps ports, 1316 1 Gbps ports, and 528 100 Mbps ports, on 222 VLANs. The core is a mix of 100 and 40 Gbps. This network is used by all systems in computer science, even if LCSR doesn’t run the systems.

LCSR, in cooperation with the University, supports a wireless network covering all areas of the department.

Outside connectivity is provided via the University’s access to the Internet, Internet2, and various special-purpose networks. LCSR also maintains an extensive security infrastructure over the network, including firewalls, custom intrusion detection software, and provides post-mortem analysis of compromised machines.

Directory Services and Authentication

LCSR maintains a set of 3 servers running Red Hat’s IPA, a combination of LDAP and Kerberos. All systems maintained by LCSR use it for authentication and user information. We encourage faculty to use these services for systems that they run: by using LCSR Kerberos and directory services, your systems will be able to access shared file systems, and you get an easy way to maintain the list of authorized users for a set of systems. See Integrating Your System with LCSR Kerberos.
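On the client side, integrating a system with a Kerberos realm like this typically involves a krb5.conf that names the realm and its KDCs. A hedged sketch follows; the realm and server names are placeholders, not the actual LCSR values — see Integrating Your System with LCSR Kerberos for the real ones.

```ini
# Placeholder krb5.conf fragment; realm and hostnames are illustrative only.
[libdefaults]
    default_realm = EXAMPLE.CS.EDU

[realms]
    EXAMPLE.CS.EDU = {
        kdc = ipa1.example.cs.edu
        kdc = ipa2.example.cs.edu
    }
```

Once configured, `kinit` obtains a ticket and `klist` shows it; NFS mounts of the shared file systems then authenticate with that ticket.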

In addition to these systems, LCSR has a variety of data about faculty, staff, and students in a set of Oracle databases used for administrative applications. Please contact us if you need this data for an application.

Facilities Outside Computer Science

OARC is a University group that provides high-performance computing. Computer science in general doesn’t have a conventional HPC cluster; we concentrate on GPUs and more specialized hardware. For large-scale HPC and data science, OARC is the best source. They have a large cluster, Amarel, intended as a “condo” cluster: grants buy nodes and are guaranteed at least as much capacity as they purchased, with the cost matched by the University. However, some capacity is available to those who haven’t bought into the system, particularly for course work and student use.

For more information see the OARC web site.

LCSR Services

In addition to running computing systems, LCSR provides support for faculty and students in computer science. Some commonly used services are

  • Planning. This includes helping to identify the best resources for both instructional and research use, and help in configuring systems for purchase. LCSR encourages faculty to talk with us about courses or projects that will have special requirements.
  • Hardware installation, network configuration, and support for hardware purchased by faculty and not administered by LCSR.
  • Support. LCSR provides help for users of facilities it runs. But it also provides assistance in setting up and solving problems on systems run by researchers.
  • Programming. LCSR can provide staff to do programming for research projects. We have both full-time and student programmers available.

For help with our systems, or if you need immediate assistance, visit the LCSR Operator at CoRE 235 or call 848-445-2443. Otherwise, see the CS HelpDesk. Don’t forget to include your NetID along with a description of your problem.

For planning and infrastructure support, contact the LCSR director.