This page describes the various storage offered by LCSR that you can use. You can also use the same technology to set up your own file servers, or to make files on your server or desktop available from other systems.
The underlying approach we use is NFS, a network file system supported on Unix and Linux, and to some extent also on MacOS and Windows. Specifically, we use NFS version 4 with Kerberos authentication. Version 4 with Kerberos allows incorporation of systems run by different system administrators, which may not always have consistent UIDs and GIDs, and may have varying levels of security.
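As a concrete sketch, an NFSv4 mount with Kerberos security looks like the following fstab fragment (the server name and both paths here are placeholders, not real LCSR hosts):

```
# Hypothetical /etc/fstab entry: an NFSv4 mount with Kerberos authentication.
# "fileserver.example.edu" and both paths are placeholders.
fileserver.example.edu:/export/home  /mnt/home  nfs4  sec=krb5  0  0
```

The `sec=krb5` option is what ties the mount to Kerberos; `sec=krb5i` and `sec=krb5p` additionally provide integrity and privacy protection, respectively.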
We also recommend setting up the /net virtual file system. That allows you to access files on any system that permits it, without having to ask your system administrator to add a mount. This provides the illusion that the file systems on all of our servers are part of a single, combined name space.
Systems run by LCSR already have the necessary software. For information on integrating your systems into our Kerberos infrastructure, see Setting up Kerberos and Related Services.
Storage available within Computer Science
A. Home Directories
Users’ home directories are located on the LCSR NetApp, and on several petabyte-scale Linux-based file servers. Users are given quotas on the NetApp in order to minimize their impact both on other users of the same qtree (NetApp’s version of a partition) and on the smooth functioning of the NetApp itself. All machines in a cluster share the same home directory storage, accessible at:
- /ilab/users for ILab machines
- /grad/users for Graduate machines
- /res/users for Research machines
- /fac/users for Faculty machines.
These directories are backed up and lost files are recoverable under our restore service level. (Essentially, we keep backups for up to two months and have a fall back position in case of hardware failure.)
Accounts are given home directory quota limits in order to share this very expensive resource. Should you find your initial quota too small, consider the other shared disks described below, such as /common/users, /filer/tmp and /freespace/local. If there are justifiable reasons, you can request additional space in an email to “email@example.com“; please include an estimate of how much more you need, how long you expect to need it, and a short justification of what you need it for. (Our guideline for maximum quotas is 5GB for faculty and 3GB for students.)
If you run out of space in your home directory, you can find out where your disk space went by opening a terminal (locally or via an ssh client) and typing:
du -a ~ | sort -rn | more
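The pipeline above lists every file and directory under your home directory, largest first. A hypothetical session on a scratch directory (the paths and file name here are made up purely for illustration):

```shell
# Create a scratch directory with one large file to demonstrate the pipeline.
mkdir -p /tmp/du-demo
dd if=/dev/zero of=/tmp/du-demo/big.dat bs=1024 count=200 2>/dev/null
# List entries sorted by size, largest first; on your real home
# directory you would use ~ instead of /tmp/du-demo.
du -a /tmp/du-demo | sort -rn | head -n 3
```

The biggest consumers of space appear at the top, so you can decide what to delete or move first.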
B. Shared Local Filesystems
- Freespace (available only on iLab/Grad/Fac desktops machines)
Most machines have some extra non-quota disk space set aside in a partition called /freespace/local. Note the following restrictions on these filesystems.
o Files are not backed up
o Files are not counted toward your disk quota
o Files in /freespace/local are automatically deleted, without prior warning, when machines are re-installed.
o All files may be removed, without prior warning, between the end of one semester and the start of the next, or whenever we run out of space.
- aurora local disk (available only on aurora.cs.rutgers.edu)
On aurora.cs.rutgers.edu only, there are additional temporary local disks. Please be a “good citizen” in your usage of these filesystems: don’t fill them up or leave large amounts of data there for long periods of time.
o Files are not counted toward your quota.
o Files are accessible via /aurora.cs/local1 …/aurora.cs/local9 and /aurora.cs/ssd
o These filesystems are not backed up.
o Files that have not been accessed recently will be removed, without prior warning, when we run out of space.
C. Computer Science Remote Filesystems
- Common Users
Beginning in mid-Summer 2018, we have new storage that can be used by all users.
o There is a 100 GB quota limit.
o It can be accessed at /common/users/your_netid.
o It is accessible from all CS Faculty, Research, Dresden, Grad and iLab clusters.
o This file system has a single daily backup; there are no snapshots.
- Filer Tmp
LCSR has a RAID-protected filer (so any single disk failure will not cause data loss) with about 9 TB of space available on it. The space is split into two separate temporary filesystems. Usage on the filer is a matter of public record. More details on the filer are contained in a warning file in the root of each filesystem, called README.this-filesystem-is-not-backed-up.
The filesystems are separated as follows:
o /filer/tmp1 (~6 TB) accessible from the faculty and research clusters and
o /filer/tmp2 (~3 TB) accessible from the faculty, dresden, research, grad and ilab clusters.
o Files in these filesystems are not counted towards your quota
o These filesystems are not backed up.
o This space is wiped clean at the beginning of every semester and summer.
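Before staging large data on these shared filesystems, it is worth checking how much space is left with `df -h` on the mount point. The sketch below runs it on /tmp only so that the example works anywhere; on a cluster machine you would point it at the filer paths listed above:

```shell
# On a cluster machine you would run:  df -h /filer/tmp1 /filer/tmp2
# /tmp is used here only so the example runs on any Linux system.
df -h /tmp
```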
- Project space on our NetApp
Larger amounts of space can be arranged for on our NetApp for individuals or groups needing aggressively backed up space. This space is generally subject to the same restore policy that our home directories on the NetApp are. (Essentially, we keep backups for up to two months and have a fall back position in case of hardware failure.) This space must be requested by the DCS faculty member working on the project. For details, see our page on project disk space.
- Petabyte filers
CBIM currently has two 1 PB file servers, but reasonable amounts of space are available for other CS users. File storage can be requested through firstname.lastname@example.org.
D. Cloud Storage
- Dropbox – No longer works as of Oct 15 2018, due to Dropbox’s new requirement of glibc 2.19 or higher and an ext4 file system.
Use Scarletmail Google Drive instead; it gives you unlimited space rather than 2GB.
- Google Drive.
The University has made an arrangement for unlimited disk space on Google Drive under the Scarlet Apps system. All CS Linux machines can be linked to Google Drive; please see Connecting Google Drive with CS Linux Machines. Note that this only works if you are using the Gnome graphical interface, either locally or via a Microsoft Remote Desktop client; it does not work if you access via ssh or an X2Go client. Alternatively, you can use rclone (command line) or rclone-browser (GUI), following https://rclone.org/drive/ for configuration.
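A minimal command-line sketch, assuming rclone is installed and that you have already created a remote named "gdrive" with `rclone config` (the remote name is our own choice for illustration, not a fixed value):

```shell
# Mount point under /tmp, as recommended for rclone mounts on these machines.
# ${USER:-demo} falls back to "demo" if $USER is unset.
MOUNT_POINT="/tmp/${USER:-demo}-gdrive"
mkdir -p "$MOUNT_POINT"
# Mount the Drive remote in the background; uncomment on a machine where
# rclone has been configured with a "gdrive" remote (hypothetical name):
# rclone mount gdrive: "$MOUNT_POINT" --daemon
echo "Mount point prepared at $MOUNT_POINT"
```

Unmount with `fusermount -u "$MOUNT_POINT"` when you are done.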
To start rclone-browser, run rclone-browser from a terminal.
Note: if you set up mount points, make sure you put them at /tmp/mountPointName, where mountPointName is a name of your choosing. Example: your_netid-gdrive
- Box
The University is in the process of arranging an unlimited-capacity contract with box.com. Access to box.com is via WebDAV. Performance is good for copying large files, but poor for operations that create lots of small files. When the arrangement is made public, we will install a script that automates setting up WebDAV. For systems you manage, install the davfs2 package; then you can use mount -t davfs https://dav.box.com. Note that Box will be disabling WebDAV access on Jan 31, 2019. If you must use Box.com, please use rclone (command line) or rclone-browser (GUI), following https://rclone.org/box/ for configuration.
To start rclone-browser, run rclone-browser from a terminal.
Note: if you set up mount points, make sure you put them at /tmp/mountPointName, where mountPointName is a name of your choosing. Example: your_netid-box
- If there are other cloud services you want to access, contact email@example.com. Tools are available for many services, but many of them aren’t things we’d want to make generally available.
- If you are using VMs or storage in Amazon or similar environments, we’re willing to look at setting up links from systems here to them. There are Linux tools to access Amazon file systems and their competitors.
E. Other Storage
If you maintain your own disk space that you would like to access from our machines, or want to access our file systems from home computers, here are a few options.
LCSR has installed sshfs on our primary clusters. You can also install it on systems you run, as well as on home systems. sshfs allows you to mount file systems to which you have access on any computer where you can log in via ssh. Performance is at least as good as an NFS mount, and often better. In most cases this is a better option than WebDAV. For details, see accessing files remotely.
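A sketch of a typical sshfs session; the hostname and NetID below are placeholders, so substitute any machine you can reach over ssh:

```shell
# Prepare a local mount point in your home directory.
MOUNT_POINT="$HOME/cs-home"
mkdir -p "$MOUNT_POINT"
# Mount your remote home directory over ssh; uncomment to actually run.
# "your_netid" and the hostname are placeholders:
# sshfs your_netid@somehost.cs.rutgers.edu: "$MOUNT_POINT"
# ...work with the files under "$MOUNT_POINT" as if they were local...
# fusermount -u "$MOUNT_POINT"   # unmount when finished
echo "Local mount point: $MOUNT_POINT"
```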
On the faculty and research machines, we have enabled /net automounting. So if your hostname is “myhost”, and you NFS-export the filesystem “/my/directory” to the faculty or research machines, you will be able to automount your files at “/net/myhost/my/directory”. See sharing files for more details.
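Using the “myhost” example above, the export on your side is a one-line /etc/exports entry. The option list here is just an illustration; choose options appropriate to your setup:

```
# /etc/exports on "myhost": export /my/directory to the Rutgers CS networks
/my/directory  *.rutgers.edu(ro,sec=krb5)

# After running `exportfs -ra` on "myhost", a faculty or research
# machine can then automount it with:
#   ls /net/myhost/my/directory
```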
- See our how to page to learn more about available File Sharing options.