Limitations Enforced on CS Linux Machines

Last modified: July 10, 2022 – Added Sessions and GPU limits and removed old methods. Sep 3, 2022 – Added slurm exception.

NOTE: Please consult this page before running big or long jobs. We may change these limits as we see fit from time to time. The following limitations are enforced by default on the specified CS Linux machines. To learn how to work around them, please consult this page before running a big job. If you are using the Scheduler for GPU Jobs (slurm), the CPU and memory limits described here do not apply to you. Here is a list of the limitations enforced on CS machines.
1. Lifetime of accounts and files
When users leave the University (or Computer Science), their accounts are closed and their files are archived. Archived files are deleted after a year, except for faculty files. Shared directories are deleted or archived based on the user who owns them.
Sometimes users have a continuing association with the Department even after leaving; such accounts may be continued as guest or retiree accounts. Any faculty member can sponsor a guest, but retiree sponsorships are handled by the Computer Science Department's Human Resources.
2. Sessions and CPU limits on ALL CS machines
Many users of X2GO, Remote Desktop, screen, tmux, terminator, ssh, nohup, etc. would like to resume their session at a later time after disconnecting, or keep their programs running after they log out. There are also many users who would like to keep big jobs running for days. At the same time, due to the educational nature of our environment, we have many runaway programs that were not properly terminated and waste resources.
To efficiently manage resources on CS iLab machines, we have implemented a simple way to manage user sessions. With just a single command, users can control CPU utilization and how long a session should stay running without risk of being terminated by the system. The command is
keep-job N, where N is the number of hours you want your job to continue.
To control your session and CPU usage time, open a terminal window and type:
keep-job 30
where 30 means you get 30 hours to get back to your disconnected session. Note: to get to a terminal window in JupyterHub, you need to open a notebook of type Terminal.
Note on CPU hours:
- CPU hours means CPU usage time. A job gets a minimum of 24 hours of CPU usage time; if you set N below 24, the system won't terminate the job until its CPU usage exceeds 24 hours.
- If you are using 4 CPU cores, your process will run for only 6 wall-clock hours before reaching the 24-hour CPU limit. After 24 CPU hours, your processes will be terminated unless you renew your time limit. Once you specify a limit, the system won't interrupt any of your jobs until that limit has expired. The clock starts when you invoke the keep-job command. If you need more time, just rerun keep-job before the time expires.
- To see whether you have a job that is nearing your CPU-hours limit, type sessions -l. This command shows the total CPU time for each current session.
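The arithmetic above (cores in use × wall-clock hours = CPU hours, with a 24-hour floor) can be sketched in Python. This is an illustrative helper only, not a command that exists on our machines:

```python
# Illustrative sketch of the CPU-hour budget described above:
# a job burning N cores uses N CPU-hours per wall-clock hour,
# and the budget is never below the 24-hour minimum.

def wallclock_hours_remaining(requested_hours: float, cores_in_use: int) -> float:
    """Wall-clock hours a job can run before exhausting its CPU-hour budget."""
    budget = max(requested_hours, 24.0)  # 24-hour minimum from the policy above
    return budget / cores_in_use

# A 4-core job with the default budget stops after ~6 wall-clock hours:
print(wallclock_hours_remaining(0, 4))   # 6.0
# keep-job 48 on a 2-core job allows ~24 wall-clock hours:
print(wallclock_hours_remaining(48, 2))  # 24.0
```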
3. MEMORY LIMIT: On iLab machines
The memory limits below are preset on the system and can't be adjusted by end users. If your memory needs are high, make sure you pick a machine with the largest amount of memory available to you. When you run low on memory, the Linux OOM killer will terminate your job automatically. We have a script that watches the log and notifies you when this happens, so you are aware of the issue with your code. Here are the current memory limits:
- On ilab*.cs.rutgers.edu, the maximum memory per user is 80GB. The ilab*.cs.rutgers.edu machines use the tuned profile virtual-host to reduce swappiness.
- On data*.cs.rutgers.edu and jupyter.cs.rutgers.edu (hadoop cluster), the maximum memory per user is 32GB. The data*.cs.rutgers.edu machines use the tuned profile virtual-guest to reduce swappiness.
- On aurora.cs.rutgers.edu, the maximum memory per user is 480GB.
- On other servers and desktops, the maximum memory per user is 50% of physical memory. Desktops have the default tuned profile, balanced, which is arguably slightly better for interactive desktop use.
Note: ilab* and data* machines both have swap space on solid-state drives.
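For quick reference, the per-user caps above can be encoded in a small Python lookup. The hostname patterns and figures are copied from the list above; the helper itself is purely illustrative:

```python
import fnmatch

# Per-user memory caps in GB, taken from the list above (illustrative only).
MEMORY_CAPS_GB = {
    "ilab*.cs.rutgers.edu": 80,
    "data*.cs.rutgers.edu": 32,
    "jupyter.cs.rutgers.edu": 32,
    "aurora.cs.rutgers.edu": 480,
}

def per_user_cap_gb(hostname: str, physical_gb: float) -> float:
    """Return the per-user memory cap for a host; other servers and
    desktops get 50% of physical memory."""
    for pattern, cap in MEMORY_CAPS_GB.items():
        if fnmatch.fnmatch(hostname, pattern):
            return cap
    return physical_gb / 2

print(per_user_cap_gb("ilab2.cs.rutgers.edu", 768))      # 80
print(per_user_cap_gb("mydesktop.cs.rutgers.edu", 64))   # 32.0
```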
The sessions command will show the amount of memory you're using. If you have more than one process or thread, this may be less than the sum of the usage of each process, because memory is often shared between processes.
If you want to see the amount of memory used by each process, a reasonable approximation is ps ux, in the RSS column (values are in KB). Note, however, that RSS shows only what is resident in memory: if part of your process has been swapped out, it won't be included. On most of our systems it's unusual for active jobs to be swapped out.
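A rough way to total those RSS figures for your own processes is to shell out to ps ux and sum the column. This is a sketch; as noted above, it can overcount memory that is shared between processes:

```python
import subprocess

# Sum the RSS column (KB) of `ps ux` for an approximate total of the
# resident memory used by your processes. Shared pages are counted once
# per process, so this can overestimate.
out = subprocess.run(["ps", "ux"], capture_output=True, text=True, check=True)
lines = out.stdout.strip().splitlines()
rss_col = lines[0].split().index("RSS")          # locate the RSS column
total_kb = sum(int(line.split()[rss_col]) for line in lines[1:])
print(f"approx. resident memory: {total_kb / 1024:.1f} MB")
```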
Memory limits are set in different ways by different tools. Generally it's a parameter on the command line or a configuration option within the notebook; see the documentation for the tool you're using.
4. GPU LIMITS: On iLab Server cluster
Most of our GPU machines are now under the slurm scheduler, which has its own policy. On a non-slurm-managed machine with 8 GPUs, the maximum number of GPUs you can use is 4.
nvidia-smi will show the list of 4 GPUs randomly assigned to you at login.
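One common way such a per-session assignment is exposed to programs is the CUDA_VISIBLE_DEVICES environment variable. This is an assumption here (the iLab machines may enforce the assignment differently); the sketch below sets the variable itself purely for illustration:

```python
import os

# Hypothetical example: on a real machine the login process would already
# have set CUDA_VISIBLE_DEVICES; we set a sample value here for illustration.
os.environ["CUDA_VISIBLE_DEVICES"] = os.environ.get("CUDA_VISIBLE_DEVICES", "0,2,5,7")

# CUDA-aware programs see only the listed GPUs, renumbered from 0.
assigned = [int(g) for g in os.environ["CUDA_VISIBLE_DEVICES"].split(",") if g]
print(f"{len(assigned)} GPUs assigned: {assigned}")
```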
On machines with Nvidia RTX A4000 cards, users are advised to turn on TF32 to take advantage of the new GPUs. The RTX A4000 provides two FP32 primary data paths, doubling the peak FP32 operations.
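In PyTorch, for example, TF32 can be enabled with two backend flags. This is a configuration sketch; defaults vary by PyTorch version, so consult your framework's documentation for the equivalent setting:

```python
import torch

# Allow TF32 on Ampere-class GPUs such as the RTX A4000.
torch.backends.cuda.matmul.allow_tf32 = True  # matrix multiplications may use TF32
torch.backends.cudnn.allow_tf32 = True        # cuDNN convolutions may use TF32
```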
Best GPU utilization: we recommend that GPU users use the Job Scheduler to avoid the memory and GPU limits described in #3 and #4 above.
5. Storage quota Limit
Every home directory under /common/home has a disk quota set. However, there are other disk spaces users can use for their work, with much bigger quotas, along with no-quota storage. For details on these storage options and their limits, see the Storage and Technology options page.
6. Blacklisting System: On ALL CS machines
When our machines detect abnormal activity, they may put the remote machine on a blacklist, which blocks any connection attempt from listed machines. If you have issues connecting to CS machines, check whether your IP is blocked and how to get around the block.
7. Logging in with SSH Public Private Key
Public/private key login is a convenience with security implications: the convenience applies to both users and attackers. For security reasons, we don't recommend it.
As of fall 2017, we moved to a Kerberized Network File System, which requires a Kerberos ticket to access your home directory and other network storage. Logging in using public/private keys will not get you a Kerberos credential, so file access won't work.
We do, however, allow users to log in between our machines without an additional password. This can be very useful if you need to access research machines that are on private IPs and not accessible from outside Rutgers via SSH. To avoid an additional login, first log in to an iLab machine; once logged in, you can ssh to other research machines without an additional password.
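If you connect this way often, OpenSSH's ProxyJump option (OpenSSH 7.3 and later) can route the connection through an iLab host in one step. This is a sketch for ~/.ssh/config; the research hostname and NetID are placeholders, and you will still authenticate to the iLab machine as usual:

```
# ~/.ssh/config (hostnames and NetID are placeholders)
Host research-box
    HostName research-box.cs.rutgers.edu   # hypothetical private-IP research machine
    User netid
    ProxyJump netid@ilab.cs.rutgers.edu    # first hop through an iLab machine
```

With this in place, `ssh research-box` performs both hops; the jump only saves the manual two-step login.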
Additionally, you can set up Kerberos authentication on your home machine if you want to avoid multiple logins.