Data size and quotas#
This section explains how to find the actual space and inode usage of your /home/ and /project allocations on Grex. We limit both the size of the data and the number of files that can be stored on these filesystems. The table below lists the default storage quotas on Grex. Larger quotas can be obtained on /project via the UM local RAC process.
| File system | Type | Total space | Bulk quota | Files quota |
| --- | --- | --- | --- | --- |
| /home | NFSv4/RDMA | 15 TB | 100 GB / user | 0.5M / user |
| /project | Lustre | 2 PB | 5-20 TB / group | 1M / user, 2M / group |
To check where your current usage stands against these limits, use the POSIX quota command or its Lustre analog, lfs quota. A convenient wrapper, diskusage_report, summarizes usage and quotas across all the available filesystems.
NFS quota#
The /home/ filesystem is served by NFSv4 and thus supports the standard POSIX quota command. For the current user, it is just:
quota
or
quota -s
The command will produce output like the following (note the -s flag, which makes the units human-readable):
[someuser@yak ~]$ quota -s
Disk quotas for user someuser (uid 12345):
Filesystem space quota limit grace files quota limit grace
192.168.x.y:/ 249M 100G 105G 4953 500k 1000k
The output is a self-explanatory table. For each of the two quotas (space and files) there are two values: the soft “quota” and the hard “limit”. If you exceed the soft quota, the used value (space or files) is marked with a star *, and a grace countdown is shown. Running out the grace period, or exceeding the hard limit, prevents you from writing new data or creating new files on the filesystem. If you are over quota on /home, it is time to do some cleaning up there, or to migrate the data-heavy items to /project where they belong.
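To see what takes up the space, or which directories contribute most files, standard Unix tools are sufficient. A minimal sketch (the paths are generic; adjust them to your own directories):

# Summarize space usage of the top-level items in your home directory
du -sh ~/* ~/.??* 2>/dev/null | sort -h

# Count the files under your home directory (inode usage)
find ~ -type f | wc -l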
Lustre quota#
There are two Lustre storage appliances on Grex: /project and /global/scratch/. Since 2022, the main project filesystem on Grex has been /project. The previous filesystem, /global/scratch/, is, as of now, disabled and not available to users.
On a Lustre filesystem, three types of quota are possible: per user, across the whole filesystem; per group, across the whole filesystem; and a directory quota that is set per directory for a group.
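The three types correspond to different flags of the lfs quota command. A generic sketch (the placeholders in angle brackets stand for an actual user name, group name, numeric project ID, and mount point):

lfs quota -u <user> <filesystem>     # per-user quota
lfs quota -g <group> <filesystem>    # per-group quota
lfs quota -p <projid> <directory>    # per-directory (project) quota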
/project/ (current)#
This filesystem has a hierarchical directory structure similar to the Alliance/ComputeCanada HPC systems. On Grex, it is as follows:
- /project/Project-GID/{user1, user2, user3}
Where Project-GID is the number (or identifier) of the PI’s default RAPI group in CCDB, and user1..3 are the users on Grex, including the PI.
It is inconvenient to navigate using numerical Project-GID values in paths, so each user’s /home/$USER/projects directory contains symbolic links that point to their /project directories. A user can belong to more than one research group and thus can have more than one project link. There is also a system of symbolic links on the filesystem in the form /project/Faculty/def-PIname/.
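To see which numeric /project directory such a link resolves to, readlink can be used (a sketch; def-PIname is a placeholder for an actual group link name):

readlink -f /home/$USER/projects/def-PIname

The directory (project) quota for a given Project-GID can then be queried with lfs quota and the -p flag, as in the following example: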
[someuser@yak ~]$ lfs quota -h -p 123456 /project/123456
Disk quotas for prj 123456 (pid 123456):
Filesystem used quota limit grace files quota limit grace
/project/123456
150.6G 4.883T 5.371T - 208654 2000000 2200000 -
In addition to the directory quota, each user has their own quota, presently for inodes (the number of files and directories), across the entire filesystem.
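To check your own per-user usage and inode quota on /project, and to count the files under a directory if you need to bring inode usage down, the following sketch can be used (123456 stands for an actual Project-GID, as in the example above):

lfs quota -h -u $USER /project

# Count files under a project directory (can be slow on large trees)
find /project/123456 -type f | wc -l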
/global/scratch/ (old)#
The /global/scratch/ filesystem is actually a link to a Lustre filesystem called /sbb/. We have retained the old name for compatibility with the old Lustre filesystem that was used on Grex between 2011 and 2017. Lustre provides the lfs quota sub-command, which requires the name of the filesystem to be specified. So, for the current user, the command to get the current usage (space and number of files), in human-readable units, would be as follows:
lfs quota -h -u $USER /sbb
With the output:
[someuser@yak ~]$ lfs quota -h -u $USER /sbb
Disk quotas for usr someuser (uid 12345):
Filesystem used quota limit grace files quota limit grace
/sbb 622G 2.644T 3.653T - 5070447 6000000 7000000 -
Presently, group and directory quotas on /global/scratch are not enforced.

If you are over quota on the Lustre /global/scratch filesystem, then, just as for NFS, a star * will mark the value exceeding the limit and the grace countdown will be active.
A wrapper for quota#
To make this easier, we provide a custom script with the same name as on the Alliance (Compute Canada) clusters, diskusage_report, which reports usage and quotas for both /home and /project (space and number of files, as usage/quota), as in the following example:
[someuser@bison ~]$ diskusage_report
------------------------------------------------------------------------
Description (FS) Space (U/Q) # of files (U/Q)
------------------------------------------------------------------------
/home (someuser) 254M/104G 4953/500k
/project (def-professor) 131G/2147G 992k/1000k
------------------------------------------------------------------------
/project/6543210 = /home/someuser/projects/def-professor
------------------------------------------------------------------------
The diskusage_report command can also be invoked with the argument --home or --project to get the quota for the corresponding filesystem.