There are multiple filesystems mounted on each of Pawsey's supercomputers. Each of these filesystems is designed for particular use cases. This page provides a detailed description of these filesystems.
...
Overview
The following filesystems are available from one or more Pawsey supercomputing systems:
- /home - which should be used to store software configuration files that cannot be easily located elsewhere.
- /software - a Lustre filesystem which should contain both Pawsey and researcher software installations and Slurm batch scripts.
- /scratch - a Lustre filesystem which should contain working data in use by jobs that are actively queued and running on the supercomputer.
- /astro - a Lustre filesystem which supports operational radio astronomy observatory work.
These filesystems can be viewed using the df command from the login nodes:
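For example, to report just the mount points named above (a minimal sketch; the -h flag prints sizes in human-readable units):

$ df -h /home /software /scratch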
...
Apart from /home, all are Lustre distributed filesystems. Lustre is an open-source, high-performance parallel filesystem optimised for high throughput.
...
...
While Pawsey is migrating to Setonix, some existing filesystems are still in use by Garrawarla:
- The existing /astro filesystem, which will be replaced by the new /scratch filesystem on Setonix.
The filesystems are different in many ways and are designed to facilitate different activities in supercomputing. The intended usage for each of them is explained below. Use outside of these purposes may cause poor performance for a particular activity as well as create detrimental impacts to other users.
...
Home filesystem
The home filesystem should be used to store software configuration files. It is a Network File System (NFS). Each user has a default login directory in the /home filesystem with a quota of 1 GB and 10,000 individual files.
...
...
Current usage of the /home filesystem can be viewed by executing the quota command:
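A minimal sketch (the -s flag reports usage in human-readable units; the exact output columns vary by system):

$ quota -s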
...
Due to its small quota limit and low performance, the /home filesystem is not suitable for launching or storing production work. Files such as software installations and Slurm batch scripts should be stored on the /software filesystem. Working data, such as job input and output, should be kept on the /scratch filesystem.
What to do if you exceeded your quota
The first thing to do is to identify the directories that contain a large number of files, or the files that are too large, and are consuming your quota. Then delete them.
Identifying subdirectories with a large number of files
You can use the following command, which finds subdirectories recursively and lists them in descending order of file count. Execute it from your $HOME directory:
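The original command is not reproduced here; a minimal sketch that produces such a ranking, writing to the output file mentioned below, is:

$ find . -type d -exec sh -c 'echo "$(find "$1" -type f | wc -l) $1"' _ {} \; | sort -rn > $MYSCRATCH/homeSubdirectoriesRanked.out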
...
Then you can check the file $MYSCRATCH/homeSubdirectoriesRanked.out and decide which subdirectories to remove. Note that the output is written to $MYSCRATCH because you may not have enough quota to write in $HOME.
Identifying large files
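The original command is likewise not reproduced here; a sketch that lists the ten largest files and directories under $HOME (assuming GNU sort, whose -h flag sorts human-readable sizes) is:

$ du -ah $HOME | sort -rh | head -n 10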
Then you can decide which files to remove. Note that the last filter (head -n 10) could also have been used in the previous command to limit the amount of output, or the previous command's final filters could be used here to save the output into a file for careful checking later.
Hidden files
...
Hidden files and directories (those whose names begin with a dot) in $HOME also count towards your quota and are easy to overlook; list them with ls -la. A common culprit is the .vscode-server directory created by Visual Studio Code.
The recommended solution is to move such a directory to $MYSOFTWARE and generate a symbolic link to it in $HOME:
...
$ mv .vscode-server $MYSOFTWARE/      # if .vscode-server already exists in $HOME
$ mkdir $MYSOFTWARE/.vscode-server    # if the directory does not exist yet
$ cd $HOME
$ ln -s $MYSOFTWARE/.vscode-server    # generate the symbolic link; make sure you are in $HOME

Further explanation about quotas, permissions and copy (cp) vs move (mv) of files and directories is given in the sections below.
Software filesystem
The /software
filesystem is a Lustre file system with much higher throughput than /home
. It is intended for software installations and Slurm batch script templates. Each project has an associated directory on the filesystem whose path is /software/projects/<project>
. Within a project directory, each project member has his or her own directory whose full path, /software/projects/<project>/<username>
, is contained in the MYSOFTWARE
environment variable.
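You can confirm your own path (the output below uses the placeholder project and user names from this page):

$ echo $MYSOFTWARE
/software/projects/projectgroup/username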
There are two types of quota in place on /software
:
- A project-wide quota of 256 GB on the amount of used disk space, and
- a per-user quota of 100,000 individual files. Note that files belonging to different projects count towards the same user quota; in other words, a user can have at most 100,000 files across all the projects they are involved in.
...
Note: The software filesystem is intended for storage of software installations and Slurm batch scripts for the lifetime of the project.
All members of a project have read and write access to the /software/projects/<project>
directory, so it can be used for sharing software installations and batch script templates within a project. Your allocation of space on /software
exists for the duration of the project and is not subject to any automatic purging.
Quotas on disk space usage are managed per project group. If any member of the project exceeds the shared project quota on /software, it will affect the whole project: no member will be able to save data, and you may see a 'quota exceeded' message.
The project-wide quota consumption can be queried using the following command:
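The original listing is not reproduced here; a minimal sketch uses the PAWSEY_PROJECT environment variable, which holds your project name (the same pattern as for /scratch below):

$ lfs quota -g $PAWSEY_PROJECT -h /software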
whereas the per-user quota usage can be queried in the following way:
...
...
$ lfs quota -u $USER -h /software
Disk quotas for usr user1234 (uid xxxx):
     Filesystem    used   quota   limit   grace   files   quota   limit   grace
      /software  14.16G      0k      0k       -   49053       0  100000       -
Scratch filesystem
The scratch filesystem should be used for working data, that is, input and output files actively used by jobs queued or running on the supercomputer.
Each project has a directory /scratch/<project>
in which each project member has a subdirectory /scratch/<project>/<username>
.
/scratch is a Lustre filesystem, and the location of your directory on it is available in the environment variable $MYSCRATCH:
$ echo $MYSCRATCH
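The output has the form (using the placeholder names from this page):

/scratch/projectgroup/username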
The scratch file system is not intended for long-term storage, is not backed up and is purged on a regular basis. If you wish to retain files, move them to the Acacia object storage.
...
Warning: Files which have not been accessed for the purge period of 30 days will be deleted automatically and WILL BE LOST. See Filesystem Policies.
The /scratch
filesystem has the highest performance of the available filesystems, and allows jobs to temporarily use large amounts of storage while running. However, to maintain high performance for all users, there are limits of 2PB per project and 2 million files per user.
The project usage of the /scratch filesystem can be checked with the following command, which uses the PAWSEY_PROJECT environment variable holding your project name:
$ lfs quota -g $PAWSEY_PROJECT -h /scratch
To ensure that /scratch remains available to support jobs actively running on the system, it is critical to move files off the filesystem to more permanent storage as workflows complete. The copy partition on Setonix can be used for these data transfer jobs.
Leaving files to be removed by the 30-day purge policy places an unnecessary load on the filesystem as the system is scanned for these files, and causes less capacity to be available for other users.
...
Tip: To minimise load on the filesystem when removing many files, refer to Deleting large numbers of files.
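As one illustration of the kind of approach that page covers, Lustre ships a munlink utility that removes files without the metadata overhead of rm; a sketch (the directory name here is hypothetical) is:

$ find ./finished_workflow -type f -exec munlink {} \;
$ find ./finished_workflow -depth -type d -exec rmdir {} \;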
...
Reference datasets
Reference data sets are static data required by software for calibrations or testing or as widely used input data. Reference data sets that are used by several project groups will be provided on /scratch
by Pawsey to avoid multiple copies existing. These data sets will be contained in subdirectories of /scratch/references.
Examples include:
/scratch/references/askap
/scratch/references/mwa
/scratch/references/blastdb_update
These reference datasets will be exempt from the /scratch
purge policy.
The specific bioinformatics reference datasets available are:
- 10x single cell gene expression
- 10x spatial gene expression
- Alphafold
- Arabidopsis thaliana
- Blast+ database (regularly updated)
- Diamond
- Human Broad bundle hg19, Broad bundle hg38, and GRCh38
- Interproscan-5.56-89.0
- Metagenome_atlas_2.9
- Mouse Broad bundle mm10, NCBI MM10, UCSC GRCm38, RNA M25
- Qiime
- Sarek
- VEP
For more information, see the Life Science and Bioinformatics page.
If you would like to request the addition of a new reference dataset, please email the Pawsey Helpdesk at help@pawsey.org.au.
File permissions and quota
The effect of file permissions and ownership on storage quotas varies depending on which filesystem the data is located on. The default behaviour can be summarised as follows:
- Files created in a user's /home are accessible only to that user.
- Files created in a user's /software, /scratch or /astro directories are accessible only to that user and to members of the same project.
For more detail on these filesystems refer to Filesystem Policies. The filesystem quotas are summarised in table 1:
Table 1. Pawsey filesystems: capacity, file limit and duration

Filesystem | Space quota | File count quota | Duration
---|---|---|---
/home | 1 GB per user | 10,000 files per user | Life of the project
/software | 256 GB per project | 100,000 files per user | Life of the project (no automatic purging)
/scratch | 2 PB per project | 2 million files per user | Files not accessed for 30 days are purged
The default group membership for files and directories that are created in /home
is the user's primary group, which is the same as their user ID. Files and directories that are created in any of the Lustre filesystems are associated by default with the user's project ID.
For the /software
filesystem, Pawsey uses a file's group ownership to calculate its effect on storage quotas. To make use of the group quota for a project, files must be associated with the group corresponding to that project ID.
A user is always a member of their own primary group (which has the same name as their username) and can also be a member of more than one project. This is important to know because files whose group is a username rather than a project group are limited to a default quota of 1 GB and at most 100 files.
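A sketch for checking usage against this personal group (the same lfs quota pattern as above, with your username as the group):

$ lfs quota -g $USER -h /software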
...
Warning: If you encounter a write error, compiler error, or file transfer error on the /software filesystem, check that the relevant project or user quota has not been exceeded.
Tip: You should proactively and regularly monitor both file count and quota usage across the filesystems. This practice will reduce your likelihood of hitting the quota limits; whenever a quota is reached, no files can be written until usage is brought back below it.
File permissions are also important to consider. Here are the default permissions of a file named myscript.sh
that was created in a user's home directory:
...
$ ls -ld myscript.sh
-rwxr-xr-x 1 username username 2 Nov 30 16:33 myscript.sh
Recall that the first character of the listing indicates the entry type (- for a regular file, d for a directory, l for a link), and the remaining nine characters are the permissions, broken down into three groups of three:

rwx | The first set of permissions determines what actions can be performed by the owner of the file. In this case username is the owner, and is allowed to read (r), write (w), and execute the file (x). |
r-x | The second set of permissions determines what actions can be performed by other users who belong to the same group as the file. The group here is the primary or default group of the file's owner, which is username. Group members are allowed to read and execute. |
r-x | The final set of permissions applies to all other users. While the permissions are set to read and execute, the top-level user directory (/home/username) is locked to just the user, so no others are able to read, write, or execute files in another user's home directory. |
Now look at the difference for a file created in /software
:
...
$ ls -ld swscript.sh
-rwxr-xr-x 1 username projectgroup 2 Nov 30 16:51 swscript.sh
The file permissions are the same as before, but with different group ownership: projectgroup
. Other members of projectgroup
will be able to read and execute this script. Similar to myscript.sh
, the permissions for "all other users" are set to read and execute (r-x
), but the top-level group directory ( /software/projects/projectgroup
) is locked to just the group so that others who are not in the group cannot access any files within it:
...
$ ls -ld /software/projects/projectgroup
drwxrws--- 46 root projectgroup 4096 Nov 29 09:06 /software/projects/projectgroup
Note there is a new flag in the group permissions, the SETGID flag (s). With the SETGID flag set on the directory, whenever a user creates a new file under /software/projects/projectgroup, the group ownership is set to the group owner of the directory, rather than to the primary group of the user who created it. So, in the example above, any file created under /software/projects/projectgroup will have a group ownership of projectgroup instead of username.
The SETGID flag on your project's group directory is set when Pawsey staff first set up the new project so there's no need for users to modify this. However, there are situations where a user might accidentally modify permissions or ownership when moving files. For example, if a user moves a file from /home
to /software
(instead of copying it) the group ownership is not changed:
...
$ touch foo.txt
$ ls -ld foo.txt
-rw-r--r-- 1 username username 0 Nov 29 17:02 foo.txt
$ mv foo.txt $MYSOFTWARE
$ ls -ld $MYSOFTWARE/foo.txt
-rw-r--r-- 1 username username 0 Nov 29 17:03 /software/projects/projectgroup/username/foo.txt
In the listing above, foo.txt was created in a directory on /home. As a result, the group ownership is set to the user's group (username). The file was then moved to the /software filesystem, and you can see that the original permissions and group remained. The file foo.txt will count against the user's quota instead of the project's, even though it is located in /software.
The solution is to use the copy command (cp) instead of move (mv) when transferring files from /home to /software. This is because cp actually creates a new file, which inherits its group from the top-level group directory via the SETGID flag:
...
$ touch bar.txt
$ ls -ld bar.txt
-rw-r--r-- 1 username username 0 Nov 29 17:05 bar.txt
$ cp bar.txt $MYSOFTWARE
$ ls -ld $MYSOFTWARE/bar.txt
-rw-r--r-- 1 username projectgroup 0 Nov 29 17:06 /software/projects/projectgroup/username/bar.txt
When transferring files between filesystems, you will see the above behaviour and require this workaround. When using cp
, do not use the -a
or -p
flags. If you want to preserve timestamps, use cp --preserve=timestamps
.
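For instance, to copy a script while keeping its modification time (paths as in the examples above):

$ cp --preserve=timestamps myscript.sh $MYSOFTWARE/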
File transfer programs like WinSCP can also cause issues with permissions and groups. You should consult the documentation of your preferred transfer program. rsync
users should avoid using the -a
and -p
flags; these flags will preserve permissions of the source files, which may conflict with the default behaviour on Pawsey systems. Some additional information about file transfer programs is at: Transferring Files in/out Pawsey Filesystems.
Pawsey provides a tool that lets you fix file and directory permissions on /software
. The fix.group.permission.sh
script is available in the pawseytools module, which is loaded by default. To use it, enter the script name followed by your group name. For example, if your project ID is projectgroup
you would enter this:
$ fix.group.permission.sh projectgroup
...
There is a manual way of doing this in your own area using the find
command. Replace projectgroup
with your project ID and username
with your user name.
...
width | 900px |
---|
Code Block | ||||||
---|---|---|---|---|---|---|
| ||||||
$ find /software/projects/projectgroup/username ! -group projectgroup -exec chgrp projectgroup \{} \;
$ find /software/projects/projectgroup/username -type d ! -perm /g=s -exec chmod g+s \{} \; |
$ find /scratch/projectgroup/username ! -group projectgroup -exec chgrp projectgroup \{} \;
$ find /scratch/projectgroup/username -type d ! -perm /g=s -exec chmod g+s \{} \;
The extra tests in these find commands speed up the process when there are many files, by only changing the files and directories that actually need to be changed.
Astronomy filesystem
The Astronomy filesystem /astro is a Lustre filesystem provided for the scratch space needs of the MWA group, who perform computations on the Garrawarla cluster.
It is an SGI/HPE-provided cluster of nodes backed by DDN storage. The system currently contains 2 metadata servers (MDS) with 2 metadata targets (MDT), and 4 object store servers (OSS) with 48 object store targets (OST) for storing data. This gives approximately 2.7 PB of usable storage. It has a possible read and write speed of over 10 GB/s, and Pawsey staff have easily achieved 7-8 GB/s using only the four copyq nodes to transfer data with dcp. The single-threaded I/O speed is greater than on /scratch due to the updated version of Lustre.
The expandability of Lustre means that the filesystem can be grown, without downtime, by adding more OSSs and the OST disks behind them, in groups of two (for high availability).
Location
The Astronomy filesystem is mounted on all Garrawarla nodes and Setonix data mover nodes as /astro. The top-level directory contains a directory for each project area:
...
$ ls -l /astro/
total 20
drwxrws--- 36 root mwaeor 4096 Oct 28 09:52 mwaeor
drwxrws--- 12 root mwaops 4096 Sep 30 16:31 mwaops
drwxrws--- 47 root mwasci 4096 Dec 11 16:56 mwasci
drwxrwsr-x 57 root mwavcs 4096 Dec 16 16:46 mwavcs
drwxrws--- 27 root pawsey0001 4096 Dec 17 12:22 pawsey0001
The pawsey0001 directory is for Pawsey testing of the system and can be set up in different ways as needed. It will not often be used.
Quotas
At the time of writing MWA have requested that mwaeor, mwavcs, mwaops and mwasci are assigned 370 TB, 580 TB, 20 TB and 600 TB respectively.
To check the current quota, use the following command, replacing projectcode with your project code:
$ lfs quota -g projectcode /astro
Usage
To check usage of the entire filesystem, use the lfs df command. It gives a breakdown by MDT and OST, with a summary at the bottom.
...
$ lfs df -h /astro/
UUID bytes Used Available Use% Mounted on
astrofs-MDT0000_UUID 542.1G 26.0G 479.4G 6% /astro[MDT:0]
astrofs-MDT0001_UUID 542.1G 32.7G 472.7G 7% /astro[MDT:1]
astrofs-OST0000_UUID 57.7T 26.6T 28.1T 49% /astro[OST:0]
astrofs-OST0001_UUID 57.7T 26.4T 28.4T 49% /astro[OST:1]
astrofs-OST0002_UUID 57.7T 25.9T 28.8T 48% /astro[OST:2]
astrofs-OST0003_UUID 57.7T 28.6T 26.2T 53% /astro[OST:3]
astrofs-OST0004_UUID 57.7T 26.9T 27.9T 50% /astro[OST:4]
astrofs-OST0005_UUID 57.7T 26.6T 28.2T 49% /astro[OST:5]
astrofs-OST0006_UUID 57.7T 26.3T 28.4T 49% /astro[OST:6]
astrofs-OST0007_UUID 57.7T 26.8T 28.0T 49% /astro[OST:7]
astrofs-OST0008_UUID 57.7T 26.8T 27.9T 50% /astro[OST:8]
astrofs-OST0009_UUID 57.7T 26.3T 28.5T 48% /astro[OST:9]
astrofs-OST000a_UUID 57.7T 26.6T 28.2T 49% /astro[OST:10]
astrofs-OST000b_UUID 57.7T 26.8T 28.0T 49% /astro[OST:11]
astrofs-OST000c_UUID 57.7T 25.6T 29.2T 47% /astro[OST:12]
astrofs-OST000d_UUID 57.7T 27.1T 27.6T 50% /astro[OST:13]
astrofs-OST000e_UUID 57.7T 27.0T 27.8T 50% /astro[OST:14]
astrofs-OST000f_UUID 57.7T 26.5T 28.3T 49% /astro[OST:15]
astrofs-OST0010_UUID 57.7T 37.3T 17.5T 69% /astro[OST:16]
astrofs-OST0011_UUID 57.7T 38.1T 16.7T 70% /astro[OST:17]
astrofs-OST0012_UUID 57.7T 37.5T 17.3T 69% /astro[OST:18]
astrofs-OST0013_UUID 57.7T 38.0T 16.7T 70% /astro[OST:19]
astrofs-OST0014_UUID 57.7T 38.0T 16.7T 70% /astro[OST:20]
astrofs-OST0015_UUID 57.7T 37.1T 17.6T 68% /astro[OST:21]
astrofs-OST0016_UUID 57.7T 37.1T 17.6T 68% /astro[OST:22]
astrofs-OST0017_UUID 57.7T 38.0T 16.7T 70% /astro[OST:23]
astrofs-OST0018_UUID 57.7T 37.4T 17.4T 69% /astro[OST:24]
astrofs-OST0019_UUID 57.7T 37.7T 17.1T 69% /astro[OST:25]
astrofs-OST001a_UUID 57.7T 38.3T 16.5T 70% /astro[OST:26]
astrofs-OST001b_UUID 57.7T 37.4T 17.4T 69% /astro[OST:27]
astrofs-OST001c_UUID 57.7T 37.5T 17.2T 69% /astro[OST:28]
astrofs-OST001d_UUID 57.7T 37.1T 17.7T 68% /astro[OST:29]
astrofs-OST001e_UUID 57.7T 37.8T 17.0T 70% /astro[OST:30]
astrofs-OST001f_UUID 57.7T 37.8T 16.9T 70% /astro[OST:31]
astrofs-OST0020_UUID 57.6T 34.6T 20.2T 64% /astro[OST:32]
astrofs-OST0021_UUID 57.6T 34.3T 20.5T 63% /astro[OST:33]
astrofs-OST0022_UUID 57.6T 34.9T 19.9T 64% /astro[OST:34]
astrofs-OST0023_UUID 57.6T 33.6T 21.1T 62% /astro[OST:35]
astrofs-OST0024_UUID 57.6T 33.5T 21.2T 62% /astro[OST:36]
astrofs-OST0025_UUID 57.6T 35.0T 19.7T 64% /astro[OST:37]
astrofs-OST0026_UUID 57.6T 33.7T 21.0T 62% /astro[OST:38]
astrofs-OST0027_UUID 57.6T 34.1T 20.6T 63% /astro[OST:39]
astrofs-OST0028_UUID 57.6T 33.5T 21.2T 62% /astro[OST:40]
astrofs-OST0029_UUID 57.6T 33.6T 21.1T 62% /astro[OST:41]
astrofs-OST002a_UUID 57.6T 34.2T 20.5T 63% /astro[OST:42]
astrofs-OST002b_UUID 57.6T 33.8T 21.0T 62% /astro[OST:43]
astrofs-OST002c_UUID 57.6T 34.9T 19.8T 64% /astro[OST:44]
astrofs-OST002d_UUID 57.6T 34.1T 20.7T 63% /astro[OST:45]
astrofs-OST002e_UUID 57.6T 33.8T 20.9T 62% /astro[OST:46]
astrofs-OST002f_UUID 57.6T 34.5T 20.2T 64% /astro[OST:47]
filesystem_summary: 2.7P 1.5P 1.0P 60% /astro
Related pages
- Resource Overview
- How to Manually Build Software
- Managing Files with Singularity Overlays
- How to avoid Conda breaking your file quota
- How to Configure Conda to Avoid Quota Issues
External links
- Lustre home page