ufs18 File Systems

The ufs18 file system is an IBM General Parallel File System (GPFS) deployed on the HPCC in 2018 to store home and research spaces. While it is faster and more stable than its predecessors, it behaves differently from other file systems in several ways that users should understand.

Space quota

The only way to get quota information for ufs18 file spaces is to run the quota command:

$ quota
Home Directory:             Space        Space        Space        Space        Files        Files        Files        Files
                            Quota        Used         Available    % Used       Quota        Used         Available    % Used
-----------------------------------------------------------------------------------------------------------------------------------
/mnt/home/UserName          1024G        1213G        -198G        118%         1048576      540389       508187       52%


Research Groups:            Space        Space        Space        Space        Files        Files        Files        Files
                            Quota        Used         Available    % Used       Quota        Used         Available    % Used
-----------------------------------------------------------------------------------------------------------------------------------
ResearchSpace1              2048G        1778G        270G         87%          2097152      432525       1664627      21%
ResearchSpace2              1024G        126G         898G         12%          1048576      11755        1036821      1%


Temporary Filesystems:
-----------------------------------------------------------------------------------------------------------------------------------

/mnt/scratch (/mnt/gs18)    Space Quota  Space Used   Space Free   Space % Used Files Quota  Files Used   Files Free   Files % Used
                            51200G       64G          51136G       0%           1048576      5724         1042852      1%

where all file spaces accessible to the user are listed, including home, research, and scratch. For each space, the quota, usage, and availability are shown for both space size and number of files. If a "Free" or "Available" value is negative (such as "Space Available" in the home directory of the example above), the usage is over quota. Please remove, transfer, or compress files until the used space or the file count falls below the "Quota" value. Because GPFS uses a different compression algorithm, you may notice that files occupy more space after they are copied to the ufs18 file system.

Actual disk usage different from quota results

The new file system has a minimum file block size of 64K, which means any file between 2K and 64K in size occupies a full 64K of disk space. This can greatly inflate the space usage of users with large numbers of small files. One suggested solution is to combine many small files into a single archive (at least larger than 64K). If you still have difficulty, a temporarily larger quota can be requested if your quota is at 1T and you have many small files.
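
You can gauge this inflation by comparing a file's logical size with the space actually allocated to it, using du. The sketch below uses an illustrative 3K file and an arbitrary file name; on ufs18 the allocated figure would be 64K rather than the 4K typical of most Linux file systems:

```shell
# Create a ~3K file and compare its logical size with the space
# allocated for it. On ufs18, the allocation would be a full 64K.
head -c 3000 /dev/zero > small_file
sync

du --apparent-size -k small_file   # logical size: 3 KB
du -k small_file                   # allocated size: at least one full block
```

The gap between the two numbers, multiplied across many small files, is the inflation the paragraph above describes.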

Limit on number of files

Besides the quota on space size, users are also limited to 1 million files in their home or research directory. This limit is needed because, with too many files, the file system spends so much time on backups that it cannot function normally. Where possible, users should combine many files into a single archive to reduce the file count.
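
As a sketch of that advice, a directory tree of many small files can be replaced by a single compressed archive. The directory and archive names below are placeholders, and the setup loop only exists to make the example self-contained:

```shell
# Placeholder setup: a directory containing many small files.
mkdir -p myproject
for i in $(seq 1 100); do echo "data $i" > "myproject/file$i"; done

# Pack them into one compressed archive and verify it is readable:
tar czf myproject.tar.gz myproject/
tar tzf myproject.tar.gz > /dev/null && echo "archive OK"

# Once verified, reclaim the file count by removing the originals:
rm -r myproject/
```

This replaces 100 files with a single archive, reducing both the file count and (for files under 64K) the allocated space.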

If users do not wish to have this limit, they can request extra space under /mnt/ufs18/nodr/ or /mnt/ufs18/nodr/research/, where there is no limit on the file count but also no back-up of the files. Users are responsible for backing up their own nodr space.

By default, half of a space's allowed quota is assigned to the requested nodr space, and the quota of the original space under /mnt/home/ or /mnt/research/ is reduced by half so the total disk space quota remains the same. A different split between the two can also be requested. Once the nodr space is created, its path and quota information appear in the output of the quota command mentioned above.

Quota setting on research space

Quota accounting on a research space is based on the group ownership of files: only files whose group ownership matches the research space's group are counted by the quota command. However, files larger than 8 MB with a group ownership different from the research space are not allowed to exist there.

For this reason, even when the quota command reports no over-quota issue, users might still get an error message such as “failed to ... ... Disk quota exceeded” when they create, copy, or write a file in their research space. To resolve this "Disk quota exceeded" problem, users may do the following:

  1. Make sure the directory to which files are copied has the same group ownership as the research space and has the set-group-ID bit.

    For example, you get the error message when trying to transfer files from your local computer to a directory Drctry in your research space /mnt/research/Group. Use ls -ld command to check the group ownership and the access permission of the directory Drctry and the research space Group :

    $ ls -ld /mnt/research/Group/Drctry /mnt/research/Group
    drwxrwx--- 2 UserName Prmry 5464 Feb 27 11:34 /mnt/research/Group/Drctry
    drwxrws--- 9 UserName Group 8192 Jul 10 15:34 /mnt/research/Group
    

    where you can see the differences. For group ownership, the directory Drctry's group Prmry differs from the research space's group Group. For permissions, the directory's rwxrwx--- lacks the set-group-ID bit that appears as the s in the research space's rwxrws---. The owner UserName of the directory can run the commands:

    $ chgrp -R Group /mnt/research/Group/Drctry     # Change the group ownership to Group
    $ chmod g+s /mnt/research/Group/Drctry          # Set up set-group-ID bit
    

    to change the two attributes of the directory. (A further instruction about file permission can be reviewed from the wiki page File Permissions on HPCC.) Once the settings are corrected:

    $ ls -ld /mnt/research/Group/Drctry
    drwxrws--- 2 UserName Group 5464 Feb 27 11:34 /mnt/research/Group/Drctry
    

    the file transfer to the directory can proceed. 

  2. If the file already exists, its group ownership needs to be changed to the group of the research space.

    For example, you try to copy, transfer, or write a file foo to the directory Drctry in your research space, but a file named foo already exists in Drctry. For the command to succeed, the existing foo must have the same group ownership as the research space. If it does not, the owner of the file can use the chgrp command mentioned above to correct the group ownership, or you must rename or remove foo in the directory Drctry.

  3. If the file is going to be created, the user's primary group may need to be set to the group of the research space.

    Users can use the newgrp command to change their primary group temporarily. For more information, please refer to the Change Primary Group page.
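
    As a rough sketch of such a session (the group name Group is a placeholder, and newgrp itself is shown in comments because it opens an interactive subshell):

```shell
# Check which group new files will receive, and which groups you belong to:
id -gn     # current primary group
groups     # all groups you are in; the research group must appear here

# If the research group is listed but is not your primary group, start a
# subshell with it as the primary group ("Group" is a placeholder name):
#   newgrp Group
#   touch /mnt/research/Group/newfile   # this file now carries group "Group"
#   exit                                # restore the previous primary group
```

    Note that newgrp only works for groups you already belong to, as shown by the groups command.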

Samba mapping path for local computer

Users mounting their home directories or research spaces will need to update the SMB/Samba/Windows File Sharing paths in their clients. To determine the mount path for your home directory, log into the HPCC, ssh to a development node, and run:

show-samba-home

To determine the mount point for your research space, use the following command:

show-samba-research researchspacename

You may also use the powertools command to see all paths of your home and research spaces:

ml powertools                    # if powertools is not loaded
show-samba-paths

To map your home or research directory, please refer to the Mapping HPC Drives with Samba page.

ACL for GPFS

If you are using access control lists (ACLs), you will need to update them to NFSv4 ACLs using the mmgetacl, mmputacl, and mmeditacl commands. Please refer to the GPFS Commands page for more details.