[Trillian-users] /mnt/lustre/lus0 full

Tom Lippmann lippmann at ccom.unh.edu
Tue Jun 12 16:56:00 EDT 2018


Dear Trillian users,

It looks like /mnt/lustre/lus0 is very nearly full; a couple of my jobs have already failed with disk I/O errors. If anyone can tighten up their disk usage (e.g., move or delete unneeded large files) to make room for new runs, it would be greatly appreciated.
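In case it helps, something like the following should show where your own space is going on lus0 (a rough sketch; it assumes the standard Lustre lfs client tools are on your path on the login nodes, and the 10G size threshold is only an example):

  # report your own usage and file count on lus0
  lfs quota -u $USER /mnt/lustre/lus0

  # list your files larger than 10 GB as candidates to move or delete
  lfs find /mnt/lustre/lus0 -user $USER -type f -size +10G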

Thank you,
Tom

Filesystem                                        Size  Used Avail Use% Mounted on
rootfs                                            178G   43G  126G  26% /
initramdevs                                       8.5G   70k  8.5G   1% /dev
10.131.255.254:/rr/current                        178G   43G  126G  26% /
10.131.255.254:/rr/current//.shared/node/245/etc  178G   43G  126G  26% /etc
tmpfs                                             8.5G     0  8.5G   0% /dev/shm
10.131.255.254:/snv/245/var                        50G   12G   36G  25% /var
none                                              8.5G  4.1k  8.5G   1% /var/lock
none                                              8.5G  1.1M  8.5G   1% /var/run
none                                              8.5G  4.1k  8.5G   1% /var/tmp
tmpfs                                             8.5G  128M  8.4G   2% /tmp
ufs:/ufs                                           40G   30G  7.5G  80% /ufs
condor:/raid1                                     133T   79T   54T  60% /nfs/condor/raid1
30@gni:/lus0                                      178T  167T  1.6T 100% /mnt/lustre/lus0

