-----============= acceptance-small: sanity-quota ============----- Fri Apr 19 08:49:55 EDT 2024
excepting tests: 2 4a 63 65
skipping tests SLOW=no: 61
oleg257-server: debugfs 1.46.2.wc5 (26-Mar-2022)
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1
=== sanity-quota: start setup 08:49:58 (1713530998) ===
oleg257-client.virtnet: executing check_config_client /mnt/lustre
oleg257-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg257-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b22f8000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b22f8000.idle_timeout=debug
oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
osd-ldiskfs.track_declares_assert=1
=== sanity-quota: finish setup 08:50:05 (1713531005) ===
using SAVE_PROJECT_SUPPORTED=0
oleg257-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg257-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg257-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg257-server: debugfs 1.46.2.wc5 (26-Mar-2022)
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [true]
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d0_runas_test/f7340]
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [true]
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [touch] [/mnt/lustre/d0_runas_test/f7340]
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 0: Test basic quota performance ===== 08:50:16 (1713531016)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/f0.sanity-quota-0] [count=10] [conv=fsync]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.307581 s, 34.1 MB/s
Waiting 90s for 'ugp'
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/f0.sanity-quota-0] [count=10] [conv=fsync]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.354927 s, 29.5 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 0 (19s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1a: Block hard limit (normal use and out of quota) ========================================================== 08:50:37 (1713531037)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:10 MB)
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.155474 s, 33.7 MB/s
Write out of block quota ...
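
(The hard-limit runs in test 1a reduce to a handful of lfs calls. A minimal sketch of the setup presumably behind this test, assuming the standard lfs setquota syntax and the quota_usr/1000 IDs shown in the log:

    # assumed setup: 10 MB block hard limit, no soft limit, no inode limits
    lfs setquota -u quota_usr -b 0 -B 10M -i 0 -I 0 /mnt/lustre
    lfs setquota -g quota_usr -b 0 -B 10M -i 0 -I 0 /mnt/lustre   # group pass
    lfs setquota -p 1000      -b 0 -B 10M -i 0 -I 0 /mnt/lustre   # project pass
    lfs quota -u quota_usr /mnt/lustre                            # verify the limit landed

The dd runs below then write as quota_usr: 5 MB plus 5 MB succeed, and the extra 1 MB at seek=10 fails with "Disk quota exceeded" once the 10 MB hard limit is exhausted.)
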
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.144543 s, 36.3 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.044656 s, 0.0 kB/s
Waiting for MDT destroys to complete
--------------------------------------
Group quota (block hardlimit:10 MB)
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.149421 s, 35.1 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.134024 s, 39.1 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0389958 s, 0.0 kB/s
Waiting for MDT destroys to complete
--------------------------------------
Project quota (block hardlimit:10 mb)
lfs project -p 1000 /mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.133022 s, 39.4 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.129065 s, 40.6 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0409189 s, 0.0 kB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1a (67s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1b: Quota pools: Block hard limit (normal use and out of quota) ========================================================== 08:51:46 (1713531106)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.129861 s, 40.4 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.113225 s, 46.3 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0347538 s, 0.0 kB/s
Waiting for MDT destroys to complete
--------------------------------------
Group quota (block hardlimit:20 MB)
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.115833 s, 45.3 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.10894 s, 48.1 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0375353 s, 0.0 kB/s
Waiting for MDT destroys to complete
--------------------------------------
Project quota (block hardlimit:20 mb)
lfs project -p 1000 /mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.130799 s, 40.1 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.123697 s, 42.4 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0415699 s, 0.0 kB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
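
(The pool lifecycle driving tests 1b-1d, visible above as "Pool ... created" / "OST ... added" / "Pool ... destroyed", corresponds to commands along these lines. The pool name and the 20 MB limit come from the log; the exact argument forms are assumptions:

    # on the MGS node (oleg257-server here)
    lctl pool_new lustre.qpool1
    lctl pool_add lustre.qpool1 lustre-OST0000 lustre-OST0001   # assumed OST name form
    # per-pool block hard limit for quota_usr
    lfs setquota -u quota_usr -B 20M --pool qpool1 /mnt/lustre
    # teardown
    lctl pool_remove lustre.qpool1 lustre-OST0000 lustre-OST0001
    lctl pool_destroy lustre.qpool1

A pool limit is enforced on top of, not instead of, the global limit: a write fails as soon as any applicable limit is exhausted.)
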
Waiting for MDT destroys to complete
PASS 1b (73s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1c: Quota pools: check 3 pools with hardlimit only for global ========================================================== 08:53:01 (1713531181)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg257-server: Pool lustre.qpool2 created
Waiting 90s for ''
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.220945 s, 47.5 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=10] [seek=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.233411 s, 44.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=1] [seek=20]
dd: error writing '/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0701488 s, 0.0 kB/s
qpool1 used 20484
qpool2 used 20484
Waiting for MDT destroys to complete
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg257-server: Pool lustre.qpool2 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1c (51s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1d: Quota pools: check block hardlimit on different pools ========================================================== 08:53:53 (1713531233)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg257-server: Pool lustre.qpool2 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.132884 s, 39.5 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.126536 s, 41.4 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.045041 s, 0.0 kB/s
qpool1 used 10240
qpool2 used 10240
Waiting for MDT destroys to complete
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg257-server: Pool lustre.qpool2 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1d (50s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1e: Quota pools: global pool high block limit vs quota pool with small ========================================================== 08:54:44 (1713531284)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:53000000 MB)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.132799 s, 39.5 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.143057 s, 36.6 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0401978 s, 0.0 kB/s
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-1] [count=20]
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.448433 s, 46.8 MB/s
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
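
(Test 1e is the asymmetric case: the global hard limit is effectively unlimited at 53000000 MB while qpool1, containing only OST0001, carries a small limit, so the first file stops with EDQUOT at ~10 MB while the second file outside the pool takes a full 20 MB. The effective limit for a write is the tightest among the global setting and every pool the target OST belongs to. A sketch under assumed commands and an assumed 10 MB pool limit:

    lfs setquota -u quota_usr -B 53000000M /mnt/lustre           # huge global limit
    lfs setquota -u quota_usr -B 10M --pool qpool1 /mnt/lustre   # small pool limit (assumed value)
    lfs setstripe -i 1 /mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0   # assumed: place file on OST0001 so the pool limit applies
)
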
Waiting for MDT destroys to complete
PASS 1e (35s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1f: Quota pools: correct qunit after removing/adding OST ========================================================== 08:55:20 (1713531320)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:200 MB)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Waiting 90s for ''
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.138019 s, 38.0 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.13578 s, 38.6 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0410666 s, 0.0 kB/s
Removing lustre-OST0000_UUID from qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1
Waiting for MDT destroys to complete
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.153714 s, 34.1 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.138921 s, 37.7 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0436607 s, 0.0 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
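
(Test 1f checks that the per-slave qunit is recalculated when pool membership changes: after the only OST is removed from qpool1 and then re-added, the same 10 MB cutoff is enforced again even though the global limit is 200 MB, so the EDQUOT above must come from the pool. The membership change itself reduces to, with assumed syntax:

    lctl pool_remove lustre.qpool1 lustre-OST0000
    # ... pool limit no longer applies to that OST ...
    lctl pool_add lustre.qpool1 lustre-OST0000
)
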
Waiting for MDT destroys to complete
PASS 1f (55s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1g: Quota pools: Block hard limit with wide striping ========================================================== 08:56:17 (1713531377)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
osc.lustre-OST0000-osc-ffff8800b22f8000.max_dirty_mb=1
osc.lustre-OST0001-osc-ffff8800b22f8000.max_dirty_mb=1
User quota (block hardlimit:40 MB)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Waiting 90s for ''
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.27086 s, 8.3 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=10] [seek=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.35904 s, 7.7 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=6] [seek=20]
dd: error writing '/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0390884 s, 0.0 kB/s
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
osc.lustre-OST0000-osc-ffff8800b22f8000.max_dirty_mb=467
osc.lustre-OST0001-osc-ffff8800b22f8000.max_dirty_mb=467
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1g (51s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1h: Block hard limit test using fallocate ========================================================== 08:57:10 (1713531430)
keep default fallocate mode: 0
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:10 MB)
Write 5MiB Using Fallocate
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [fallocate] [-l5MiB] [/mnt/lustre/d1h.sanity-quota/f1h.sanity-quota-0]
Write 11MiB Using Fallocate
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [fallocate] [-l11MiB] [/mnt/lustre/d1h.sanity-quota/f1h.sanity-quota-0]
fallocate: fallocate failed: Disk quota exceeded
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1h (18s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1i: Quota pools: different limit and usage relations ========================================================== 08:57:29 (1713531449)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:200 MB)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.129436 s, 40.5 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.122447 s, 42.8 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0559278 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   10240       0       0       -       1       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID  10240*       -   10240       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 10240
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-1] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.157761 s, 33.2 MB/s
Waiting for MDT destroys to complete
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.131346 s, 39.9 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.113024 s, 46.4 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0411937 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-1] [count=3]
3+0 records in
3+0 records out
3145728 bytes (3.1 MB) copied, 0.10131 s, 31.1 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2] [count=3]
3+0 records in
3+0 records out
3145728 bytes (3.1 MB) copied, 0.085304 s, 36.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2] [seek=3] [count=1]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0403296 s, 0.0 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
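
(A note on reading the reports above: a '*' after a value, e.g. '10240*', flags an ID that has hit a limit, and "Total allocated ... block limit" is the portion of the limit the quota master has granted out to the MDT/OST slaves. The per-pool view printed in test 1i can also be requested directly; assumed invocation, available once pool quotas are supported:

    lfs quota -u quota_usr -v --pool qpool1 /mnt/lustre
)
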
Waiting for MDT destroys to complete
PASS 1i (52s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1j: Enable project quota enforcement for root ========================================================== 08:58:23 (1713531503)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
--------------------------------------
Project quota (block hardlimit:20 mb)
lfs project -p 1000 /mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0
osd-ldiskfs.lustre-OST0000.quota_slave.root_prj_enable=1
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=20] [oflag=direct]
dd: error writing '/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.465059 s, 42.8 MB/s
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=10] [seek=20] [oflag=direct]
dd: error writing '/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0244713 s, 0.0 kB/s
osd-ldiskfs.lustre-OST0000.quota_slave.root_prj_enable=0
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=20] [seek=20] [oflag=direct]
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.478043 s, 43.9 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
osd-ldiskfs.lustre-OST0000.quota_slave.root_prj_enable=0
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1j (22s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity-quota test_2 skipping excluded test 2
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 3a: Block soft limit (start timer, timer goes off, stop timer) ========================================================== 08:58:47 (1713531527)
User quota (soft limit:4 MB grace:20 seconds)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.111057 s, 37.8 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00920932 s, 1.1 MB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4148*    4096       0     19s       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -    4160       -       -       -       -       -
lustre-OST0001_UUID      40       -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4208
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00359961 s, 2.8 MB/s
Grace time is 19s
Sleep through grace ...
...sleep 24 seconds
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4160*    4096       0 expired       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4120       -    4160       -       -       -       -       -
lustre-OST0001_UUID      40       -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4208
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4120       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=4096] [seek=6144]
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.543114 s, 7.7 MB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00356413 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   8256*    4096       0 expired       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID   8216*       -    8216       -       -       -       -       -
lustre-OST0001_UUID      40       -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 8264
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    8256       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    8216       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40    4096       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 48
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.106968 s, 39.2 MB/s
Delete files...
Wait for unlink objects finished...
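
(Test 3a's grace-timer flow, as seen above: exceeding the 4 MB soft limit starts a 20-second countdown ('19s' in the grace column); while it runs, writes still succeed, and once it reads 'expired' the soft limit is enforced like a hard limit until usage drops back under it, here by unlinking the file. The setup presumably reduces to:

    # assumed: 4 MB block soft limit, no hard limit
    lfs setquota -u quota_usr -b 4M -B 0 -i 0 -I 0 /mnt/lustre
    # assumed: 20 s block grace, default 1w inode grace
    lfs setquota -t -u --block-grace 20 --inode-grace 1w /mnt/lustre

Note that the first "Write after timer goes off" can still succeed out of quota already granted to the OST; only after the client's LRU locks are cancelled does the next write reliably fail with EDQUOT.)
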
Waiting for MDT destroys to complete
Group quota (soft limit:4 MB grace:20 seconds)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.124698 s, 33.6 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.0178239 s, 575 kB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4148       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4148*    4096       0     19s       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4148       -    4160       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4160
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00641285 s, 1.6 MB/s
Grace time is 19s
Sleep through grace ...
...sleep 24 seconds
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4160*    4096       0 expired       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -    4168       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4168
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=4096] [seek=6144]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00490758 s, 0.0 kB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00458814 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4160*    4096       0 expired       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -    4168       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4168
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID      40       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40    4096       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID      40       -    1064       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1064
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.117111 s, 35.8 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Project quota (soft limit:4 MB grace:20 sec)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
lfs project -p 1000 /mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.111287 s, 37.7 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00915571 s, 1.1 MB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4108*    4096       0     19s       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -    4160       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4160
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00406226 s, 2.5 MB/s
Grace time is 19s
Sleep through grace ...
...sleep 24 seconds
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4120       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4120       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4120*    4096       0 expired       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4120       -    4160       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4160
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=4096] [seek=6144]
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.54485 s, 7.7 MB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00390949 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    8256       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    8216       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    8256       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    8216       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   8216*    4096       0 expired       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID   8216*       -    8216       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 8216
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Unlink file to stop timer
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0    4096       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
lfs project -p 1000 /mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.104828 s, 40.0 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 3a (153s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 3b: Quota pools: Block soft limit (start timer, expires, stop timer) ========================================================== 09:01:22 (1713531682)
limit 4 glbl_limit 8
grace 20 glbl_grace 40
User quota in qpool1(soft limit:4 MB grace:20 seconds)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.0830953 s, 50.5 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00810243 s, 1.3 MB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148    8192       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4148       -    4160       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4160
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4148       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.0034186 s, 3.0 MB/s
Quota info for qpool1:
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4160*    4096       0     19s       2       0       0       -
Grace time is 19s
Sleep through grace ...
...sleep 24 seconds
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160    8192       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -    4168       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4168
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=4096] [seek=6144]
dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00787398 s, 0.0 kB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00601978 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160    8192       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -    4168       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4168
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40    8192       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID      40       -    1064       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1064
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID      40       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.130536 s, 32.1 MB/s
Delete files...
Wait for unlink objects finished...
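
(Test 3b repeats the soft-limit flow per pool: the header line "limit 4 glbl_limit 8 / grace 20 glbl_grace 40" says qpool1 carries the tighter pair (4 MB soft limit, 20 s grace) while the global ID has 8 MB and 40 s, so it is the pool timer that expires first; the "Quota info for qpool1:" blocks show its 19s countdown. A sketch with assumed syntax (pool-specific grace support is itself an assumption here):

    lfs setquota -u quota_usr -b 8M /mnt/lustre                        # global soft limit
    lfs setquota -t -u --block-grace 40 --inode-grace 1w /mnt/lustre   # global grace
    lfs setquota -u quota_usr -b 4M --pool qpool1 /mnt/lustre          # tighter pool soft limit
    lfs setquota -t -u --block-grace 20 --pool qpool1 /mnt/lustre      # tighter pool grace
)
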
Waiting for MDT destroys to complete
Group quota in qpool1(soft limit:4 MB grace:20 seconds)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.0992743 s, 42.2 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.0110047 s, 931 kB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148    8192       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -    4176       -       -       -       -       -
lustre-OST0001_UUID      40       -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4224
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00390003 s, 2.6 MB/s
Quota info for qpool1:
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4160*    4096       0     19s       2       0       0       -
Grace time is 19s
Sleep through grace ...
...sleep 24 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4120 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160 8192 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4120 - 4176 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 4224 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=4096] [seek=6144] 4096+0 records in 4096+0 records out 4194304 bytes (4.2 MB) copied, 0.532551 s, 7.9 MB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00736873 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256* 8192 0 38s 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216* - 8216 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 8264 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Unlink file to stop timer Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 0 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 8192 0 - 1 0 0 - lustre-MDT0000_UUID 0 
- 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 48 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.162243 s, 25.9 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Project quota in qpool1(soft:4 MB grace:20 sec) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -p 1000 /mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.121461 s, 34.5 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=4096] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.0127733 s, 802 kB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4148 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4148 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4148 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4148 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4108 8192 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4108 - 4160 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 4160 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=5120] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00431291 s, 2.4 MB/s Quota info for qpool1: Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4120* 4096 0 19s 1 0 0 - Grace time is 19s Sleep through grace ... 
...sleep 24 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4160 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4160 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4120 8192 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4120 - 4160 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 4160 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=4096] [seek=6144] 4096+0 records in 4096+0 records out 4194304 bytes (4.2 MB) copied, 0.536795 s, 7.8 MB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0047028 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8256 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8256 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8216* 8192 0 38s 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216* - 8216 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 8216 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Unlink file to stop timer Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 0 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 40 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 0 0 - 1 0 0 - 
lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 40 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 8192 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w lfs project -p 1000 /mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2 Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.155161 s, 27.0 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Destroy the created pools: qpool1 lustre.qpool1 oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg257-server: Pool lustre.qpool1 destroyed PASS 3b (172s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 3c: Quota pools: check block soft limit on different pools ========================================================== 09:04:16 (1713531856) limit 4 limit2 8 glbl_limit 12 grace1 30 grace2 20 glbl_grace 40 User quota in qpool2(soft:8 MB grace:20 seconds) Creating new pool oleg257-server: Pool lustre.qpool1 created Adding targets to pool oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Creating new pool oleg257-server: Pool lustre.qpool2 created Adding targets to pool oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool2 oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.233919 s, 35.9 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=8192] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00873517 s, 1.2 MB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8244 12288 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8204 - 8224 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 8272 Disk 
quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8244 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8204 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=9216] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00446938 s, 2.3 MB/s Quota info for qpool2: Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256* 8192 0 19s 2 0 0 - Grace time is 19s Sleep through grace ... ...sleep 24 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 12288 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 8224 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 8272 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=4096] [seek=10240] dd: error writing '/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00844596 s, 0.0 kB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=14336] dd: error writing '/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00533204 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 12288 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 8224 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 8272 Disk 
quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Unlink file to stop timer Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 12288 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 48 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 0 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.225807 s, 37.1 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg257-server: Pool lustre.qpool1 destroyed lustre.qpool2 oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2 oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg257-server: Pool lustre.qpool2 destroyed PASS 3c (80s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_4a skipping excluded test 4a debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 4b: Grace time strings handling ===== 09:05:38 (1713531938) Waiting for MDT destroys to complete Valid grace strings test Block grace time: 1w3d; Inode grace time: 16m40s Block grace time: 5s; Inode grace time: 1w2d3h4m5s Invalid grace strings test lfs: bad inode-grace: 5c setquota failed: Unknown error -4 Set filesystem quotas. 
usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM lfs: bad inode-grace: 18446744073709551615 setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM lfs: bad inode-grace: -1 setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM PASS 4b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 5: Chown & chgrp successfully even out of block/file quota ========================================================== 09:05:43 (1713531943) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Set quota limit (0 10M 0 10) for quota_usr.quota_usr lfs setquota: warning: inode hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Create more than 10 files and more than 10 MB ... total: 11 create in 0.04 seconds: 288.39 ops/second lfs project -p 1000 /mnt/lustre/d5.sanity-quota/f5.sanity-quota-0_1 11+0 records in 11+0 records out 11534336 bytes (12 MB) copied, 0.228294 s, 50.5 MB/s Chown files to quota_usr.quota_usr ... - unlinked 0 (time 1713531955 ; total 0 ; last 0) total: 11 unlinks in 0 seconds: inf unlinks/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 5 (24s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 6: Test dropping acquire request on master ========================================================== 09:06:09 (1713531969) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0645078 s, 16.3 MB/s running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_2usr] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.038577 s, 27.2 MB/s at_max=20 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] dd: error writing '/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr': Disk quota exceeded 3+0 records in 2+0 records out 2097152 bytes (2.1 MB) copied, 0.168579 s, 12.4 MB/s Waiting for MDT destroys to complete fail_val=601 fail_loc=0x513 osd-ldiskfs.lustre-OST0000.quota_slave.timeout=10 osd-ldiskfs.lustre-OST0001.quota_slave.timeout=10 running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_2usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 0.264716 s, 11.9 MB/s Sleep for 41 seconds ... 
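Per test 6's title, the stall in the write that follows is injected rather than organic: fail_loc 0x513 makes the quota master drop the slave's acquire request for fail_val seconds, while the shortened at_max and quota_slave.timeout values bound how long the retried write blocks. A sketch of how the tunables echoed above are applied, assuming server-side lctl access:
  lctl set_param fail_val=601 fail_loc=0x513   # drop quota acquires on the master
  lctl set_param osd-ldiskfs.lustre-OST*.quota_slave.timeout=10   # faster slave retry
  lctl set_param at_max=20   # lower the adaptive-timeout cap so clients resend sooner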
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] at_max=600 fail_val=0 fail_loc=0 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 55.3262 s, 56.9 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 6 (86s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7a: Quota reintegration (global index) ========================================================== 09:07:37 (1713532057) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' Stop ost1... Stopping /mnt/lustre-ost1 (opts:) on oleg257-server Enable quota & set quota limit for quota_usr Waiting 90s for 'ugp' Start ost1... Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota] [count=6] [oflag=sync] dd: error writing '/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota': Disk quota exceeded 6+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.363976 s, 14.4 MB/s Waiting for MDT destroys to complete Stop ost1... Stopping /mnt/lustre-ost1 (opts:) on oleg257-server Start ost1... 
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota] [count=6] [oflag=sync] 6+0 records in 6+0 records out 6291456 bytes (6.3 MB) copied, 0.444074 s, 14.2 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 7a (68s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7b: Quota reintegration (slave index) ========================================================== 09:08:47 (1713532127) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7b.sanity-quota/f7b.sanity-quota] [count=1] [oflag=sync] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0899417 s, 11.7 MB/s fail_val=0 fail_loc=0xa02 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7b.sanity-quota/f7b.sanity-quota] [count=1] [seek=1] [oflag=sync] [conv=notrunc] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0962063 s, 10.9 MB/s fail_val=0 fail_loc=0 Restart ost to trigger reintegration... 
Stopping /mnt/lustre-ost1 (opts:) on oleg257-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 7b (46s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7c: Quota reintegration (restart mds during reintegration) ========================================================== 09:09:35 (1713532175) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' fail_val=0 fail_loc=0xa03 osd-ldiskfs.lustre-OST0000.quota_slave.force_reint=1 osd-ldiskfs.lustre-OST0001.quota_slave.force_reint=1 Stop mds... Stopping /mnt/lustre-mds1 (opts:) on oleg257-server fail_val=0 fail_loc=0 Start mds... 
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0000 affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE Waiting 200s for 'glb[1],slv[1],reint[0]' Waiting 180s for 'glb[1],slv[1],reint[0]' Waiting 160s for 'glb[1],slv[1],reint[0]' Waiting 140s for 'glb[1],slv[1],reint[0]' Waiting 120s for 'glb[1],slv[1],reint[0]' Waiting 110s for 'glb[1],slv[1],reint[0]' Updated after 110s: want 'glb[1],slv[1],reint[0]' got 'glb[1],slv[1],reint[0]' affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7c.sanity-quota/f7c.sanity-quota] [count=6] [oflag=sync] dd: error writing '/mnt/lustre/d7c.sanity-quota/f7c.sanity-quota': Disk quota exceeded 6+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.410822 s, 12.8 MB/s Delete files... Wait for unlink objects finished... 
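The 'glb[1],slv[1],reint[0]' waits above poll the slave reintegration state: global index copied, slave index copied, no reintegration still in flight. A sketch of forcing and observing that state with the parameters echoed earlier, assuming ldiskfs OSDs:
  # force each OST quota slave to re-fetch its index from the master
  lctl set_param osd-ldiskfs.lustre-OST*.quota_slave.force_reint=1
  # poll the per-target state until reintegration is idle
  lctl get_param osd-ldiskfs.lustre-OST0000.quota_slave.info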
Waiting for MDT destroys to complete PASS 7c (146s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7d: Quota reintegration (Transfer index in multiple bulks) ========================================================== 09:12:03 (1713532323) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' fail_val=0 fail_loc=0x608 Waiting 90s for 'u' affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota] [count=21] [oflag=sync] dd: error writing '/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota': Disk quota exceeded 21+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 1.16167 s, 18.1 MB/s running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota-1] [count=21] [oflag=sync] dd: error writing '/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota-1': Disk quota exceeded 20+0 records in 19+0 records out 20963328 bytes (21 MB) copied, 2.41399 s, 8.7 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 7d (23s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7e: Quota reintegration (inode limits) ========================================================== 09:12:29 (1713532349) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Stop mds2... Stopping /mnt/lustre-mds2 (opts:) on oleg257-server Enable quota & set quota limit for quota_usr Waiting 90s for 'ugp' Start mds2... Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0001 affected facets: mds1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg257-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg257-server: *.lustre-MDT0001.recovery_status status: RECOVERING oleg257-server: Waiting 1470 secs for *.lustre-MDT0001.recovery_status recovery done. 
status: RECOVERING oleg257-server: *.lustre-MDT0001.recovery_status status: COMPLETE affected facets: mds1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg257-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg257-server: *.lustre-MDT0001.recovery_status status: COMPLETE affected facets: mds1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg257-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg257-server: *.lustre-MDT0001.recovery_status status: COMPLETE create remote dir running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota] [2049] mknod(/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota2048) error: Disk quota exceeded total: 2048 create in 7.38 seconds: 277.64 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [unlinkmany] [/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota] [2048] - unlinked 0 (time 1713532387 ; total 0 ; last 0) total: 2048 unlinks in 15 seconds: 136.533340 unlinks/second Waiting for MDT destroys to complete Stop mds2... Stopping /mnt/lustre-mds2 (opts:) on oleg257-server Start mds2... Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0001 affected facets: mds1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg257-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg257-server: *.lustre-MDT0001.recovery_status status: RECOVERING oleg257-server: Waiting 1470 secs for *.lustre-MDT0001.recovery_status recovery done. 
status: RECOVERING oleg257-server: *.lustre-MDT0001.recovery_status status: COMPLETE affected facets: mds1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg257-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg257-server: *.lustre-MDT0001.recovery_status status: COMPLETE affected facets: mds1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg257-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg257-server: *.lustre-MDT0001.recovery_status status: COMPLETE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota] [2049] total: 2049 create in 7.50 seconds: 273.33 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [unlinkmany] [/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota] [2049] - unlinked 0 (time 1713532431 ; total 0 ; last 0) total: 2049 unlinks in 16 seconds: 128.062500 unlinks/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 7e (103s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 8: Run dbench with quota enabled ==== 09:14:14 (1713532454) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Set enough high limit for user: quota_usr Set enough high limit for group: quota_usr lfs project -sp 1000 /mnt/lustre/d8.sanity-quota Set enough high limit for project: 1000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [bash] [rundbench] [-D] [/mnt/lustre/d8.sanity-quota] [3] [-t] [120] looking for dbench program /usr/bin/dbench found dbench client file /usr/share/dbench/client.txt '/usr/share/dbench/client.txt' -> 'client.txt' running 'dbench 3 -t 120' on /mnt/lustre/d8.sanity-quota at Fri Apr 19 09:14:21 EDT 2024 waiting for dbench pid 25216 dbench version 4.00 - Copyright Andrew Tridgell 1999-2004 Running for 120 seconds with load 'client.txt' and minimum warmup 24 secs failed to create barrier semaphore 1 of 3 processes prepared for launch 0 sec 3 of 3 processes prepared for launch 0 sec releasing clients 3 276 31.07 MB/sec warmup 1 sec latency 19.869 ms 3 524 27.21 MB/sec warmup 2 sec latency 21.534 ms 3 922 21.74 MB/sec warmup 3 sec latency 20.754 ms 3 1399 19.51 MB/sec warmup 4 sec latency 20.478 ms 3 2135 16.02 MB/sec warmup 5 sec latency 16.826 ms 3 2452 14.15 MB/sec warmup 6 sec latency 21.073 ms 3 2903 14.14 MB/sec warmup 7 sec latency 21.442 ms 3 3345 13.28 MB/sec warmup 8 sec latency 23.061 ms 3 3781 13.25 MB/sec warmup 9 sec latency 15.358 ms 3 3972 12.02 MB/sec warmup 10 sec latency 21.817 ms 3 4204 10.95 MB/sec warmup 11 sec latency 20.213 ms 3 4653 10.51 MB/sec warmup 12 sec latency 63.720 ms 3 5029 10.42 MB/sec warmup 13 sec latency 36.881 ms 3 5506 9.76 MB/sec warmup 14 sec latency 24.743 ms 3 6010 9.48 MB/sec warmup 15 sec latency 21.033 ms 3 6458 9.78 MB/sec warmup 16 sec latency 21.078 ms 3 7105 10.15 MB/sec warmup 17 sec latency 14.942 ms 3 7400 9.81 MB/sec warmup 18 sec latency 21.587 ms 3 7825 9.39 MB/sec warmup 19 sec latency 26.542 ms 3 8311 9.19 MB/sec warmup 20 sec latency 19.679 ms 3 
8837 9.21 MB/sec warmup 21 sec latency 22.921 ms 3 9453 9.01 MB/sec warmup 22 sec latency 22.553 ms 3 9700 8.77 MB/sec warmup 23 sec latency 21.984 ms 3 10721 13.83 MB/sec execute 1 sec latency 19.774 ms 3 10952 8.63 MB/sec execute 2 sec latency 20.613 ms 3 11132 6.06 MB/sec execute 3 sec latency 21.520 ms 3 11393 4.80 MB/sec execute 4 sec latency 58.990 ms 3 11689 4.59 MB/sec execute 5 sec latency 22.329 ms 3 12040 5.08 MB/sec execute 6 sec latency 20.672 ms 3 12504 4.91 MB/sec execute 7 sec latency 29.481 ms 3 13082 4.94 MB/sec execute 8 sec latency 19.903 ms 3 13491 5.88 MB/sec execute 9 sec latency 22.897 ms 3 13980 6.15 MB/sec execute 10 sec latency 22.939 ms 3 14455 6.78 MB/sec execute 11 sec latency 27.091 ms 3 14627 6.29 MB/sec execute 12 sec latency 20.640 ms 3 14856 5.85 MB/sec execute 13 sec latency 67.779 ms 3 15140 5.56 MB/sec execute 14 sec latency 31.102 ms 3 15537 5.85 MB/sec execute 15 sec latency 21.050 ms 3 16284 5.79 MB/sec execute 16 sec latency 20.960 ms 3 16752 5.84 MB/sec execute 17 sec latency 20.649 ms 3 17245 6.34 MB/sec execute 18 sec latency 25.338 ms 3 17687 6.58 MB/sec execute 19 sec latency 18.863 ms 3 17997 6.60 MB/sec execute 20 sec latency 21.091 ms 3 18219 6.33 MB/sec execute 21 sec latency 20.215 ms 3 18548 6.10 MB/sec execute 22 sec latency 34.503 ms 3 18940 6.05 MB/sec execute 23 sec latency 16.967 ms 3 19354 6.19 MB/sec execute 24 sec latency 21.956 ms 3 20061 6.11 MB/sec execute 25 sec latency 24.455 ms 3 20756 6.54 MB/sec execute 26 sec latency 17.163 ms 3 21250 6.75 MB/sec execute 27 sec latency 20.916 ms 3 21573 6.76 MB/sec execute 28 sec latency 21.187 ms 3 21745 6.57 MB/sec execute 29 sec latency 20.232 ms 3 21975 6.36 MB/sec execute 30 sec latency 21.662 ms 3 22546 6.35 MB/sec execute 31 sec latency 13.759 ms 3 23108 6.45 MB/sec execute 32 sec latency 28.538 ms 3 23687 6.39 MB/sec execute 33 sec latency 21.909 ms 3 24102 6.50 MB/sec execute 34 sec latency 21.779 ms 3 24527 6.62 MB/sec execute 35 sec latency 20.332 ms 3 24950 6.76 MB/sec execute 36 sec latency 19.680 ms 3 25292 6.70 MB/sec execute 37 sec latency 15.028 ms 3 25510 6.53 MB/sec execute 38 sec latency 44.344 ms 3 25779 6.41 MB/sec execute 39 sec latency 25.304 ms 3 26078 6.35 MB/sec execute 40 sec latency 25.738 ms 3 26491 6.42 MB/sec execute 41 sec latency 27.506 ms 3 27075 6.34 MB/sec execute 42 sec latency 27.066 ms 3 27372 6.30 MB/sec execute 43 sec latency 25.411 ms 3 27795 6.45 MB/sec execute 44 sec latency 22.339 ms 3 28279 6.54 MB/sec execute 45 sec latency 20.118 ms 3 28710 6.61 MB/sec execute 46 sec latency 21.360 ms 3 28879 6.49 MB/sec execute 47 sec latency 23.013 ms 3 29119 6.37 MB/sec execute 48 sec latency 61.031 ms 3 29408 6.30 MB/sec execute 49 sec latency 41.890 ms 3 29775 6.35 MB/sec execute 50 sec latency 21.421 ms 3 30219 6.30 MB/sec execute 51 sec latency 27.049 ms 3 30801 6.27 MB/sec execute 52 sec latency 20.401 ms 3 31151 6.33 MB/sec execute 53 sec latency 22.126 ms 3 31587 6.40 MB/sec execute 54 sec latency 20.291 ms 3 32044 6.51 MB/sec execute 55 sec latency 20.448 ms 3 32269 6.46 MB/sec execute 56 sec latency 22.628 ms 3 32497 6.36 MB/sec execute 57 sec latency 17.953 ms 3 32777 6.28 MB/sec execute 58 sec latency 59.807 ms 3 33096 6.25 MB/sec execute 59 sec latency 20.732 ms 3 34012 6.32 MB/sec execute 60 sec latency 14.751 ms 3 34495 6.32 MB/sec execute 61 sec latency 20.742 ms 3 34991 6.46 MB/sec execute 62 sec latency 21.237 ms 3 35457 6.54 MB/sec execute 63 sec latency 19.910 ms 3 35766 6.55 MB/sec execute 64 sec latency 20.631 ms 3 35975 6.46 MB/sec 
execute 65 sec latency 21.290 ms 3 36231 6.37 MB/sec execute 66 sec latency 32.394 ms 3 36527 6.33 MB/sec execute 67 sec latency 23.600 ms 3 36886 6.35 MB/sec execute 68 sec latency 22.082 ms 3 37435 6.32 MB/sec execute 69 sec latency 20.462 ms 3 37978 6.31 MB/sec execute 70 sec latency 19.538 ms 3 38513 6.43 MB/sec execute 71 sec latency 20.822 ms 3 38999 6.52 MB/sec execute 72 sec latency 19.932 ms 3 39343 6.53 MB/sec execute 73 sec latency 20.059 ms 3 39525 6.45 MB/sec execute 74 sec latency 21.348 ms 3 39782 6.37 MB/sec execute 75 sec latency 45.522 ms 3 40138 6.34 MB/sec execute 76 sec latency 21.327 ms 3 40642 6.40 MB/sec execute 77 sec latency 22.780 ms 3 41281 6.35 MB/sec execute 78 sec latency 19.771 ms 3 41664 6.38 MB/sec execute 79 sec latency 21.769 ms 3 42109 6.45 MB/sec execute 80 sec latency 25.959 ms 3 42582 6.51 MB/sec execute 81 sec latency 20.224 ms 3 42873 6.51 MB/sec execute 82 sec latency 21.865 ms 3 43050 6.44 MB/sec execute 83 sec latency 21.557 ms 3 43303 6.37 MB/sec execute 84 sec latency 54.767 ms 3 43591 6.33 MB/sec execute 85 sec latency 29.598 ms 3 44032 6.36 MB/sec execute 86 sec latency 21.478 ms 3 44591 6.34 MB/sec execute 87 sec latency 23.609 ms 3 45019 6.32 MB/sec execute 88 sec latency 23.768 ms 3 45467 6.40 MB/sec execute 89 sec latency 19.461 ms 3 45903 6.41 MB/sec execute 90 sec latency 21.705 ms 3 46299 6.47 MB/sec execute 91 sec latency 21.190 ms 3 46498 6.43 MB/sec execute 92 sec latency 20.928 ms 3 46685 6.36 MB/sec execute 93 sec latency 22.949 ms 3 47004 6.32 MB/sec execute 94 sec latency 49.854 ms 3 47305 6.29 MB/sec execute 95 sec latency 21.087 ms 3 47704 6.33 MB/sec execute 96 sec latency 24.718 ms 3 48235 6.28 MB/sec execute 97 sec latency 24.590 ms 3 48602 6.26 MB/sec execute 98 sec latency 20.995 ms 3 49034 6.34 MB/sec execute 99 sec latency 29.975 ms 3 49519 6.38 MB/sec execute 100 sec latency 25.263 ms 3 49891 6.41 MB/sec execute 101 sec latency 18.363 ms 3 50092 6.36 MB/sec execute 102 sec latency 21.825 ms 3 50345 6.31 MB/sec execute 103 sec latency 69.712 ms 3 50623 6.27 MB/sec execute 104 sec latency 26.338 ms 3 50918 6.25 MB/sec execute 105 sec latency 22.859 ms 3 51652 6.28 MB/sec execute 106 sec latency 21.879 ms 3 52097 6.27 MB/sec execute 107 sec latency 21.058 ms 3 52436 6.30 MB/sec execute 108 sec latency 23.313 ms 3 52875 6.33 MB/sec execute 109 sec latency 21.276 ms 3 53292 6.38 MB/sec execute 110 sec latency 20.008 ms 3 53536 6.36 MB/sec execute 111 sec latency 23.154 ms 3 53709 6.31 MB/sec execute 112 sec latency 22.358 ms 3 53947 6.26 MB/sec execute 113 sec latency 41.315 ms 3 54233 6.23 MB/sec execute 114 sec latency 21.921 ms 3 54600 6.25 MB/sec execute 115 sec latency 22.468 ms 3 55069 6.23 MB/sec execute 116 sec latency 22.628 ms 3 55573 6.21 MB/sec execute 117 sec latency 21.897 ms 3 55895 6.22 MB/sec execute 118 sec latency 27.142 ms 3 56367 6.27 MB/sec execute 119 sec latency 20.601 ms 3 cleanup 120 sec 0 cleanup 121 sec Operation Count AvgLat MaxLat ---------------------------------------- NTCreateX 24135 5.472 40.605 Close 17748 0.665 9.975 Rename 1024 14.244 41.867 Unlink 4873 6.395 37.886 Qpathinfo 21873 2.577 26.317 Qfileinfo 3809 0.550 3.647 Qfsinfo 4074 8.358 49.838 Sfileinfo 1950 9.003 37.910 Find 8555 1.090 49.869 WriteX 11947 2.497 27.085 ReadX 38185 0.083 1.400 LockX 78 2.009 4.779 UnlockX 78 2.164 3.360 Flush 1693 9.674 69.704 Throughput 6.27454 MB/sec 3 clients 3 procs max_latency=69.712 ms stopping dbench on /mnt/lustre/d8.sanity-quota at Fri Apr 19 09:16:46 EDT 2024 with return code 0 clean dbench 
files on /mnt/lustre/d8.sanity-quota /mnt/lustre/d8.sanity-quota /mnt/lustre/d8.sanity-quota removed directory: 'clients/client1/~dmtmp/COREL' removed directory: 'clients/client1/~dmtmp/PM' removed directory: 'clients/client1/~dmtmp/EXCEL' removed directory: 'clients/client1/~dmtmp/WORD' removed directory: 'clients/client1/~dmtmp/ACCESS' removed directory: 'clients/client1/~dmtmp/PWRPNT' removed directory: 'clients/client1/~dmtmp/PARADOX' removed directory: 'clients/client1/~dmtmp/WORDPRO' removed directory: 'clients/client1/~dmtmp/SEED' removed directory: 'clients/client1/~dmtmp' removed directory: 'clients/client1' removed directory: 'clients/client0/~dmtmp/COREL' removed directory: 'clients/client0/~dmtmp/PM' removed directory: 'clients/client0/~dmtmp/EXCEL' removed directory: 'clients/client0/~dmtmp/WORD' removed directory: 'clients/client0/~dmtmp/ACCESS' removed directory: 'clients/client0/~dmtmp/PWRPNT' removed directory: 'clients/client0/~dmtmp/PARADOX' removed directory: 'clients/client0/~dmtmp/WORDPRO' removed directory: 'clients/client0/~dmtmp/SEED' removed directory: 'clients/client0/~dmtmp' removed directory: 'clients/client0' removed directory: 'clients/client2/~dmtmp/COREL' removed directory: 'clients/client2/~dmtmp/PM' removed directory: 'clients/client2/~dmtmp/EXCEL' removed directory: 'clients/client2/~dmtmp/WORD' removed directory: 'clients/client2/~dmtmp/ACCESS' removed directory: 'clients/client2/~dmtmp/PWRPNT' removed directory: 'clients/client2/~dmtmp/PARADOX' removed directory: 'clients/client2/~dmtmp/WORDPRO' removed directory: 'clients/client2/~dmtmp/SEED' removed directory: 'clients/client2/~dmtmp' removed directory: 'clients/client2' removed directory: 'clients' removed 'client.txt' /mnt/lustre/d8.sanity-quota dbench successfully finished lfs project -C /mnt/lustre/d8.sanity-quota Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 8 (163s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 9: Block limit larger than 4GB (b10707) ========================================================== 09:16:59 (1713532619) OST0_SIZE: 3598324 required: 4900000 WARN: OST0 has less than 4900000 free, skip this test. PASS 9 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 10: Test quota for root user ======== 09:17:04 (1713532624) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs setquota: can't set quota for root usr/group/project. setquota failed: Operation not permitted lfs setquota: can't set quota for root usr/group/project. setquota failed: Operation not permitted lfs setquota: can't set quota for root usr/group/project. setquota failed: Operation not permitted Waiting 90s for 'ug' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 2048 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d10.sanity-quota/f10.sanity-quota] [count=3] [oflag=sync] 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 0.238953 s, 13.2 MB/s Delete files... Wait for unlink objects finished... 
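Test 10 checks two root-specific rules visible above: limits cannot be set on UID/GID/project 0, and enforcement is skipped for I/O issued by root. The ownership setup is not echoed in this log, so the following is a sketch of the likely flow, with the file name reused from the test for illustration:
  lfs setquota -u root -B 2M /mnt/lustre        # rejected: can't set quota for root
  lfs setquota -u quota_usr -B 2M /mnt/lustre   # 2048 KB hard limit, as reported above
  touch /mnt/lustre/d10.sanity-quota/f10.sanity-quota
  chown quota_usr /mnt/lustre/d10.sanity-quota/f10.sanity-quota
  # root writes 3 MB into quota_usr's over-limit file and still succeeds
  dd if=/dev/zero of=/mnt/lustre/d10.sanity-quota/f10.sanity-quota bs=1M count=3 oflag=sync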
Waiting for MDT destroys to complete PASS 10 (23s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 11: Chown/chgrp ignores quota ======= 09:17:29 (1713532649) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' lfs setquota: warning: inode hardlimit '1' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 2* 0 1 - lustre-MDT0000_UUID 0 - 0 - 2* - 2 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 2, total allocated block limit: 0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 11 (22s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 12a: Block quota rebalancing ======== 09:17:53 (1713532673) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write to ost0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d12a.sanity-quota/f12a.sanity-quota-0] [count=17] [oflag=sync] 17+0 records in 17+0 records out 17825792 bytes (18 MB) copied, 1.10207 s, 16.2 MB/s Write to ost1... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d12a.sanity-quota/f12a.sanity-quota-1] [count=17] [oflag=sync] dd: error writing '/mnt/lustre/d12a.sanity-quota/f12a.sanity-quota-1': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.249513 s, 16.8 MB/s Free space from ost0... Waiting for MDT destroys to complete Write to ost1 after space freed from ost0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d12a.sanity-quota/f12a.sanity-quota-1] [count=17] [oflag=sync] 17+0 records in 17+0 records out 17825792 bytes (18 MB) copied, 0.992954 s, 18.0 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 12a (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 12b: Inode quota rebalancing ======== 09:18:24 (1713532704) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' Updated after 2s: want 'u' got 'u' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Create 2048 files on mdt0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d12b.sanity-quota/f12b.sanity-quota] [2048] total: 2048 create in 7.53 seconds: 271.88 ops/second Create files on mdt1... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d12b.sanity-quota-1/f12b.sanity-quota] [1] mknod(/mnt/lustre/d12b.sanity-quota-1/f12b.sanity-quota0) error: Disk quota exceeded total: 0 create in 0.01 seconds: 0.00 ops/second Free space from mdt0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [unlinkmany] [/mnt/lustre/d12b.sanity-quota/f12b.sanity-quota] [2048] - unlinked 0 (time 1713532717 ; total 0 ; last 0) total: 2048 unlinks in 16 seconds: 128.000000 unlinks/second Waiting for MDT destroys to complete Create files on mdt1 after space freed from mdt0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d12b.sanity-quota-1/f12b.sanity-quota] [1024] total: 1024 create in 3.85 seconds: 266.06 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [unlinkmany] [/mnt/lustre/d12b.sanity-quota-1/f12b.sanity-quota] [1024] - unlinked 0 (time 1713532739 ; total 0 ; last 0) total: 1024 unlinks in 7 seconds: 146.285721 unlinks/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 12b (47s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 13: Cancel per-ID lock in the LRU list ========================================================== 09:19:13 (1713532753) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d13.sanity-quota/f13.sanity-quota] [count=1] [oflag=sync] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.100768 s, 10.4 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 13 (23s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 14: check panic in qmt_site_recalc_cb ========================================================== 09:19:38 (1713532778) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Creating new pool oleg257-server: Pool lustre.qpool1 created Adding targets to pool oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d14.sanity-quota/f14.sanity-quota-0] [count=10] [oflag=direct] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.334387 s, 31.4 MB/s Stop ost1... 
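Test 12b is the inode-quota analogue: the file-count limit is split between the MDTs, and the grant only moves to MDT0001 after the files on MDT0000 are unlinked and destroyed. A sketch with the suite's createmany/unlinkmany helpers; the 2048-inode limit and directory names are illustrative:

    lfs setquota -u quota_usr -I 2048 /mnt/lustre
    lfs mkdir -i 0 /mnt/lustre/d0; lfs mkdir -i 1 /mnt/lustre/d1
    runas -u 60000 -g 60000 createmany -m /mnt/lustre/d0/f 2048   # exhausts MDT0000's share
    runas -u 60000 -g 60000 createmany -m /mnt/lustre/d1/f 1      # EDQUOT on MDT0001
    runas -u 60000 -g 60000 unlinkmany /mnt/lustre/d0/f 2048      # release the inodes
    runas -u 60000 -g 60000 createmany -m /mnt/lustre/d1/f 1024   # now succeeds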
Stopping /mnt/lustre-ost1 (opts:) on oleg257-server Removing lustre-OST0000_UUID from qpool1 oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0000 Destroy the created pools: qpool1 lustre.qpool1 oleg257-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 14 (35s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 15: Set over 4T block quota ========= 09:20:15 (1713532815) Waiting for MDT destroys to complete PASS 15 (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 16a: lfs quota should skip the inactive MDT/OST ========================================================== 09:20:24 (1713532824) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d16a.sanity-quota/f16a.sanity-quota] [count=50] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.04179 s, 50.3 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 49152 0 512000 - 1 0 10240 - Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 49152 0 512000 - 1 0 10240 - lustre-MDT0000_UUID 0 - 0 - 1 - 1024 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 49152 - 65536 - - - - - Total allocated inode limit: 1024, total allocated block limit: 65536 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 49152 0 512000 - 1 0 10240 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 49152 - 65536 - - - - - Total allocated inode limit: 0, total allocated block limit: 65536 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 49152 0 512000 - 1 0 10240 - lustre-MDT0000_UUID 0 - 0 - 1 - 1024 - lustre-MDT0001_UUID[inact] [0] - [0] - [0] - [0] - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 49152 - 65536 - - - - - Total allocated inode limit: 1024, total allocated block limit: 65536 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 49152 0 512000 - 1 0 10240 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID[inact] [0] - [0] - [0] - [0] - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 49152 - 65536 - - - - - Total allocated inode limit: 0, total allocated block limit: 65536 Delete files... Wait for unlink objects finished... 
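The bracketed [inact]/[0] rows in the test 16a output above are how lfs quota marks targets whose import is not connected: they are still listed, but their values are excluded from the totals. One way this can be reproduced, assuming an OSC device name taken from lctl dl on the client (the instance suffix changes with every mount):

    lctl dl | grep osc                                   # find this client's OSC devices
    lctl --device %lustre-OST0000-osc-ffff88012dc35800 deactivate
    lfs quota -v -u quota_usr /mnt/lustre                # OST0000 row now shows [inact]
    lctl --device %lustre-OST0000-osc-ffff88012dc35800 activate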
Waiting for MDT destroys to complete PASS 16a (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 16b: lfs quota should skip the nonexistent MDT/OST ========================================================== 09:20:38 (1713532838) SKIP: sanity-quota test_16b needs >= 3 MDTs SKIP 16b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 17: DQACQ return recoverable error == 09:20:41 (1713532841) DQACQ return -ENOLCK Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' Updated after 2s: want 'ug' got 'ug' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=37 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.07389 s, 341 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete DQACQ return -EAGAIN Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=11 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.04965 s, 344 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete DQACQ return -ETIMEDOUT Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=110 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.06473 s, 342 kB/s Delete files... Wait for unlink objects finished... 
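Each DQACQ case in test 17 uses the OBD fault-injection hooks: with fail_loc 0xa04 armed, the quota acquire request fails once with -fail_val, so the dd above stalls for roughly three seconds and then completes when the request is retried. The fail_val numbers in the log are plain errnos: 37 ENOLCK, 11 EAGAIN, 110 ETIMEDOUT, 107 ENOTCONN. A sketch (the lctl commands run where the fault should fire; paths are illustrative):

    lctl set_param fail_val=37 fail_loc=0xa04    # next DQACQ fails with -ENOLCK
    runas -u 60000 -g 60000 dd if=/dev/zero of=/mnt/lustre/d17/f17 bs=1M count=1 oflag=direct
    lctl set_param fail_val=0 fail_loc=0         # clear the injection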
Waiting for MDT destroys to complete DQACQ return -ENOTCONN Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=107 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.61812 s, 290 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 17 (102s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 18: MDS failover while writing, no watchdog triggered (b14840) ========================================================== 09:22:25 (1713532945) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' Updated after 2s: want 'u' got 'u' User quota (limit: 200) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 204800 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write 100M (buffered) ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota] [count=100] UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 1414116 2836 1284852 1% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 1414116 1904 1285784 1% /mnt/lustre[MDT:1] lustre-OST0000_UUID 3833116 1600 3596596 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3833116 1524 3605496 1% /mnt/lustre[OST:1] filesystem_summary: 7666232 3124 7202092 1% /mnt/lustre Fail mds for 40 seconds 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 2.22883 s, 47.0 MB/s Failing mds1 on oleg257-server Stopping /mnt/lustre-mds1 (opts:) on oleg257-server 09:22:37 (1713532957) shut down Failover mds1 to oleg257-server mount facets: mds1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0000 09:22:51 (1713532971) targets are mounted 09:22:51 (1713532971) facet_failover done oleg257-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec (dd_pid=17839, time=0, timeout=600) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102400 0 204800 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 102400 - 114688 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 114688 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (limit: 200) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 204800 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write 100M (directio) ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota] [count=100] [oflag=direct] UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 1414116 2456 1285232 1% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 1414116 1904 1285784 1% /mnt/lustre[MDT:1] lustre-OST0000_UUID 3833116 3648 3596036 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3833116 1524 3605496 1% /mnt/lustre[OST:1] filesystem_summary: 7666232 5172 7201532 1% /mnt/lustre Fail mds for 40 seconds Failing mds1 on oleg257-server Stopping /mnt/lustre-mds1 (opts:) on oleg257-server 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 2.84135 s, 36.9 MB/s 09:23:10 (1713532990) shut down Failover mds1 to oleg257-server mount facets: mds1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0000 09:23:25 (1713533005) targets are mounted 09:23:25 (1713533005) facet_failover done oleg257-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec (dd_pid=20128, time=0, timeout=600) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102400 0 204800 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 102400 - 109568 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 109568 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
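Both test 18 passes follow the same pattern: start a 100MB write as quota_usr, restart the MDS underneath it, and require the write to finish after recovery with no watchdog fired. A condensed sketch, assuming the fail() helper from the test framework; the quota setup is as shown above:

    runas -u 60000 -g 60000 dd if=/dev/zero of=/mnt/lustre/d18/f18 bs=1M count=100 &
    fail mds1                                 # framework helper: stop + remount the MDT
    wait                                      # dd returns once MDT0000 recovery completes
    lfs quota -v -u quota_usr /mnt/lustre     # the full 100MB is accounted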
Waiting for MDT destroys to complete PASS 18 (77s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 19: Updating admin limits doesn't zero operational limits(b14790) ========================================================== 09:23:44 (1713533024) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Set user quota (limit: 5M) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 5120 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Update quota limits Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 5120 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota] [count=6] dd: error writing '/mnt/lustre/d19.sanity-quota/f19.sanity-quota': Disk quota exceeded 6+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.199448 s, 26.3 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5120* 0 5120 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5120* - 5120 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 5120 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota] [count=6] [seek=6] dd: error writing '/mnt/lustre/d19.sanity-quota/f19.sanity-quota': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0499997 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5120* 0 5120 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5120* - 5120 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 5120 Delete files... Wait for unlink objects finished... 
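The point of test 19 is that re-issuing setquota (an administrative update) must not wipe the limits already granted to the slaves, so the 5MB ceiling keeps being enforced across the update. Sketched with illustrative paths:

    lfs setquota -u quota_usr -B 5M /mnt/lustre
    lfs setquota -u quota_usr -B 5M /mnt/lustre    # admin update; grants must survive
    runas -u 60000 -g 60000 dd if=/dev/zero of=/mnt/lustre/d19/f19 bs=1M count=6
    # still stops at 5MB with EDQUOT, exactly as in the log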
Waiting for MDT destroys to complete PASS 19 (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 20: Test if setquota specifiers work properly (b15754) ========================================================== 09:24:05 (1713533045) PASS 20 (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 21: Setquota while writing & deleting (b16053) ========================================================== 09:24:14 (1713533054) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Set limit(block:10G; file:1000000) for user: quota_usr Set limit(block:10G; file:1000000) for group: quota_usr lfs setquota: warning: block hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Set limit(block:10G; file:) for project: 1000 lfs setquota: warning: block hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Set quota for 1 times Set quota for 2 times Set quota for 3 times Set quota for 4 times Set quota for 5 times Set quota for 6 times Set quota for 7 times Set quota for 8 times Set quota for 9 times Set quota for 10 times Set quota for 11 times Set quota for 12 times Set quota for 13 times Set quota for 14 times Set quota for 15 times Set quota for 16 times Set quota for 17 times Set quota for 18 times Set quota for 19 times Set quota for 20 times Set quota for 21 times Set quota for 22 times (dd_pid=26754, time=0)successful (dd_pid=26755, time=0)successful Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 21 (48s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 22: enable/disable quota by 'lctl conf_param/set_param -P' ========================================================== 09:25:04 (1713533104) Set both mdt & ost quota type as ug Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Restart... 
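The quota type strings being waited on here are toggled through the MGS, and test 22 verifies that the setting survives the full restart that follows. The documented spellings, using this run's fsname; the get_param check runs on the servers:

    lctl conf_param lustre.quota.mdt=ug            # enforce user+group quota on MDTs
    lctl conf_param lustre.quota.ost=ug            # ... and on OSTs
    lctl get_param osd-*.*.quota_slave.enabled     # verify after remount

lctl set_param -P, exercised by the same test, is the newer persistent form of the same setting.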
Stopping clients: oleg257-client.virtnet /mnt/lustre (opts:) Stopping client oleg257-client.virtnet /mnt/lustre opts: Stopping clients: oleg257-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg257-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg257-server sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/nbd0 on / type ext4 (ro,relatime,stripe=32,data=ordered) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11836) mqueue on /dev/mqueue type mqueue (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) none on /mnt type ramfs (rw,relatime) none on /var/lib/stateless/writable type tmpfs (rw,relatime) /dev/vda on /home/green/git/lustre-release type squashfs (ro,relatime) none on /var/cache/man type tmpfs (rw,relatime) none on /var/log type tmpfs (rw,relatime) none on /var/lib/dbus type tmpfs (rw,relatime) none on /tmp type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) none on /var/tmp type tmpfs (rw,relatime) none on /var/lib/NetworkManager type tmpfs (rw,relatime) none on /var/lib/systemd/random-seed type tmpfs (rw,relatime) none on /var/spool type tmpfs (rw,relatime) none on /var/lib/nfs type tmpfs (rw,relatime) none on /var/lib/gssproxy type tmpfs (rw,relatime) none on /var/lib/logrotate type tmpfs (rw,relatime) none on /etc type tmpfs (rw,relatime) none on /var/lib/rsyslog type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) 192.168.200.253:/exports/state/oleg257-client.virtnet on /var/lib/stateless/state type nfs4 
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.202.57,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg257-client.virtnet/boot on /boot type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.202.57,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg257-client.virtnet/etc/kdump.conf on /etc/kdump.conf type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.202.57,local_lock=none,addr=192.168.200.253) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) 192.168.200.253:/exports/testreports/42140/testresults/sanity-quota-ldiskfs-DNE-centos7_x86_64-centos7_x86_64 on /tmp/testlogs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.202.57,local_lock=none,addr=192.168.200.253) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=382032k,mode=700) /dev/vda on /usr/sbin/mount.lustre type squashfs (ro,relatime) Checking servers environments Checking clients oleg257-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg257-server' oleg257-server: oleg257-server.virtnet: executing load_modules_local oleg257-server: Loading modules from /home/green/git/lustre-release/lustre oleg257-server: detected 4 online CPUs by sysfs oleg257-server: Force libcfs to create 2 CPU partitions oleg257-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Starting client oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Started clients oleg257-client.virtnet: 192.168.202.157@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 
osc.lustre-OST0000-osc-ffff88012dc35800.idle_timeout=debug osc.lustre-OST0001-osc-ffff88012dc35800.idle_timeout=debug Verify if quota is enabled Set both mdt & ost quota type as none Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' Restart... Stopping clients: oleg257-client.virtnet /mnt/lustre (opts:) Stopping client oleg257-client.virtnet /mnt/lustre opts: Stopping clients: oleg257-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg257-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg257-server Checking servers environments Checking clients oleg257-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg257-server' oleg257-server: oleg257-server.virtnet: executing load_modules_local oleg257-server: Loading modules from /home/green/git/lustre-release/lustre oleg257-server: detected 4 online CPUs by sysfs oleg257-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Starting client oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Started clients oleg257-client.virtnet:
192.168.202.157@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff88012dc36000.idle_timeout=debug osc.lustre-OST0001-osc-ffff88012dc36000.idle_timeout=debug Verify if quota is disabled PASS 22 (139s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 23: Quota should be honored with directIO (b16125) ========================================================== 09:27:25 (1713533245) OST0_SIZE: 3605408 required: 6144 run for 4MB test file Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' User quota (limit: 4 MB) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 4096 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Step1: trigger EDQUOT with O_DIRECT Write half of file running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=2] [oflag=direct] 2+0 records in 2+0 records out 2097152 bytes (2.1 MB) copied, 0.0779837 s, 26.9 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=3] [seek=2] [oflag=direct] [conv=notrunc] dd: error writing '/mnt/lustre/d23.sanity-quota/f23.sanity-quota': Disk quota exceeded 2+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0530924 s, 19.8 MB/s Step1: done Step2: rewrite should succeed running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=1] [oflag=direct] [conv=notrunc] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0397874 s, 26.4 MB/s Step2: done Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 OST0_SIZE: 3605408 required: 61440 run for 40MB test file Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (limit: 40 MB) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 40960 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Step1: trigger EDQUOT with O_DIRECT Write half of file running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=20] [oflag=direct] 20+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 0.71099 s, 29.5 MB/s Write out of block quota ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=21] [seek=20] [oflag=direct] [conv=notrunc] dd: error writing '/mnt/lustre/d23.sanity-quota/f23.sanity-quota': Disk quota exceeded 20+0 records in 19+0 records out 19922944 bytes (20 MB) copied, 0.661403 s, 30.1 MB/s Step1: done Step2: rewrite should succeed running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=1] [oflag=direct] [conv=notrunc] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0396916 s, 26.4 MB/s Step2: done Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 23 (47s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 24: lfs draws an asterix when limit is reached (b16646) ========================================================== 09:28:14 (1713533294) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Set user quota (limit: 5M) running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d24.sanity-quota/f24.sanity-quota] [count=6] 6+0 records in 6+0 records out 6291456 bytes (6.3 MB) copied, 0.161278 s, 39.0 MB/s /mnt/lustre 6144* 0 5120 - 1 0 0 - 6144* - 6144 - - - - - Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 24 (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 25: check indexes versions ========== 09:28:31 (1713533311) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg257-server: Pool lustre.qpool1 created Adding targets to pool oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.141104 s, 37.2 MB/s Write out of block quota ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.139511 s, 37.6 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0721758 s, 0.0 kB/s Destroy the created pools: qpool1 lustre.qpool1 oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg257-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 25 (43s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27a: lfs quota/setquota should handle wrong arguments (b19612) ========================================================== 09:29:16 (1713533356) lfs quota: name and mount point must be specified Display disk usage and limits. usage: quota [-q] [-v] [-h] [-o OBD_UUID|-i MDT_IDX|-I OST_IDX] [{-u|-g|-p} UNAME|UID|GNAME|GID|PROJID] [--pool <pool_name>] <filesystem> quota -t <-u|-g|-p> [--pool <pool_name>] <filesystem> quota [-q] [-v] [h] {-U|-G|-P} [--pool <pool_name>] <filesystem> quota -a {-u|-g|-p} [-s start_qid] [-e end_qid] <filesystem> lfs setquota: either -u or -g must be specified setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM PASS 27a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27b: lfs quota/setquota should handle user/group/project ID (b20200) ========================================================== 09:29:21 (1713533361) lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details
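The repeated warnings above are expected: each of the three setquota calls sets soft and hard limits of 1000, below the least qunit size, which Lustre accepts but flags. Test 27b's actual subject is that numeric IDs are accepted wherever names are; a sketch of the forms being exercised:

    lfs setquota -u 60000 -b 1000 -B 1000 -i 1000 -I 1000 /mnt/lustre
    lfs setquota -g 60000 -b 1000 -B 1000 -i 1000 -I 1000 /mnt/lustre
    lfs setquota -p 1000 -b 1000 -B 1000 -i 1000 -I 1000 /mnt/lustre
    lfs quota -u 60000 /mnt/lustre    # reports by uid, as in the tables below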
Disk quotas for usr 60000 (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp 60000 (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 PASS 27b (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27c: lfs quota should support human-readable output ========================================================== 09:29:27 (1713533367) PASS 27c (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27d: lfs setquota should support fraction block limit ========================================================== 09:29:33 (1713533373) PASS 27d (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 30: Hard limit updates should not reset grace times ========================================================== 09:29:38 (1713533378) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.23945 s, 35.0 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8192* 4096 0 expired 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8192 - 9264 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 9264 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [conv=notrunc] [oflag=append] [count=4] dd: error writing '/mnt/lustre/d30.sanity-quota/f30.sanity-quota': Disk quota exceeded 2+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.129903 s, 8.1 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 9216* 4096 0 expired 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 9216 - 9264 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 9264 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [conv=notrunc] [oflag=append] [count=4] dd: error writing '/mnt/lustre/d30.sanity-quota/f30.sanity-quota': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.061939 s, 0.0 kB/s Delete 
files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 30 (27s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 33: Basic usage tracking for user & group & project ========================================================== 09:30:07 (1713533407) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write files... lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-0 Iteration 0/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-1 Iteration 1/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-2 Iteration 2/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-3 Iteration 3/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-4 Iteration 4/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-5 Iteration 5/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-6 Iteration 6/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-7 Iteration 7/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-8 Iteration 8/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-9 Iteration 9/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-10 Iteration 10/10 completed Wait for setattr on objects finished... Waiting for MDT destroys to complete Verify disk usage after write Verify inode usage after write Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Verify disk usage after delete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 33 (44s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 34: Usage transfer for user & group & project ========================================================== 09:30:53 (1713533453) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write file... chown the file to user 60000 Wait for setattr on objects finished... Waiting for MDT destroys to complete Verify disk usage for user 60000 chgrp the file to group 60000 Wait for setattr on objects finished... Waiting for MDT destroys to complete Verify disk usage for group 60000 chown the file to user 60001 Wait for setattr on objects finished... Waiting for MDT destroys to complete change_project project id to 1000 lfs project -p 1000 /mnt/lustre/d34.sanity-quota/f34.sanity-quota Wait for setattr on objects finished... Waiting for MDT destroys to complete Verify disk usage for user 60001/60000 and group 60000 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 34 (61s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 35: Usage is still accessible across reboot ========================================================== 09:31:56 (1713533516) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write file... lfs project -p 1000 /mnt/lustre/d35.sanity-quota/f35.sanity-quota Wait for setattr on objects finished... Waiting for MDT destroys to complete Save disk usage before restart User 60000: 2048KB 1 inodes Group 60000: 2048KB 1 inodes Project 1000: 2048KB 1 inodes Restart... 
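Tests 33 and 34 above verify that usage accounting follows identity changes: a chown moves a file's blocks and inode from one user's usage to another's, and lfs project does the same for project accounting, with no limits involved at all. A sketch (sizes and IDs as used by this run; paths illustrative):

    dd if=/dev/zero of=/mnt/lustre/d34/f34 bs=1M count=2
    chown 60000:60000 /mnt/lustre/d34/f34    # charged to uid/gid 60000
    lfs quota -u 60000 /mnt/lustre           # shows 2048KB, 1 inode
    chown 60001 /mnt/lustre/d34/f34          # usage moves to uid 60001
    lfs project -p 1000 /mnt/lustre/d34/f34  # and is charged to project 1000
    lfs quota -p 1000 /mnt/lustre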
Stopping clients: oleg257-client.virtnet /mnt/lustre (opts:) Stopping client oleg257-client.virtnet /mnt/lustre opts: Stopping clients: oleg257-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg257-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg257-server Checking servers environments Checking clients oleg257-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg257-server' oleg257-server: oleg257-server.virtnet: executing load_modules_local oleg257-server: Loading modules from /home/green/git/lustre-release/lustre oleg257-server: detected 4 online CPUs by sysfs oleg257-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Starting client oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Started clients oleg257-client.virtnet: 192.168.202.157@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800b6ed3800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800b6ed3800.idle_timeout=debug affected facets: Verify disk usage after restart Append to the same file... Verify space usage is increased Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 35 (106s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 37: Quota accounted properly for file created by 'lfs setstripe' ========================================================== 09:33:43 (1713533623) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0744502 s, 14.1 MB/s Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
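Test 35's persistence check needs no special tooling: usage is stored with the OSD objects, so the numbers reported before the restart must come back unchanged once everything is remounted. Sketch:

    lfs quota -u quota_usr /mnt/lustre   # record usage (2048KB, 1 inode in this run)
    # stop and restart all servers and the client, as in the log above
    lfs quota -u quota_usr /mnt/lustre   # must match the values recorded before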
Waiting for MDT destroys to complete PASS 37 (23s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 38: Quota accounting iterator doesn't skip id entries ========================================================== 09:34:09 (1713533649) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Create 10000 files... Found 10000 id entries Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 38 (483s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 39: Project ID interface works correctly ========================================================== 09:42:14 (1713534134) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -p 1024 /mnt/lustre/d39.sanity-quota/project Stopping clients: oleg257-client.virtnet /mnt/lustre (opts:) Stopping client oleg257-client.virtnet /mnt/lustre opts: Stopping clients: oleg257-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg257-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg257-server Checking servers environments Checking clients oleg257-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory loading modules on: 'oleg257-server' oleg257-server: oleg257-server.virtnet: executing load_modules_local oleg257-server: Loading modules from /home/green/git/lustre-release/lustre oleg257-server: detected 4 online CPUs by sysfs oleg257-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Starting client oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Started clients oleg257-client.virtnet: 192.168.202.157@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a7e5b800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a7e5b800.idle_timeout=debug Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 39 (79s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40a: Hard link across different project ID ========================================================== 09:43:34 (1713534214) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40a.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d40a.sanity-quota/dir2 ln: failed to create hard link '/mnt/lustre/d40a.sanity-quota/dir2/1_link' => '/mnt/lustre/d40a.sanity-quota/dir1/1': Invalid cross-device link Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 40a (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40b: Mv across different project ID ========================================================== 09:43:51 (1713534231) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40b.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d40b.sanity-quota/dir2 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 40b (17s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40c: Remote child Dir inherit project quota properly ========================================================== 09:44:10 (1713534250) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40c.sanity-quota/dir Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 40c (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40d: Stripe Directory inherit project quota properly ========================================================== 09:44:26 (1713534266) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1000 /mnt/lustre/d40d.sanity-quota/dir Delete files... Wait for unlink objects finished... 
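Tests 39 through 40d above all lean on the project-inherit flag: 'lfs project -sp' stamps a directory with a project ID and marks it so new children inherit that ID (test 39 shows the ID surviving a full server restart), while hard links and renames that would move an inode between projects are refused with the same errno as a cross-device link. A sketch with illustrative paths:

    lfs project -sp 1 /mnt/lustre/dir1
    lfs project -sp 2 /mnt/lustre/dir2
    touch /mnt/lustre/dir1/f                    # inherits project ID 1
    lfs project /mnt/lustre/dir1/f              # report the file's project ID
    ln /mnt/lustre/dir1/f /mnt/lustre/dir2/f    # fails: Invalid cross-device link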
Waiting for MDT destroys to complete PASS 40d (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 41: df should return projid-specific values ========================================================== 09:44:41 (1713534281) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' striped dir -i1 -c2 -H crush /mnt/lustre/d41.sanity-quota/dir lfs project -sp 41000 /mnt/lustre/d41.sanity-quota/dir == global statfs: /mnt/lustre == Filesystem 1024-blocks Used Available Capacity Mounted on 192.168.202.157@tcp:/lustre 7666232 4832 7209208 1% /mnt/lustre Filesystem Inodes IUsed IFree IUse% Mounted on 192.168.202.157@tcp:/lustre 523966 598 523368 1% /mnt/lustre Disk quotas for prj 41000 (pid 41000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre/d41.sanity-quota/dir 12 0 102400 - 3 0 4096 - == project statfs (prjid=41000): /mnt/lustre/d41.sanity-quota/dir == Filesystem 1024-blocks Used Available Capacity Mounted on 192.168.202.157@tcp:/lustre 102400 12 102388 1% /mnt/lustre Filesystem Inodes IUsed IFree IUse% Mounted on 192.168.202.157@tcp:/lustre 4096 3 4093 1% /mnt/lustre llite.lustre-ffff8800a7e5b800.statfs_project=0 llite.lustre-ffff8800a7e5b800.statfs_project=1 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 41 (26s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 48: lfs quota --delete should delete quota project ID ========================================================== 09:45:08 (1713534308) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0466616 s, 22.5 MB/s - id: 60000 osd-ldiskfs - id: 60000 pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0404764 s, 25.9 MB/s - id: 60000 cat: /proc/fs/lustre/osd-ldiskfs/lustre-OST0000/quota_slave/limit_user: No such file or directory - id: 60000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0446103 s, 23.5 MB/s - id: 60000 osd-ldiskfs - id: 60000 pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0464443 s, 22.6 MB/s - id: 60000 cat: /proc/fs/lustre/osd-ldiskfs/lustre-OST0000/quota_slave/limit_group: No such file or directory - id: 60000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 
MB) copied, 0.0313782 s, 33.4 MB/s - id: 10000 osd-ldiskfs - id: 10000 pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0357151 s, 29.4 MB/s - id: 10000 cat: /proc/fs/lustre/osd-ldiskfs/lustre-OST0000/quota_slave/limit_project: No such file or directory - id: 10000 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 48 (40s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 49: lfs quota -a prints the quota usage for all quota IDs ========================================================== 09:45:50 (1713534350) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 setquota for users and groups fail_loc=0xa09 lfs setquota: 1000 / 31 seconds fail_loc=0 903 0 0 102400 - 0 0 10240 - 904 0 0 102400 - 0 0 10240 - 905 0 0 102400 - 0 0 10240 - 906 0 0 102400 - 0 0 10240 - 907 0 0 102400 - 0 0 10240 - 908 0 0 102400 - 0 0 10240 - 909 0 0 102400 - 0 0 10240 - 910 0 0 102400 - 0 0 10240 - 911 0 0 102400 - 0 0 10240 - 912 0 0 102400 - 0 0 10240 - 913 0 0 102400 - 0 0 10240 - 914 0 0 102400 - 0 0 10240 - 915 0 0 102400 - 0 0 10240 - 916 0 0 102400 - 0 0 10240 - 917 0 0 102400 - 0 0 10240 - 918 0 0 102400 - 0 0 10240 - 919 0 0 102400 - 0 0 10240 - 920 0 0 102400 - 0 0 10240 - 921 0 0 102400 - 0 0 10240 - 922 0 0 102400 - 0 0 10240 - 923 0 0 102400 - 0 0 10240 - 924 0 0 102400 - 0 0 10240 - 925 0 0 102400 - 0 0 10240 - 926 0 0 102400 - 0 0 10240 - 927 0 0 102400 - 0 0 10240 - 928 0 0 102400 - 0 0 10240 - 929 0 0 102400 - 0 0 10240 - 930 0 0 102400 - 0 0 10240 - 931 0 0 102400 - 0 0 10240 - 932 0 0 102400 - 0 0 10240 - 933 0 0 102400 - 0 0 10240 - 934 0 0 102400 - 0 0 10240 - 935 0 0 102400 - 0 0 10240 - 936 0 0 102400 - 0 0 10240 - 937 0 0 102400 - 0 0 10240 - 938 0 0 102400 - 0 0 10240 - 939 0 0 102400 - 0 0 10240 - 940 0 0 102400 - 0 0 10240 - 941 0 0 102400 - 0 0 10240 - 942 0 0 102400 - 0 0 10240 - 943 0 0 102400 - 0 0 10240 - 944 0 0 102400 - 0 0 10240 - 945 0 0 102400 - 0 0 10240 - 946 0 0 102400 - 0 0 10240 - 947 0 0 102400 - 0 0 10240 - 948 0 0 102400 - 0 0 10240 - 949 0 0 102400 - 0 0 10240 - 950 0 0 102400 - 0 0 10240 - 951 0 0 102400 - 0 0 10240 - 952 0 0 102400 - 0 0 10240 - 953 0 0 102400 - 0 0 10240 - 954 0 0 102400 - 0 0 10240 - 955 0 0 102400 - 0 0 10240 - 956 0 0 102400 - 0 0 10240 - 957 0 0 102400 - 0 0 10240 - 958 0 0 102400 - 0 0 10240 - 959 0 0 102400 - 0 0 10240 - 960 0 0 102400 - 0 0 10240 - 961 0 0 102400 - 0 0 10240 - 962 0 0 102400 - 0 0 10240 - 963 0 0 102400 - 0 0 10240 - 964 0 0 102400 - 0 0 10240 - 965 0 0 102400 - 0 0 10240 - 966 0 0 102400 - 0 0 10240 - 967 0 0 102400 - 0 0 10240 - 968 0 0 102400 - 0 0 10240 - 969 0 0 102400 - 0 0 10240 - 970 0 0 102400 - 0 0 10240 - 971 0 0 102400 - 0 0 10240 - 972 0 0 102400 - 0 0 10240 - 973 0 0 102400 - 0 0 10240 - 974 0 0 102400 - 0 0 10240 - 975 0 0 102400 - 0 0 10240 - 976 0 0 102400 - 0 0 10240 - 977 0 0 102400 - 0 0 10240 - 978 0 0 102400 - 0 0 10240 - 979 0 0 102400 - 0 0 10240 - 980 0 0 102400 - 0 0 10240 - 981 0 0 102400 - 0 0 10240 - 982 0 0 102400 - 0 0 10240 - 983 0 0 102400 - 0 0 10240 - 984 0 0 102400 - 0 0 10240 - 985 0 0 
102400 - 0 0 10240 - 986 0 0 102400 - 0 0 10240 - 987 0 0 102400 - 0 0 10240 - 988 0 0 102400 - 0 0 10240 - 989 0 0 102400 - 0 0 10240 - 990 0 0 102400 - 0 0 10240 - 991 0 0 102400 - 0 0 10240 - 992 0 0 102400 - 0 0 10240 - 993 0 0 102400 - 0 0 10240 - 994 0 0 102400 - 0 0 10240 - 995 0 0 102400 - 0 0 10240 - 996 0 0 102400 - 0 0 10240 - 997 0 0 102400 - 0 0 10240 - 998 0 0 102400 - 0 0 10240 - polkitd 0 0 102400 - 0 0 10240 - green 0 0 102400 - 0 0 10240 - quota_usr 0 0 0 - 0 [0] [0] - quota_2usr 0 0 0 - 0 0 0 - get all usr quota: 1000 / 0 seconds 903 0 0 204800 - 0 0 20480 - 904 0 0 204800 - 0 0 20480 - 905 0 0 204800 - 0 0 20480 - 906 0 0 204800 - 0 0 20480 - 907 0 0 204800 - 0 0 20480 - 908 0 0 204800 - 0 0 20480 - 909 0 0 204800 - 0 0 20480 - 910 0 0 204800 - 0 0 20480 - 911 0 0 204800 - 0 0 20480 - 912 0 0 204800 - 0 0 20480 - 913 0 0 204800 - 0 0 20480 - 914 0 0 204800 - 0 0 20480 - 915 0 0 204800 - 0 0 20480 - 916 0 0 204800 - 0 0 20480 - 917 0 0 204800 - 0 0 20480 - 918 0 0 204800 - 0 0 20480 - 919 0 0 204800 - 0 0 20480 - 920 0 0 204800 - 0 0 20480 - 921 0 0 204800 - 0 0 20480 - 922 0 0 204800 - 0 0 20480 - 923 0 0 204800 - 0 0 20480 - 924 0 0 204800 - 0 0 20480 - 925 0 0 204800 - 0 0 20480 - 926 0 0 204800 - 0 0 20480 - 927 0 0 204800 - 0 0 20480 - 928 0 0 204800 - 0 0 20480 - 929 0 0 204800 - 0 0 20480 - 930 0 0 204800 - 0 0 20480 - 931 0 0 204800 - 0 0 20480 - 932 0 0 204800 - 0 0 20480 - 933 0 0 204800 - 0 0 20480 - 934 0 0 204800 - 0 0 20480 - 935 0 0 204800 - 0 0 20480 - 936 0 0 204800 - 0 0 20480 - 937 0 0 204800 - 0 0 20480 - 938 0 0 204800 - 0 0 20480 - 939 0 0 204800 - 0 0 20480 - 940 0 0 204800 - 0 0 20480 - 941 0 0 204800 - 0 0 20480 - 942 0 0 204800 - 0 0 20480 - 943 0 0 204800 - 0 0 20480 - 944 0 0 204800 - 0 0 20480 - 945 0 0 204800 - 0 0 20480 - 946 0 0 204800 - 0 0 20480 - 947 0 0 204800 - 0 0 20480 - 948 0 0 204800 - 0 0 20480 - 949 0 0 204800 - 0 0 20480 - 950 0 0 204800 - 0 0 20480 - 951 0 0 204800 - 0 0 20480 - 952 0 0 204800 - 0 0 20480 - 953 0 0 204800 - 0 0 20480 - 954 0 0 204800 - 0 0 20480 - 955 0 0 204800 - 0 0 20480 - 956 0 0 204800 - 0 0 20480 - 957 0 0 204800 - 0 0 20480 - 958 0 0 204800 - 0 0 20480 - 959 0 0 204800 - 0 0 20480 - 960 0 0 204800 - 0 0 20480 - 961 0 0 204800 - 0 0 20480 - 962 0 0 204800 - 0 0 20480 - 963 0 0 204800 - 0 0 20480 - 964 0 0 204800 - 0 0 20480 - 965 0 0 204800 - 0 0 20480 - 966 0 0 204800 - 0 0 20480 - 967 0 0 204800 - 0 0 20480 - 968 0 0 204800 - 0 0 20480 - 969 0 0 204800 - 0 0 20480 - 970 0 0 204800 - 0 0 20480 - 971 0 0 204800 - 0 0 20480 - 972 0 0 204800 - 0 0 20480 - 973 0 0 204800 - 0 0 20480 - 974 0 0 204800 - 0 0 20480 - 975 0 0 204800 - 0 0 20480 - 976 0 0 204800 - 0 0 20480 - 977 0 0 204800 - 0 0 20480 - 978 0 0 204800 - 0 0 20480 - 979 0 0 204800 - 0 0 20480 - 980 0 0 204800 - 0 0 20480 - 981 0 0 204800 - 0 0 20480 - 982 0 0 204800 - 0 0 20480 - 983 0 0 204800 - 0 0 20480 - 984 0 0 204800 - 0 0 20480 - 985 0 0 204800 - 0 0 20480 - 986 0 0 204800 - 0 0 20480 - 987 0 0 204800 - 0 0 20480 - 988 0 0 204800 - 0 0 20480 - 989 0 0 204800 - 0 0 20480 - 990 0 0 204800 - 0 0 20480 - 991 0 0 204800 - 0 0 20480 - 992 0 0 204800 - 0 0 20480 - 993 0 0 204800 - 0 0 20480 - 994 0 0 204800 - 0 0 20480 - systemd-network 0 0 204800 - 0 0 20480 - systemd-bus-proxy 0 0 204800 - 0 0 20480 - input 0 0 204800 - 0 0 20480 - polkitd 0 0 204800 - 0 0 20480 - ssh_keys 0 0 204800 - 0 0 20480 - green 0 0 204800 - 0 0 20480 - quota_usr 0 0 0 - 0 [0] [0] - quota_2usr 0 0 0 - 0 0 0 - get all grp quota: 1000 / 0 seconds Create 991 files... 
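The long runs of numbers above are per-ID quota rows squashed together by the log capture; each ID is followed by the usual 'lfs quota' columns (kbytes quota limit grace, then files quota limit grace), one row per uid/gid that test 49 configured. The two operations behind them, sketched, assuming a Lustre release new enough to support the bulk '-a' listing:

    lfs setquota -u 903 -B 100M -I 10240 /mnt/lustre   # one of the 1000 timed setquota calls
    lfs quota -u -a /mnt/lustre                        # dump usage and limits for all user IDs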
- open/close 885 (time 1713534399.35 total 10.00 last 88.47) total: 991 open/close in 11.34 seconds: 87.35 ops/second 951 4 0 102400 - 1 0 10240 - 952 4 0 102400 - 1 0 10240 - 953 4 0 102400 - 1 0 10240 - 954 4 0 102400 - 1 0 10240 - 955 4 0 102400 - 1 0 10240 - 956 4 0 102400 - 1 0 10240 - 957 4 0 102400 - 1 0 10240 - 958 4 0 102400 - 1 0 10240 - 959 4 0 102400 - 1 0 10240 - 960 4 0 102400 - 1 0 10240 - 961 4 0 102400 - 1 0 10240 - 962 4 0 102400 - 1 0 10240 - 963 4 0 102400 - 1 0 10240 - 964 4 0 102400 - 1 0 10240 - 965 4 0 102400 - 1 0 10240 - 966 4 0 102400 - 1 0 10240 - 967 4 0 102400 - 1 0 10240 - 968 4 0 102400 - 1 0 10240 - 969 4 0 102400 - 1 0 10240 - 970 4 0 102400 - 1 0 10240 - 971 4 0 102400 - 1 0 10240 - 972 4 0 102400 - 1 0 10240 - 973 4 0 102400 - 1 0 10240 - 974 4 0 102400 - 1 0 10240 - 975 4 0 102400 - 1 0 10240 - 976 4 0 102400 - 1 0 10240 - 977 4 0 102400 - 1 0 10240 - 978 4 0 102400 - 1 0 10240 - 979 4 0 102400 - 1 0 10240 - 980 4 0 102400 - 1 0 10240 - 981 4 0 102400 - 1 0 10240 - 982 4 0 102400 - 1 0 10240 - 983 4 0 102400 - 1 0 10240 - 984 4 0 102400 - 1 0 10240 - 985 4 0 102400 - 1 0 10240 - 986 4 0 102400 - 1 0 10240 - 987 4 0 102400 - 1 0 10240 - 988 4 0 102400 - 1 0 10240 - 989 4 0 102400 - 1 0 10240 - 990 4 0 102400 - 1 0 10240 - 991 4 0 102400 - 1 0 10240 - 992 4 0 102400 - 1 0 10240 - 993 4 0 102400 - 1 0 10240 - 994 4 0 102400 - 1 0 10240 - 995 4 0 102400 - 1 0 10240 - 996 4 0 102400 - 1 0 10240 - 997 4 0 102400 - 1 0 10240 - 998 4 0 102400 - 1 0 10240 - polkitd 4 0 102400 - 1 0 10240 - green 4 0 102400 - 1 0 10240 - time=0, rate=991/0 951 4 0 204800 - 1 0 20480 - 952 4 0 204800 - 1 0 20480 - 953 4 0 204800 - 1 0 20480 - 954 4 0 204800 - 1 0 20480 - 955 4 0 204800 - 1 0 20480 - 956 4 0 204800 - 1 0 20480 - 957 4 0 204800 - 1 0 20480 - 958 4 0 204800 - 1 0 20480 - 959 4 0 204800 - 1 0 20480 - 960 4 0 204800 - 1 0 20480 - 961 4 0 204800 - 1 0 20480 - 962 4 0 204800 - 1 0 20480 - 963 4 0 204800 - 1 0 20480 - 964 4 0 204800 - 1 0 20480 - 965 4 0 204800 - 1 0 20480 - 966 4 0 204800 - 1 0 20480 - 967 4 0 204800 - 1 0 20480 - 968 4 0 204800 - 1 0 20480 - 969 4 0 204800 - 1 0 20480 - 970 4 0 204800 - 1 0 20480 - 971 4 0 204800 - 1 0 20480 - 972 4 0 204800 - 1 0 20480 - 973 4 0 204800 - 1 0 20480 - 974 4 0 204800 - 1 0 20480 - 975 4 0 204800 - 1 0 20480 - 976 4 0 204800 - 1 0 20480 - 977 4 0 204800 - 1 0 20480 - 978 4 0 204800 - 1 0 20480 - 979 4 0 204800 - 1 0 20480 - 980 4 0 204800 - 1 0 20480 - 981 4 0 204800 - 1 0 20480 - 982 4 0 204800 - 1 0 20480 - 983 4 0 204800 - 1 0 20480 - 984 4 0 204800 - 1 0 20480 - 985 4 0 204800 - 1 0 20480 - 986 4 0 204800 - 1 0 20480 - 987 4 0 204800 - 1 0 20480 - 988 4 0 204800 - 1 0 20480 - 989 4 0 204800 - 1 0 20480 - 990 4 0 204800 - 1 0 20480 - 991 4 0 204800 - 1 0 20480 - 992 4 0 204800 - 1 0 20480 - 993 4 0 204800 - 1 0 20480 - 994 4 0 204800 - 1 0 20480 - systemd-network 4 0 204800 - 1 0 20480 - systemd-bus-proxy 4 0 204800 - 1 0 20480 - input 4 0 204800 - 1 0 20480 - polkitd 4 0 204800 - 1 0 20480 - ssh_keys 4 0 204800 - 1 0 20480 - green 4 0 204800 - 1 0 20480 - time=0, rate=991/0 - unlinked 0 (time 1713534410 ; total 0 ; last 0) total: 991 unlinks in 3 seconds: 330.333344 unlinks/second Create 991 files... 
- open/close 972 (time 1713534430.72 total 10.01 last 97.15) total: 991 open/close in 10.17 seconds: 97.43 ops/second 951 4 0 102400 - 1 0 10240 - 952 4 0 102400 - 1 0 10240 - 953 4 0 102400 - 1 0 10240 - 954 4 0 102400 - 1 0 10240 - 955 4 0 102400 - 1 0 10240 - 956 4 0 102400 - 1 0 10240 - 957 4 0 102400 - 1 0 10240 - 958 4 0 102400 - 1 0 10240 - 959 4 0 102400 - 1 0 10240 - 960 4 0 102400 - 1 0 10240 - 961 4 0 102400 - 1 0 10240 - 962 4 0 102400 - 1 0 10240 - 963 4 0 102400 - 1 0 10240 - 964 4 0 102400 - 1 0 10240 - 965 4 0 102400 - 1 0 10240 - 966 4 0 102400 - 1 0 10240 - 967 4 0 102400 - 1 0 10240 - 968 4 0 102400 - 1 0 10240 - 969 4 0 102400 - 1 0 10240 - 970 4 0 102400 - 1 0 10240 - 971 4 0 102400 - 1 0 10240 - 972 4 0 102400 - 1 0 10240 - 973 4 0 102400 - 1 0 10240 - 974 4 0 102400 - 1 0 10240 - 975 4 0 102400 - 1 0 10240 - 976 4 0 102400 - 1 0 10240 - 977 4 0 102400 - 1 0 10240 - 978 4 0 102400 - 1 0 10240 - 979 4 0 102400 - 1 0 10240 - 980 4 0 102400 - 1 0 10240 - 981 4 0 102400 - 1 0 10240 - 982 4 0 102400 - 1 0 10240 - 983 4 0 102400 - 1 0 10240 - 984 4 0 102400 - 1 0 10240 - 985 4 0 102400 - 1 0 10240 - 986 4 0 102400 - 1 0 10240 - 987 4 0 102400 - 1 0 10240 - 988 4 0 102400 - 1 0 10240 - 989 4 0 102400 - 1 0 10240 - 990 4 0 102400 - 1 0 10240 - 991 4 0 102400 - 1 0 10240 - 992 4 0 102400 - 1 0 10240 - 993 4 0 102400 - 1 0 10240 - 994 4 0 102400 - 1 0 10240 - 995 4 0 102400 - 1 0 10240 - 996 4 0 102400 - 1 0 10240 - 997 4 0 102400 - 1 0 10240 - 998 4 0 102400 - 1 0 10240 - polkitd 4 0 102400 - 1 0 10240 - green 4 0 102400 - 1 0 10240 - time=0, rate=991/0 951 4 0 204800 - 1 0 20480 - 952 4 0 204800 - 1 0 20480 - 953 4 0 204800 - 1 0 20480 - 954 4 0 204800 - 1 0 20480 - 955 4 0 204800 - 1 0 20480 - 956 4 0 204800 - 1 0 20480 - 957 4 0 204800 - 1 0 20480 - 958 4 0 204800 - 1 0 20480 - 959 4 0 204800 - 1 0 20480 - 960 4 0 204800 - 1 0 20480 - 961 4 0 204800 - 1 0 20480 - 962 4 0 204800 - 1 0 20480 - 963 4 0 204800 - 1 0 20480 - 964 4 0 204800 - 1 0 20480 - 965 4 0 204800 - 1 0 20480 - 966 4 0 204800 - 1 0 20480 - 967 4 0 204800 - 1 0 20480 - 968 4 0 204800 - 1 0 20480 - 969 4 0 204800 - 1 0 20480 - 970 4 0 204800 - 1 0 20480 - 971 4 0 204800 - 1 0 20480 - 972 4 0 204800 - 1 0 20480 - 973 4 0 204800 - 1 0 20480 - 974 4 0 204800 - 1 0 20480 - 975 4 0 204800 - 1 0 20480 - 976 4 0 204800 - 1 0 20480 - 977 4 0 204800 - 1 0 20480 - 978 4 0 204800 - 1 0 20480 - 979 4 0 204800 - 1 0 20480 - 980 4 0 204800 - 1 0 20480 - 981 4 0 204800 - 1 0 20480 - 982 4 0 204800 - 1 0 20480 - 983 4 0 204800 - 1 0 20480 - 984 4 0 204800 - 1 0 20480 - 985 4 0 204800 - 1 0 20480 - 986 4 0 204800 - 1 0 20480 - 987 4 0 204800 - 1 0 20480 - 988 4 0 204800 - 1 0 20480 - 989 4 0 204800 - 1 0 20480 - 990 4 0 204800 - 1 0 20480 - 991 4 0 204800 - 1 0 20480 - 992 4 0 204800 - 1 0 20480 - 993 4 0 204800 - 1 0 20480 - 994 4 0 204800 - 1 0 20480 - systemd-network 4 0 204800 - 1 0 20480 - systemd-bus-proxy 4 0 204800 - 1 0 20480 - input 4 0 204800 - 1 0 20480 - polkitd 4 0 204800 - 1 0 20480 - ssh_keys 4 0 204800 - 1 0 20480 - green 4 0 204800 - 1 0 20480 - time=0, rate=991/0 - unlinked 0 (time 1713534440 ; total 0 ; last 0) total: 991 unlinks in 3 seconds: 330.333344 unlinks/second fail_loc=0xa08 fail_loc=0 Stopping clients: oleg257-client.virtnet /mnt/lustre (opts:-f) Stopping client oleg257-client.virtnet /mnt/lustre opts:-f Stopping clients: oleg257-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-mds1 (opts:-f) on oleg257-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg257-server Stopping 
/mnt/lustre-ost1 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg257-server oleg257-server: oleg257-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg257-server' oleg257-server: oleg257-server.virtnet: executing load_modules_local oleg257-server: Loading modules from /home/green/git/lustre-release/lustre oleg257-server: detected 4 online CPUs by sysfs oleg257-server: Force libcfs to create 2 CPU partitions oleg257-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey Checking servers environments Checking clients oleg257-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg257-server' oleg257-server: oleg257-server.virtnet: executing load_modules_local oleg257-server: Loading modules from /home/green/git/lustre-release/lustre oleg257-server: detected 4 online CPUs by sysfs oleg257-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 Starting client: oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Starting client oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Started clients oleg257-client.virtnet: 192.168.202.157@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800aa6f6000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800aa6f6000.idle_timeout=debug Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
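The fail_loc/fail_val settings that bracket the steps above (0xa09 before the timed setquota loop, 0xa08 before teardown) are Lustre's fault-injection knob: the suite arms a numbered OBD_FAIL point, runs the operation under test, then disarms it. The codes are test-specific; the mechanism itself, sketched:

    lctl set_param fail_loc=0xa09            # arm the injection point
    # ... exercise the code path under test ...
    lctl set_param fail_loc=0 fail_val=0     # disarm before the next test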
Waiting for MDT destroys to complete PASS 49 (200s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 50: Test if lfs find --projid works ========================================================== 09:49:12 (1713534552) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d50.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d50.sanity-quota/dir2 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 50 (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 51: Test project accounting with mv/cp ========================================================== 09:49:24 (1713534564) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d51.sanity-quota/dir 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.00837214 s, 125 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 51 (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 52: Rename normal file across project ID ========================================================== 09:49:40 (1713534580) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 0.505416 s, 207 MB/s Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102404 0 0 - 2 0 0 - Disk quotas for prj 1001 (pid 1001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4 0 0 - 1 0 0 - pid 1001 is using default block quota setting pid 1001 is using default file quota setting rename '/mnt/lustre/d52.sanity-quota/t52_dir1' returned -1: Invalid cross-device link rename directory return 255 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4 0 0 - 1 0 0 - Disk quotas for prj 1001 (pid 1001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102404 0 0 - 2 0 0 - pid 1001 is using default block quota setting pid 1001 is using default file quota setting Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 52 (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 53: Project inherit attribute could be cleared ========================================================== 09:49:59 (1713534599) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -s /mnt/lustre/d53.sanity-quota/dir lfs project -C /mnt/lustre/d53.sanity-quota/dir Delete files... Wait for unlink objects finished... 
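Tests 50 through 53 above round out the project interface: 'lfs find --projid' selects files by project, renaming a plain file across projects re-charges its usage (the quota reports around test 52's rename show the 102404 kbytes flipping from project 1000 to 1001) while renaming a directory across projects is refused, and the inherit flag can be cleared again. A sketch with illustrative paths:

    lfs find /mnt/lustre/d50 --projid 1    # list entries charged to project 1
    lfs project -C /mnt/lustre/d53/dir     # clear the ID and inherit flag, as in test 53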
Waiting for MDT destroys to complete PASS 53 (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 54: basic lfs project interface test ========================================================== 09:50:07 (1713534607) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1000 /mnt/lustre/d54.sanity-quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d54.sanity-quota/f54.sanity-quota-0] [100] total: 100 create in 0.23 seconds: 436.57 ops/second lfs project -rCk /mnt/lustre/d54.sanity-quota lfs project -rC /mnt/lustre/d54.sanity-quota - unlinked 0 (time 1713534611 ; total 0 ; last 0) total: 100 unlinks in 0 seconds: inf unlinks/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 54 (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 55: Chgrp should be affected by group quota ========================================================== 09:50:17 (1713534617) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d55.sanity-quota/f55.sanity-quota] [bs=1024] [count=100000] 100000+0 records in 100000+0 records out 102400000 bytes (102 MB) copied, 12.4854 s, 8.2 MB/s Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 51200 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60001/60000/60001, groups: [chgrp] [quota_2usr] [/mnt/lustre/d55.sanity-quota/f55.sanity-quota] chgrp: changing group of '/mnt/lustre/d55.sanity-quota/f55.sanity-quota': Disk quota exceeded 0 Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 307200 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60001/60000/60001, groups: [chgrp] [quota_2usr] [/mnt/lustre/d55.sanity-quota/f55.sanity-quota] Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 307200 - 1 0 0 - lustre-MDT0000_UUID 0 - 114688 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 55 (32s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 56: lfs quota -t should work well === 09:50:51 (1713534651) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... 
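Test 55 above is the group-quota angle on ownership changes: chgrp re-charges the file's blocks to the new group, so with quota_2usr capped at 50 MB the chgrp of the roughly 100 MB file fails with EDQUOT, and it succeeds once the cap is raised to 300 MB. Sketch:

    lfs setquota -g quota_2usr -B 50M /mnt/lustre
    chgrp quota_2usr /mnt/lustre/d55/f     # Disk quota exceeded (file is ~100 MB)
    lfs setquota -g quota_2usr -B 300M /mnt/lustre
    chgrp quota_2usr /mnt/lustre/d55/f     # now permitted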
Waiting for MDT destroys to complete PASS 56 (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 57: lfs project could tolerate errors ========================================================== 09:51:02 (1713534662) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 57 (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 58: project ID should be kept for new mirrors created by FID ========================================================== 09:51:17 (1713534677) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] test by mirror created with normal file running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=50] [conv=nocreat] [oflag=direct] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.89086 s, 27.7 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=30] [conv=nocreat] [seek=50] [oflag=direct] 30+0 records in 30+0 records out 31457280 bytes (31 MB) copied, 1.1846 s, 26.6 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror mirror: component 131073 not synced: Disk quota exceeded (122) lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror: '/mnt/lustre/d58.sanity-quota/f58.sanity-quota' llapi_mirror_resync_many: Disk quota exceeded. 
lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) Waiting for MDT destroys to complete test by mirror created with FID running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=50] [conv=nocreat] [oflag=direct] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.91933 s, 27.3 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=30] [conv=nocreat] [seek=50] [oflag=direct] 30+0 records in 30+0 records out 31457280 bytes (31 MB) copied, 1.17299 s, 26.8 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror mirror: component 131073 not synced: Disk quota exceeded (122) lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror: '/mnt/lustre/d58.sanity-quota/f58.sanity-quota' llapi_mirror_resync_many: Disk quota exceeded. lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 58 (52s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 59: lfs project doesn't crash kernel with project disabled ========================================================== 09:52:11 (1713534731) Stopping clients: oleg257-client.virtnet /mnt/lustre (opts:) Stopping client oleg257-client.virtnet /mnt/lustre opts: Stopping clients: oleg257-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg257-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg257-server tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) [client mount table dump, identical to the one captured during test 39 above, omitted] Checking servers environments Checking clients oleg257-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open
'/sys/module/intel_rapl/holders': No such file or directory libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory loading modules on: 'oleg257-server' oleg257-server: oleg257-server.virtnet: executing load_modules_local oleg257-server: Loading modules from /home/green/git/lustre-release/lustre oleg257-server: detected 4 online CPUs by sysfs oleg257-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Starting client oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Started clients oleg257-client.virtnet: 192.168.202.157@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a7e5a000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a7e5a000.idle_timeout=debug Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs: failed to set xattr for '/mnt/lustre/d59.sanity-quota/f59.sanity-quota-0': Operation not supported Stopping clients: oleg257-client.virtnet /mnt/lustre (opts:) Stopping client oleg257-client.virtnet /mnt/lustre opts: Stopping clients: oleg257-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg257-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg257-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg257-server tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup 
(rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/nbd0 on / type ext4 (ro,relatime,stripe=32,data=ordered) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11836) mqueue on /dev/mqueue type mqueue (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) none on /mnt type ramfs (rw,relatime) none on /var/lib/stateless/writable type tmpfs (rw,relatime) /dev/vda on /home/green/git/lustre-release type squashfs (ro,relatime) none on /var/cache/man type tmpfs (rw,relatime) none on /var/log type tmpfs (rw,relatime) none on /var/lib/dbus type tmpfs (rw,relatime) none on /tmp type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) none on /var/tmp type tmpfs (rw,relatime) none on /var/lib/NetworkManager type tmpfs (rw,relatime) none on /var/lib/systemd/random-seed type tmpfs (rw,relatime) none on /var/spool type tmpfs (rw,relatime) none on /var/lib/nfs type tmpfs (rw,relatime) none on /var/lib/gssproxy type tmpfs (rw,relatime) none on /var/lib/logrotate type tmpfs (rw,relatime) none on /etc type tmpfs (rw,relatime) none on /var/lib/rsyslog type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) 192.168.200.253:/exports/state/oleg257-client.virtnet on /var/lib/stateless/state type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.202.57,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg257-client.virtnet/boot on /boot type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.202.57,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg257-client.virtnet/etc/kdump.conf on /etc/kdump.conf type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.202.57,local_lock=none,addr=192.168.200.253) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) 192.168.200.253:/exports/testreports/42140/testresults/sanity-quota-ldiskfs-DNE-centos7_x86_64-centos7_x86_64 on /tmp/testlogs type nfs4 
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.202.57,local_lock=none,addr=192.168.200.253) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=382032k,mode=700) /dev/vda on /usr/sbin/mount.lustre type squashfs (ro,relatime) Checking servers environments Checking clients oleg257-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory loading modules on: 'oleg257-server' oleg257-server: oleg257-server.virtnet: executing load_modules_local oleg257-server: Loading modules from /home/green/git/lustre-release/lustre oleg257-server: detected 4 online CPUs by sysfs oleg257-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Starting client oleg257-client.virtnet: -o user_xattr,flock oleg257-server@tcp:/lustre /mnt/lustre Started clients oleg257-client.virtnet: 192.168.202.157@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800aa665800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800aa665800.idle_timeout=debug Delete files... Wait for unlink objects finished... 
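Test 59 above is a robustness check rather than a quota check: the tune2fs passes toggle the ldiskfs 'project' feature off on the stopped server targets, and with the feature absent 'lfs project' must fail cleanly with 'Operation not supported', as logged, instead of crashing the kernel; a second pass restores the feature before the next test. The server-side toggle, sketched with this run's device names (the target must be unmounted, and whether the flag can be cleared depends on the e2fsprogs version and quota state):

    tune2fs -O ^project /dev/mapper/mds1_flakey   # drop the project feature
    tune2fs -O project /dev/mapper/mds1_flakey    # restore it afterwards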
Waiting for MDT destroys to complete PASS 59 (146s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 60: Test quota for root with setgid ========================================================== 09:54:39 (1713534879) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' lfs setquota: warning: inode hardlimit '100' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 100 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d60.sanity-quota/f60.sanity-quota] [99] total: 99 create in 0.25 seconds: 397.00 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d60.sanity-quota/foo] touch: cannot touch '/mnt/lustre/d60.sanity-quota/foo': Disk quota exceeded running as uid/gid/euid/egid 0/0/0/0, groups: [touch] [/mnt/lustre/d60.sanity-quota/foo] Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 60 (18s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_61 skipping SLOW test 61 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 62: Project inherit should be only changed by root ========================================================== 09:54:59 (1713534899) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [mkdir] [-p] [/mnt/lustre/d62.sanity-quota/] lfs project -s /mnt/lustre/d62.sanity-quota/ running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [chattr] [-P] [/mnt/lustre/d62.sanity-quota/] chattr: Operation not permitted while setting flags on /mnt/lustre/d62.sanity-quota/ Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 62 (7s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_63 skipping excluded test 63 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 64: lfs project on non dir/files should succeed ========================================================== 09:55:09 (1713534909) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... 
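Tests 60 and 62 above pin down two permission rules: quota is enforced against the credentials the operation actually runs with, so the quota user hits EDQUOT at the 100-inode group limit while root's touch still succeeds, and only root may flip the project-inherit bit. Sketch:

    lfs setquota -g quota_usr -I 100 /mnt/lustre   # 100-inode group hard limit
    sudo -u quota_usr touch /mnt/lustre/d60/foo    # Disk quota exceeded at the limit
    sudo -u quota_usr chattr -P /mnt/lustre/d62    # Operation not permitted, as in test 62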
Waiting for MDT destroys to complete PASS 64 (14s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_65 skipping excluded test 65 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 66: nonroot user cannot change project state by default ========================================================== 09:55:26 (1713534926) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 mdt.lustre-MDT0000.enable_chprojid_gid=0 mdt.lustre-MDT0001.enable_chprojid_gid=0 lfs project -sp 1000 /mnt/lustre/d66.sanity-quota/foo running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [mkdir] [/mnt/lustre/d66.sanity-quota/foo/foo] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [0] [/mnt/lustre/d66.sanity-quota/foo/foo] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/foo': Operation not permitted running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-C] [/mnt/lustre/d66.sanity-quota/foo/foo] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/foo': Operation not permitted lfs project -C /mnt/lustre/d66.sanity-quota/foo/foo mdt.lustre-MDT0000.enable_chprojid_gid=-1 mdt.lustre-MDT0001.enable_chprojid_gid=-1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [1000] [/mnt/lustre/d66.sanity-quota/foo/foo] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-rC] [/mnt/lustre/d66.sanity-quota/foo/] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [1000] [/mnt/lustre/d66.sanity-quota/foo/bar] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/bar': Operation not permitted lfs project -p 1000 /mnt/lustre/d66.sanity-quota/foo/bar mdt.lustre-MDT0000.enable_chprojid_gid=0 mdt.lustre-MDT0001.enable_chprojid_gid=0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 66 (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 67: quota pools recalculation ======= 09:55:44 (1713534944) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:20 MB) granted 0x0 before write 0 osd-ldiskfs.lustre-OST0000.quota_slave.force_reint=1 osd-ldiskfs.lustre-OST0001.quota_slave.force_reint=1 affected facets: ost1 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg257-server: *.lustre-OST0000.recovery_status status: INACTIVE affected facets: ost2 oleg257-server: oleg257-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg257-server: *.lustre-OST0001.recovery_status status: INACTIVE file /mnt/lustre/d67.sanity-quota/f67.sanity-quota-0 0 /home/green/git/lustre-release/lustre/tests/sanity-quota.sh 1 /mnt/lustre/d67.sanity-quota/f67.sanity-quota-0 2 user 3 10 4 quota_usr Write...
Fri Apr 19 09:55:54 EDT 2024
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d67.sanity-quota/f67.sanity-quota-0] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0915766 s, 115 MB/s
Fri Apr 19 09:55:54 EDT 2024
Fri Apr 19 09:55:54 EDT 2024
Fri Apr 19 09:55:55 EDT 2024
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
global granted 11264 qpool1 granted 0
Adding targets to pool
oleg257-server: pool_add: lustre-OST0001_UUID is already in pool lustre.qpool1
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 17
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
Granted 11 MB
file /mnt/lustre/d67.sanity-quota/f67.sanity-quota-1
0 /home/green/git/lustre-release/lustre/tests/sanity-quota.sh
1 /mnt/lustre/d67.sanity-quota/f67.sanity-quota-1
2 user
3 10
4 quota_2usr
Write...
Fri Apr 19 09:56:06 EDT 2024
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d67.sanity-quota/f67.sanity-quota-1] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.100958 s, 104 MB/s
Fri Apr 19 09:56:06 EDT 2024
Fri Apr 19 09:56:06 EDT 2024
Fri Apr 19 09:56:07 EDT 2024
granted_mb 10
file /mnt/lustre/d67.sanity-quota/f67.sanity-quota-2
0 /home/green/git/lustre-release/lustre/tests/sanity-quota.sh
1 /mnt/lustre/d67.sanity-quota/f67.sanity-quota-2
2 user
3 10
4 quota_2usr
Write...
Fri Apr 19 09:56:09 EDT 2024
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d67.sanity-quota/f67.sanity-quota-2] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.108644 s, 96.5 MB/s
Fri Apr 19 09:56:09 EDT 2024
Fri Apr 19 09:56:11 EDT 2024
Fri Apr 19 09:56:12 EDT 2024
/mnt/lustre/d67.sanity-quota/f67.sanity-quota-2
granted_mb 20
Removing lustre-OST0000_UUID from qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 67 (63s)
debug_raw_pointers=0
debug_raw_pointers=0
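The pool churn above reduces to a few lctl/lfs calls; a minimal sketch, assuming the lustre fsname and qpool1 from this log (flag spellings follow the stock lfs/lctl tools, not sanity-quota.sh):

  # on the MGS: create the pool and add an OST to it
  lctl pool_new lustre.qpool1
  lctl pool_add lustre.qpool1 lustre-OST0001_UUID
  # give quota_usr a block hard limit that applies inside the pool only
  lfs setquota -u quota_usr -B 20M --pool qpool1 /mnt/lustre
  # report usage and limits as seen through the pool
  lfs quota -u quota_usr --pool qpool1 /mnt/lustre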
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 68: slave number in quota pool changed after each add/remove OST ========================================================== 09:56:50 (1713535010)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
nr result 4
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Adding targets to pool
oleg257-server: pool_add: lustre-OST0001_UUID is already in pool lustre.qpool1
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 17
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
Removing lustre-OST0000_UUID from qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1
Removing lustre-OST0001_UUID from qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 68 (31s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 69: EDQUOT at one of the pools shouldn't affect DOM ========================================================== 09:57:24 (1713535044)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
User quota (block hardlimit:200 MB)
User quota (block hardlimit:10 MB)
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [oflag=sync]
512+0 records in
512+0 records out
524288 bytes (524 kB) copied, 2.75164 s, 191 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [seek=512] [oflag=sync]
512+0 records in
512+0 records out
524288 bytes (524 kB) copied, 3.03175 s, 173 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.088419 s, 119 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0] [count=10] [seek=10]
dd: error writing '/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0127895 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [oflag=sync]
512+0 records in
512+0 records out
524288 bytes (524 kB) copied, 2.37427 s, 221 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [seek=512] [oflag=sync]
512+0 records in
512+0 records out
524288 bytes (524 kB) copied, 3.11636 s, 168 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 69 (44s)
debug_raw_pointers=0
debug_raw_pointers=0
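Test 69's point is that a Data-on-MDT file stays writable while an OST pool is over quota. A minimal sketch of the layout involved, assuming the dom0 directory above was given a DOM component (the component sizes here are illustrative, not taken from sanity-quota.sh):

  # first 1 MiB of every file under dom0 lives on the MDT, the tail on OSTs
  lfs setstripe -E 1M -L mdt -E -1 /mnt/lustre/d69.sanity-quota/dom0
  # MDT-backed writes succeed even while lustre.qpool1 (OST0000) is at EDQUOT
  runas -u 60000 -g 60000 dd if=/dev/zero of=/mnt/lustre/d69.sanity-quota/dom0/f1 bs=1K count=512 oflag=sync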
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 70a: check lfs setquota/quota with a pool option ========================================================== 09:58:10 (1713535090)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
hard limit 20480 limit 20
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0   20480       -       0       0       0       -
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 70a (18s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 70b: lfs setquota pool works properly ========================================================== 09:58:30 (1713535110)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
PASS 70b (14s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 71a: Check PFL with quota pools ===== 09:58:46 (1713535126)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:100 MB)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg257-server: Pool lustre.qpool2 created
Adding targets to pool
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0593832 s, 177 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=10] [seek=10]
dd: error writing '/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0': Disk quota exceeded
8+0 records in
7+0 records out
8343552 bytes (8.3 MB) copied, 0.070343 s, 119 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=1] [seek=20]
dd: error writing '/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0239894 s, 0.0 kB/s
Waiting for MDT destroys to complete
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=10] [seek=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0662845 s, 158 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=20] [seek=10]
dd: error writing '/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0': Disk quota exceeded
17+0 records in
16+0 records out
16777216 bytes (17 MB) copied, 0.100457 s, 167 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=1] [seek=30]
dd: error writing '/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0': No data available
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00183428 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=10] [seek=0]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0747764 s, 140 MB/s
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg257-server: Pool lustre.qpool2 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 71a (61s)
debug_raw_pointers=0
debug_raw_pointers=0
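A minimal sketch of the PFL-plus-pools arrangement test 71a checks, with illustrative component boundaries (sanity-quota.sh's actual extents are not shown in the log):

  # first component striped on qpool1, the remainder on qpool2
  lfs setstripe -E 10M -p qpool1 -E -1 -p qpool2 /mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0
  # separate pool limits then gate whichever component a write lands in
  lfs setquota -u quota_usr -B 10M --pool qpool1 /mnt/lustre
  lfs setquota -u quota_usr -B 20M --pool qpool2 /mnt/lustre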
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 71b: Check SEL with quota pools ===== 09:59:49 (1713535189)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:1000 MB)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg257-server: Pool lustre.qpool2 created
Adding targets to pool
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
used 0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0] [count=128]
128+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 0.869365 s, 154 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0] [count=5] [seek=128]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.0497362 s, 105 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0] [count=5] [seek=133]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.0357129 s, 147 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0] [count=2] [seek=138]
dd: error writing '/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.016825 s, 0.0 kB/s
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
lustre.qpool2
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg257-server: Pool lustre.qpool2 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 71b (43s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 72: lfs quota --pool prints only pool's OSTs ========================================================== 10:00:34 (1713535234)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:50 MB)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Waiting 90s for 'lustre-OST0001_UUID '
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.0381107 s, 138 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.0301437 s, 174 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0158452 s, 0.0 kB/s
used 10240
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 72 (39s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 73a: default limits at OST Pool Quotas ========================================================== 10:01:15 (1713535275)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
LIMIT=20480 TESTFILE=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0 qdtype=-U qh=-B qid=quota_usr qprjid=1000 qres_type=data qs=-b qtype=-u
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
set to use default quota
lfs setquota: '-d' deprecated, use '-D' or '--default'
set default quota
get default quota
Disk default usr quota:
     Filesystem  bquota  blimit  bgrace  iquota  ilimit  igrace
    /mnt/lustre       0       0      10       0       0      10
Test not out of quota
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=10] [oflag=sync]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.715636 s, 14.7 MB/s
Test out of quota
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync]
dd: error writing '/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0': Disk quota exceeded
20+0 records in
19+0 records out
20963328 bytes (21 MB) copied, 2.66557 s, 7.9 MB/s
Increase default quota
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync]
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 2.18415 s, 19.2 MB/s
Set quota to override default quota
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync]
dd: error writing '/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0': Disk quota exceeded
21+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 1.32753 s, 15.8 MB/s
Set to use default quota again
lfs setquota: '-d' deprecated, use '-D' or '--default'
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync]
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 2.29567 s, 18.3 MB/s
Cleanup
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
PASS 73a (67s)
debug_raw_pointers=0
debug_raw_pointers=0
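The '-d'/'-D' exchange above is the default-quota mechanism test 73a walks through; a minimal sketch with illustrative limits (note '-d' is deprecated in favour of '-D'/'--default'):

  # set a default block hard limit inherited by every user without an explicit limit
  lfs setquota -U -B 20M /mnt/lustre
  # override it for one user...
  lfs setquota -u quota_usr -B 40M /mnt/lustre
  # ...then send that user back to the default
  lfs setquota -u quota_usr -D /mnt/lustre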
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 73b: default OST Pool Quotas limit for new user ========================================================== 10:02:24 (1713535344)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
set default quota for qpool1
Write from a user that has no lqe yet
running as uid/gid/euid/egid 500/500/500/500, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73b.sanity-quota/f73b.sanity-quota-1] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.277421 s, 37.8 MB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 73b (34s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 74: check quota pools per user ====== 10:03:00 (1713535380)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg257-server: Pool lustre.qpool2 created
Adding targets to pool
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
pool limit for qpool1 10240
pool limit for qpool2 51200
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg257-server: Pool lustre.qpool2 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 74 (37s)
debug_raw_pointers=0
debug_raw_pointers=0
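Test 74's per-user pool limits (the 10240 and 51200 KB reported above) come from giving the same user different limits in two overlapping pools; a minimal sketch with those values:

  lfs setquota -u quota_usr -B 10M --pool qpool1 /mnt/lustre
  lfs setquota -u quota_usr -B 50M --pool qpool2 /mnt/lustre
  # lfs quota reports the limit of whichever pool is queried
  lfs quota -u quota_usr --pool qpool1 /mnt/lustre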
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 75: nodemap squashed root respects quota enforcement ========================================================== 10:03:40 (1713535420)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
On MGS 192.168.202.157, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.202.157, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.202.157, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.202.157, default.squash_uid = nodemap.default.squash_uid=60000
waiting 10 secs for sync
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.271629 s, 38.6 MB/s
Write to exceed soft limit
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.018129 s, 565 kB/s
mmap write when over soft limit
Waiting for MDT destroys to complete
Write...
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.238242 s, 44.0 MB/s
Write out of block quota ...
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.258241 s, 40.6 MB/s
dd: error writing '/mnt/lustre/d75.sanity-quota/f75.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0718212 s, 0.0 kB/s
Waiting for MDT destroys to complete
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0743125 s, 14.1 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0707187 s, 14.8 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0739424 s, 14.2 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0674938 s, 15.5 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0737998 s, 14.2 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0698216 s, 15.0 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.069144 s, 15.2 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0740852 s, 14.2 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.069126 s, 15.2 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0638639 s, 16.4 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0541669 s, 19.4 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0522148 s, 20.1 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0686161 s, 15.3 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0557975 s, 18.8 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0477593 s, 22.0 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0629056 s, 16.7 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0691403 s, 15.2 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0710279 s, 14.8 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0715992 s, 14.6 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0591344 s, 17.7 MB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-20': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0587755 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-21': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0482175 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-22': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0501399 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-23': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0473978 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-24': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0496594 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-25': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0467794 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-26': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0575958 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-27': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0537455 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-28': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0453655 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-29': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0588476 s, 0.0 kB/s
9+0 records in
9+0 records out
9437184 bytes (9.4 MB) copied, 0.276354 s, 34.1 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0487563 s, 21.5 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0530308 s, 19.8 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0464222 s, 22.6 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0454452 s, 23.1 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0449517 s, 23.3 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0486232 s, 21.6 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0608859 s, 17.2 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0556817 s, 18.8 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0443482 s, 23.6 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0364006 s, 28.8 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0362481 s, 28.9 MB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-11': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0345652 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-12': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0314875 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-13': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0322913 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-14': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0334493 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-15': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0329714 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-16': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0342409 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-17': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0434369 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-18': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0338948 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-19': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0323567 s, 0.0 kB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0541312 s, 19.4 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0533069 s, 19.7 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0572482 s, 18.3 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0579424 s, 18.1 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0540167 s, 19.4 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0562149 s, 18.7 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0574789 s, 18.2 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0568713 s, 18.4 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0577324 s, 18.2 MB/s
dd: error writing '/mnt/lustre/d75.sanity-quota/file': Disk quota exceeded
10+0 records in
9+0 records out
9437184 bytes (9.4 MB) copied, 0.357013 s, 26.4 MB/s
On MGS 192.168.202.157, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.202.157, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.202.157, active = nodemap.active=0
waiting 10 secs for sync
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 75 (137s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 76: project ID 4294967295 should not be allowed ========================================================== 10:05:59 (1713535559)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Invalid project ID: 4294967295
Change or list project attribute for specified file or directory.
usage: project [-d|-r] list project ID and flags on file(s) or directories
       project [-p id] [-s] [-r] set project ID and/or inherit flag for specified file(s) or directories
       project -c [-d|-r [-p id] [-0]] check project ID and flags on file(s) or directories, print outliers
       project -C [-d|-r] [-k] clear the project inherit flag and ID on the file or directory
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 76 (14s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 77: lfs setquota should fail in Lustre mount with 'ro' ========================================================== 10:06:15 (1713535575)
Starting client: oleg257-client.virtnet: -o ro oleg257-server@tcp:/lustre /mnt/lustre2
lfs setquota: quotactl failed: Read-only file system
setquota failed: Read-only file system
PASS 77 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 78A: Check fallocate increase quota usage ========================================================== 10:06:20 (1713535580)
keep default fallocate mode: 0
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [fallocate] [-l] [204800] [/mnt/lustre/d78A.sanity-quota/f78A.sanity-quota]
kbytes returned:204
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 78A (14s)
debug_raw_pointers=0
debug_raw_pointers=0
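The fallocate check above ('kbytes returned:204') can be reproduced directly; a minimal sketch, assuming the same user and path as in this log:

  # preallocate 200 KiB as quota_usr; the blocks are charged to the user's quota
  runas -u 60000 -g 60000 fallocate -l 204800 /mnt/lustre/d78A.sanity-quota/f78A.sanity-quota
  # the kbytes column should grow by roughly 200 (204 above, including overhead)
  lfs quota -u quota_usr /mnt/lustre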
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 78a: Check fallocate increase projectid usage ========================================================== 10:06:36 (1713535596)
keep default fallocate mode: 0
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
lfs project -sp 5200 /mnt/lustre/d78a.sanity-quota
kbytes returned:204
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 78a (18s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 79: access to non-existent dt-pool/info doesn't cause a panic ========================================================== 10:06:56 (1713535616)
/tmp/f79.sanity-quota
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
PASS 79 (10s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 80: check for EDQUOT after OST failover ========================================================== 10:07:08 (1713535628)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
/mnt/lustre/d80.sanity-quota/dir1
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: 1
/mnt/lustre/d80.sanity-quota/dir2
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: 0
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       8       0  102400       -       2       0       0       -
lustre-MDT0000_UUID   8       -   16384       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
fail_loc=0xa06
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir2/f80.sanity-quota-0] [count=3]
3+0 records in
3+0 records out
3145728 bytes (3.1 MB) copied, 0.0457968 s, 68.7 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir1/f80.sanity-quota-2] [count=7]
7+0 records in
7+0 records out
7340032 bytes (7.3 MB) copied, 0.0658363 s, 111 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir1/f80.sanity-quota-1] [count=1] [oflag=direct]
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0157523 s, 66.6 MB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre  11272*       0   10240       -       5       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID  3072      -    4096       -       -       -       -       -
lustre-OST0001_UUID  8192*     -    8192       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 12288
Stopping /mnt/lustre-ost2 (opts:) on oleg257-server
fail_loc=0
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1
Started lustre-OST0001
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4104       0   10240       -       4       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID  3072      -    4096       -       -       -       -       -
lustre-OST0001_UUID  1024      -    2048       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6144
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4104       0   10240       -       4       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID  3072      -    4096       -       -       -       -       -
lustre-OST0001_UUID  1024      -    2048       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6144
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir1/f80.sanity-quota-0] [count=2] [oflag=direct]
2+0 records in
2+0 records out
2097152 bytes (2.1 MB) copied, 0.0303136 s, 69.2 MB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 80 (50s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 81: Race qmt_start_pool_recalc with qmt_pool_free ========================================================== 10:08:00 (1713535680)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg257-server: Pool lustre.qpool1 created
fail_loc=0x80000A07
fail_val=10
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Stopping /mnt/lustre-mds1 (opts:-f) on oleg257-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg257-server: oleg257-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg257-client: oleg257-server: ssh exited with exit code 1
Started lustre-MDT0000
pdsh@oleg257-client: oleg257-client: ssh exited with exit code 5
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 81 (36s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 82: verify more than 8 qids for single operation ========================================================== 10:08:38 (1713535718)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 82 (7s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 83: Setting default quota shouldn't affect grace time ========================================================== 10:08:46 (1713535726)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
ttt1 ttt2 ttt3 ttt4 ttt5
ttt1 ttt2 ttt3 ttt4 ttt5
ttt1 ttt2 ttt3 ttt4 ttt5
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 83 (7s)
debug_raw_pointers=0
debug_raw_pointers=0
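Test 83's claim is that setting a default limit must not disturb the global grace times; a minimal sketch of the two knobs involved (values illustrative):

  # grace periods are global per quota type
  lfs setquota -t -u --block-grace 604800 --inode-grace 604800 /mnt/lustre
  # changing the default limit...
  lfs setquota -U -B 20M /mnt/lustre
  # ...must leave the grace times as set above
  lfs quota -t -u /mnt/lustre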
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 84: Reset quota should fix the insane granted quota ========================================================== 10:08:55 (1713535735)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota      limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0   10485760       -       0       0       0       -
lustre-MDT0000_UUID   0       -          0       -       0       -       0       -
lustre-MDT0001_UUID   0       -          0       -       0       -       0       -
lustre-OST0000_UUID   0       -          0       -       -       -       -       -
lustre-OST0001_UUID   0       -          0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
osd-ldiskfs.lustre-OST0000.quota_slave.force_reint=1
0
/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
    obdidx    objid    objid    group
         0      130     0x82    0x280000401
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=60] [conv=nocreat] [oflag=direct]
60+0 records in
60+0 records out
62914560 bytes (63 MB) copied, 2.17052 s, 29.0 MB/s
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota      limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0   10485760       -       2       0       0       -
lustre-MDT0000_UUID   4       -    1048576       -       2       -       0       -
lustre-MDT0001_UUID   0       -          0       -       0       -       0       -
lustre-OST0000_UUID 61440     -    1048576       -       -       -       -       -
lustre-OST0001_UUID   0       -          0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1048576
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0   5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61440     -   1048576       -       -       -       -       -
lustre-OST0001_UUID   0       -         0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1048576
fail_val=0
fail_loc=0xa08
fail_val=0
fail_loc=0xa08
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0       0       -       2       0       0       -
lustre-MDT0000_UUID   4       -   18446744073707374604       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID 61440     -   18446744073707374604       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 18446744073707374604
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0   5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61440     -   18446744073707374604       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 18446744073707374604
fail_val=0
fail_loc=0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0       0       -       2       0       0       -
lustre-MDT0000_UUID   4       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID 61440     -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0   5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61440     -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0   5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61440     -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0   5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61440     -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota    limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0   102400       -       2       0       0       -
lustre-MDT0000_UUID   4*      -        4       -       2       -       0       -
lustre-MDT0001_UUID   0       -        0       -       0       -       0       -
lustre-OST0000_UUID 61440*    -    61440       -       -       -       -       -
lustre-OST0001_UUID   0       -        0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 61440
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=200] [conv=nocreat] [oflag=direct]
dd: error writing '/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1': Disk quota exceeded
100+0 records in
99+0 records out
103809024 bytes (104 MB) copied, 3.06592 s, 33.9 MB/s
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota    limit   grace   files   quota   limit   grace
    /mnt/lustre  101380       0   307200       -       2       0       0       -
lustre-MDT0000_UUID   4*      -        4       -       2       -       0       -
lustre-MDT0001_UUID   0       -        0       -       0       -       0       -
lustre-OST0000_UUID 101376    -   102396       -       -       -       -       -
lustre-OST0001_UUID   0       -        0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 102396
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=200] [conv=nocreat] [oflag=direct]
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 6.54201 s, 32.1 MB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 84 (59s)
debug_raw_pointers=0
debug_raw_pointers=0
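The per-target tables above (including the underflowed 18446744073707374604 grants that test 84 injects and then resets) come from verbose quota queries; a minimal sketch:

  # -v adds one row per MDT/OST, exposing each slave's granted limit
  lfs quota -g quota_usr -v /mnt/lustre
  # the same view restricted to a pool
  lfs quota -g quota_usr --pool qpool1 /mnt/lustre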
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 85: do not hang at write with the least_qunit ========================================================== 10:09:56 (1713535796)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg257-server: Pool lustre.qpool1 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg257-server: Pool lustre.qpool2 created
Adding targets to pool
oleg257-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg257-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d85.sanity-quota/f85.sanity-quota-0] [count=10]
dd: error writing '/mnt/lustre/d85.sanity-quota/f85.sanity-quota-0': Disk quota exceeded
8+0 records in
7+0 records out
8368128 bytes (8.4 MB) copied, 0.257692 s, 32.5 MB/s
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg257-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg257-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg257-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg257-server: Pool lustre.qpool2 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 85 (44s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 86: Pre-acquired quota should be released if quota is over limit ========================================================== 10:10:43 (1713535843)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000]
- create 2883 (time 1713535856.15 total 10.00 last 288.26)
total: 5000 create in 16.74 seconds: 298.76 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000]
- create 3114 (time 1713535912.12 total 10.00 last 311.36)
total: 5000 create in 15.36 seconds: 325.42 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
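The createmany pattern above is the release check of test 86: the MDT pre-acquires inode qunits during the 5000-file run, and once the ID is over limit the slave must give the surplus back so follow-up creates fail immediately. A minimal sketch; the inode limit here is an assumption matching the successful create count (sanity-quota.sh's actual value is not shown in the log):

  lfs setquota -u quota_usr -I 5000 /mnt/lustre            # assumed limit
  runas -u 60000 -g 60000 createmany -m /mnt/lustre/d86.sanity-quota/test_dir/tfile- 5000
  runas -u 60000 -g 60000 createmany -m /mnt/lustre/d86.sanity-quota/test_dir/tfile2- 10   # EDQUOT at once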
lfs project -sp 1000 /mnt/lustre/d86.sanity-quota
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000]
- create 3040 (time 1713535966.91 total 10.00 last 303.98)
total: 5000 create in 15.94 seconds: 313.63 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 86 (176s)
debug_raw_pointers=0
debug_raw_pointers=0
== sanity-quota test complete, duration 5026 sec ========= 10:13:42 (1713536022)
=== sanity-quota: start cleanup 10:13:42 (1713536022) ===
=== sanity-quota: finish cleanup 10:13:42 (1713536022) ===