-----============= acceptance-small: sanity-quota ============----- Thu Apr 18 03:23:31 EDT 2024
excepting tests: 2 4a 63 65
skipping tests SLOW=no: 61
oleg452-server: debugfs 1.46.2.wc5 (26-Mar-2022)
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1
=== sanity-quota: start setup 03:23:35 (1713425015) ===
oleg452-client.virtnet: executing check_config_client /mnt/lustre
oleg452-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg452-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b5d40000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b5d40000.idle_timeout=debug
oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
osd-ldiskfs.track_declares_assert=1
=== sanity-quota: finish setup 03:23:42 (1713425022) ===
using SAVE_PROJECT_SUPPORTED=0
oleg452-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg452-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg452-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg452-server: debugfs 1.46.2.wc5 (26-Mar-2022)
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [true]
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d0_runas_test/f7521]
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [true]
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [touch] [/mnt/lustre/d0_runas_test/f7521]
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 0: Test basic quota performance ===== 03:23:51 (1713425031)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/f0.sanity-quota-0] [count=10] [conv=fsync]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.295004 s, 35.5 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/f0.sanity-quota-0] [count=10] [conv=fsync]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.293646 s, 35.7 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 0 (16s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1a: Block hard limit (normal use and out of quota) ========================================================== 03:24:08 (1713425048)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:10 MB)
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.134936 s, 38.9 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.156863 s, 33.4 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0435489 s, 0.0 kB/s
Waiting for MDT destroys to complete
--------------------------------------
Group quota (block hardlimit:10 MB)
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.116241 s, 45.1 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.138466 s, 37.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0456322 s, 0.0 kB/s
Waiting for MDT destroys to complete
--------------------------------------
Project quota (block hardlimit:10 mb)
lfs project -p 1000 /mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.142454 s, 36.8 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.16944 s, 30.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.045462 s, 0.0 kB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1a (69s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1b: Quota pools: Block hard limit (normal use and out of quota) ========================================================== 03:25:18 (1713425118)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.159385 s, 32.9 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.133407 s, 39.3 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0421102 s, 0.0 kB/s
Waiting for MDT destroys to complete
--------------------------------------
Group quota (block hardlimit:20 MB)
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.135306 s, 38.7 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.144493 s, 36.3 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0473796 s, 0.0 kB/s
Waiting for MDT destroys to complete
--------------------------------------
Project quota (block hardlimit:20 mb)
lfs project -p 1000 /mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.163819 s, 32.0 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.129663 s, 40.4 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0409299 s, 0.0 kB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
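(Note: tests 1a/1b above exercise block hard limits, first globally and then through a quota pool. The setup they demonstrate reduces to a few commands; the limits, pool name, and paths below are an illustrative sketch of that flow, not copied from the test script:)

    # on the MGS: create a pool and add both OSTs to it
    lctl pool_new lustre.qpool1
    lctl pool_add lustre.qpool1 lustre-OST[0000-0001]
    # global block hard limit for the user, plus a tighter limit inside the pool
    lfs setquota -u quota_usr -B 200M /mnt/lustre
    lfs setquota -u quota_usr -B 20M --pool qpool1 /mnt/lustre
    # a write as quota_usr now fails with EDQUOT once pool usage passes 20 MiB
    dd if=/dev/zero of=/mnt/lustre/d1b.sanity-quota/file bs=1M count=21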
Waiting for MDT destroys to complete
PASS 1b (78s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1c: Quota pools: check 3 pools with hardlimit only for global ========================================================== 03:26:37 (1713425197)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Waiting 90s for ''
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg452-server: Pool lustre.qpool2 created
Waiting 90s for ''
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.261387 s, 40.1 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=10] [seek=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.23783 s, 44.1 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=1] [seek=20]
dd: error writing '/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0454043 s, 0.0 kB/s
qpool1 used 20484
qpool2 used 20484
Waiting for MDT destroys to complete
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg452-server: Pool lustre.qpool2 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1c (49s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1d: Quota pools: check block hardlimit on different pools ========================================================== 03:27:27 (1713425247)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
Updated after 2s: want 'lustre-OST0000_UUID lustre-OST0001_UUID ' got 'lustre-OST0000_UUID lustre-OST0001_UUID '
Creating new pool
oleg452-server: Pool lustre.qpool2 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.136433 s, 38.4 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.13144 s, 39.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0550343 s, 0.0 kB/s
qpool1 used 10240
qpool2 used 10240
Waiting for MDT destroys to complete
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg452-server: Pool lustre.qpool2 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1d (50s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1e: Quota pools: global pool high block limit vs quota pool with small ========================================================== 03:28:19 (1713425299)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:53000000 MB)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.14562 s, 36.0 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.139333 s, 37.6 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0456694 s, 0.0 kB/s
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-1] [count=20]
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.547512 s, 38.3 MB/s
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
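(Note: test 1e relies on a pool limit binding only objects on the pooled OST: the global hard limit is set absurdly high while qpool1, containing only OST0001, carries a small one. A sketch of the idea; the 10M pool limit is inferred from the write pattern above rather than read from the script:)

    lfs setquota -u quota_usr -B 53000000M /mnt/lustre          # effectively unlimited global limit
    lfs setquota -u quota_usr -B 10M --pool qpool1 /mnt/lustre  # small limit inside the pool
    lfs setstripe -c 1 -i 1 f0   # object on OST0001 (in the pool): EDQUOT after ~10 MiB
    lfs setstripe -c 1 -i 0 f1   # object on OST0000 (outside the pool): a 20 MiB write succeeds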
Waiting for MDT destroys to complete
PASS 1e (36s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1f: Quota pools: correct qunit after removing/adding OST ========================================================== 03:28:57 (1713425337)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:200 MB)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.137008 s, 38.3 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.139839 s, 37.5 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0411563 s, 0.0 kB/s
Removing lustre-OST0000_UUID from qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1
Waiting for MDT destroys to complete
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.14734 s, 35.6 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.141572 s, 37.0 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0475031 s, 0.0 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
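(Note: test 1f removes the pool's only OST and re-adds it, checking that the per-target quota unit (qunit) is recalculated so the same limit is enforced on the second write pass. The membership operations themselves are just the following; names are taken from the log, the flow is a sketch:)

    lctl pool_remove lustre.qpool1 lustre-OST0000
    # usage is cleaned up, then the OST is returned to the pool:
    lctl pool_add lustre.qpool1 lustre-OST0000
    # the second write pass above must hit EDQUOT at the same 10 MiB point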
Waiting for MDT destroys to complete
PASS 1f (50s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1g: Quota pools: Block hard limit with wide striping ========================================================== 03:29:49 (1713425389)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
osc.lustre-OST0000-osc-ffff8800b5d40000.max_dirty_mb=1
osc.lustre-OST0001-osc-ffff8800b5d40000.max_dirty_mb=1
User quota (block hardlimit:40 MB)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.38485 s, 7.6 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=10] [seek=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 2.61613 s, 4.0 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=6] [seek=20]
dd: error writing '/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0399724 s, 0.0 kB/s
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
osc.lustre-OST0000-osc-ffff8800b5d40000.max_dirty_mb=467
osc.lustre-OST0001-osc-ffff8800b5d40000.max_dirty_mb=467
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1g (37s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1h: Block hard limit test using fallocate ========================================================== 03:30:27 (1713425427)
keep default fallocate mode: 0
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:10 MB)
Write 5MiB Using Fallocate
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [fallocate] [-l5MiB] [/mnt/lustre/d1h.sanity-quota/f1h.sanity-quota-0]
Write 11MiB Using Fallocate
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [fallocate] [-l11MiB] [/mnt/lustre/d1h.sanity-quota/f1h.sanity-quota-0]
fallocate: fallocate failed: Disk quota exceeded
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1h (23s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1i: Quota pools: different limit and usage relations ========================================================== 03:30:51 (1713425451)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:200 MB)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Waiting 90s for ''
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.14307 s, 36.6 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.143703 s, 36.5 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0437876 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   10240       0       0       -       1       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID  10240*       -   10240       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 10240
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-1] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.152796 s, 34.3 MB/s
Waiting for MDT destroys to complete
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.135157 s, 38.8 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.138308 s, 37.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0442728 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-1] [count=3]
3+0 records in
3+0 records out
3145728 bytes (3.1 MB) copied, 0.110945 s, 28.4 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2] [count=3]
3+0 records in
3+0 records out
3145728 bytes (3.1 MB) copied, 0.0875044 s, 35.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2] [seek=3] [count=1]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0432433 s, 0.0 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
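(Note: the per-pool usage shown in the "Pool: lustre.qpool1" section above can be queried directly on a client; a sketch, assuming the pool and user from this test:)

    # report quota_usr's usage and limits as seen by the qpool1 quota pool
    lfs quota -u quota_usr --pool qpool1 /mnt/lustre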
Waiting for MDT destroys to complete
PASS 1i (52s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1j: Enable project quota enforcement for root ========================================================== 03:31:44 (1713425504)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
--------------------------------------
Project quota (block hardlimit:20 mb)
lfs project -p 1000 /mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0
osd-ldiskfs.lustre-OST0000.quota_slave.root_prj_enable=1
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=20] [oflag=direct]
dd: error writing '/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.462057 s, 43.1 MB/s
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=10] [seek=20] [oflag=direct]
dd: error writing '/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0222265 s, 0.0 kB/s
osd-ldiskfs.lustre-OST0000.quota_slave.root_prj_enable=0
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=20] [seek=20] [oflag=direct]
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.461812 s, 45.4 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
osd-ldiskfs.lustre-OST0000.quota_slave.root_prj_enable=0
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1j (17s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity-quota test_2 skipping excluded test 2
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 3a: Block soft limit (start timer, timer goes off, stop timer) ========================================================== 03:32:03 (1713425523)
User quota (soft limit:4 MB grace:20 seconds)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.116416 s, 36.0 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00974016 s, 1.1 MB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4148*    4096       0     21s       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -    4160       -       -       -       -       -
lustre-OST0001_UUID      40       -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4208
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00368833 s, 2.8 MB/s
Grace time is 21s
Sleep through grace ...
...sleep 26 seconds
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4160*    4096       0 expired       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4120       -    4160       -       -       -       -       -
lustre-OST0001_UUID      40       -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4208
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4120       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=4096] [seek=6144]
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.665244 s, 6.3 MB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00410416 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   8256*    4096       0 expired       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID   8216*       -    8216       -       -       -       -       -
lustre-OST0001_UUID      40       -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 8264
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    8256       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    8216       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40    4096       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 48
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.119953 s, 35.0 MB/s
Delete files...
Wait for unlink objects finished...
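(Note: the soft-limit cycle above, where writes past the soft limit start the grace timer, keep succeeding until grace expires, and then fail with EDQUOT until usage drops back under the limit, is driven by two settings. Illustrative values matching this test; grace is per quota type, not per user:)

    # 20-second block grace and 1-week inode grace for all users
    lfs setquota -t -u --block-grace 20 --inode-grace 1w /mnt/lustre
    # 4 MiB soft block limit, no hard limit, for quota_usr
    lfs setquota -u quota_usr -b 4M -B 0 /mnt/lustre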
Waiting for MDT destroys to complete
Group quota (soft limit:4 MB grace:20 seconds)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.105432 s, 39.8 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.008853 s, 1.2 MB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4148       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4148*    4096       0     21s       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4148       -    4160       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4160
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00565847 s, 1.8 MB/s
Grace time is 21s
Sleep through grace ...
...sleep 26 seconds
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4160*    4096       0 expired       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -    4168       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4168
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=4096] [seek=6144]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00576885 s, 0.0 kB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00493722 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4160*    4096       0 expired       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -    4168       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4168
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID      40       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40    4096       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID      40       -    1064       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1064
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.16726 s, 25.1 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Project quota (soft limit:4 MB grace:20 sec)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
lfs project -p 1000 /mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.12029 s, 34.9 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00747968 s, 1.4 MB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4108*    4096       0     21s       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -    4144       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4144
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00309436 s, 3.3 MB/s
Grace time is 21s
Sleep through grace ...
...sleep 26 seconds
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4120       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4120       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4120*    4096       0 expired       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4120       -    4144       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4144
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=4096] [seek=6144]
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.654813 s, 6.4 MB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00382603 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    8256       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    8216       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    8256       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    8216       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   8216*    4096       0 expired       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID   8216*       -    8216       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 8216
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Unlink file to stop timer
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0    4096       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
lfs project -p 1000 /mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.119402 s, 35.1 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 3a (158s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 3b: Quota pools: Block soft limit (start timer, expires, stop timer) ========================================================== 03:34:43 (1713425683)
limit 4 glbl_limit 8
grace 20 glbl_grace 40
User quota in qpool1(soft limit:4 MB grace:20 seconds)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.0975396 s, 43.0 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.0091993 s, 1.1 MB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148    8192       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4148       -    4176       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4176
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4148       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00409459 s, 2.5 MB/s
Quota info for qpool1:
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   4160*    4096       0     21s       2       0       0       -
Grace time is 21s
Sleep through grace ...
...sleep 26 seconds
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160    8192       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -    4176       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4176
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4160       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=4096] [seek=6144]
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.649601 s, 6.5 MB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00379196 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre   8256*    8192       0     40s       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID   8256*       -    8256       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 8256
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    8256       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    8256       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40    8192       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID      40       -    1064       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1064
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID       0       -       0       -       1       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID      40       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.108253 s, 38.7 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Group quota in qpool1(soft limit:4 MB grace:20 seconds)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.0980925 s, 42.8 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00830091 s, 1.2 MB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -       0       -       -       -       -       -
lustre-OST0001_UUID      40       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre    4148    8192       0       -       2       0       0       -
lustre-MDT0000_UUID       0       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID    4108       -    4160       -       -       -       -       -
lustre-OST0001_UUID      40       -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4208
Disk quotas for prj 1000 (pid 1000):
         Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
        /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
- 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=5120] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00390417 s, 2.6 MB/s Quota info for qpool1: Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160* 4096 0 21s 2 0 0 - Grace time is 21s Sleep through grace ... ...sleep 26 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4120 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160 8192 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4120 - 4160 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 4208 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=4096] [seek=6144] 4096+0 records in 4096+0 records out 4194304 bytes (4.2 MB) copied, 0.696692 s, 6.0 MB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00386345 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256* 8192 0 41s 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216* - 8216 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 8264 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - 
- Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Unlink file to stop timer Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 0 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 8192 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 48 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.109326 s, 38.4 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Project quota in qpool1(soft:4 MB grace:20 sec) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -p 1000 /mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.10936 s, 38.4 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=4096] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00863721 s, 1.2 MB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4148 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4148 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4148 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4148 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4108 8192 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - 
lustre-OST0000_UUID 4108 - 4176 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 4176 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=5120] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00387938 s, 2.6 MB/s Quota info for qpool1: Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4120* 4096 0 21s 1 0 0 - Grace time is 21s Sleep through grace ... ...sleep 26 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4160 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4160 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4120 8192 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4120 - 4176 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 4176 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=4096] [seek=6144] 4096+0 records in 4096+0 records out 4194304 bytes (4.2 MB) copied, 0.579875 s, 7.2 MB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00367492 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8256 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8256 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8216* 8192 0 41s 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216* - 8216 - - - - - lustre-OST0001_UUID 0 - 0 - - - 
- - Total allocated inode limit: 0, total allocated block limit: 8216 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Unlink file to stop timer Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 0 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 40 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 0 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 40 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 8192 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w lfs project -p 1000 /mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2 Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.121659 s, 34.5 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
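For context before the pool teardown that follows: the per-pool soft limits exercised by test 3b are driven by the lfs/lctl CLI. Below is a minimal sketch of the setup (not commands captured from this run; the pool name, project ID, 4 MB soft limit, and 20-second grace come from the log above, and --pool is the option shown in the setquota usage text under test 4b below):

  # create a quota pool spanning both OSTs
  lctl pool_new lustre.qpool1
  lctl pool_add lustre.qpool1 lustre-OST0000 lustre-OST0001
  # per-pool soft block limits for the group and project under test
  lfs setquota -g quota_usr -b 4M --pool qpool1 /mnt/lustre
  lfs setquota -p 1000 -b 4M --pool qpool1 /mnt/lustre
  # shorten the pool block grace time to 20 seconds
  lfs setquota -t -g --block-grace 20 --pool qpool1 /mnt/lustre
  lfs setquota -t -p --block-grace 20 --pool qpool1 /mnt/lustre
  # per-pool usage report (assumed form), as in the
  # 'Quota info for qpool1' blocks above
  lfs quota -g quota_usr --pool qpool1 /mnt/lustre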
Waiting for MDT destroys to complete Destroy the created pools: qpool1 lustre.qpool1 oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg452-server: Pool lustre.qpool1 destroyed PASS 3b (170s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 3c: Quota pools: check block soft limit on different pools ========================================================== 03:37:34 (1713425854) limit 4 limit2 8 glbl_limit 12 grace1 30 grace2 20 glbl_grace 40 User quota in qpool2(soft:8 MB grace:20 seconds) Creating new pool oleg452-server: Pool lustre.qpool1 created Waiting 90s for '' Adding targets to pool oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Creating new pool oleg452-server: Pool lustre.qpool2 created Waiting 90s for '' Adding targets to pool oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool2 oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.23713 s, 35.4 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=8192] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00750758 s, 1.4 MB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8244 12288 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8204 - 8224 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 8272 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8244 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8204 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=9216] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00333279 s, 3.1 MB/s Quota info for qpool2: Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256* 8192 0 21s 2 0 0 - Grace time is 21s Sleep through 
grace ... ...sleep 26 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 12288 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 8224 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 8272 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=4096] [seek=10240] dd: error writing '/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00551215 s, 0.0 kB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=14336] dd: error writing '/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00443788 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 12288 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 8224 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 8272 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Unlink file to stop timer Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 12288 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 48 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota 
limit grace files quota limit grace /mnt/lustre 40 0 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.235879 s, 35.6 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg452-server: Pool lustre.qpool1 destroyed lustre.qpool2 oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2 oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg452-server: Pool lustre.qpool2 destroyed PASS 3c (80s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_4a skipping excluded test 4a debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 4b: Grace time strings handling ===== 03:38:56 (1713425936) Waiting for MDT destroys to complete Valid grace strings test Block grace time: 1w3d; Inode grace time: 16m40s Block grace time: 5s; Inode grace time: 1w2d3h4m5s Invalid grace strings test lfs: bad inode-grace: 5c setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM lfs: bad inode-grace: 18446744073709551615 setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM lfs: bad inode-grace: -1 setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM PASS 4b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 5: Chown & chgrp successfully even out of block/file quota ========================================================== 03:38:59 (1713425939) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Set quota limit (0 10M 0 10) for quota_usr.quota_usr lfs setquota: warning: inode hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Create more than 10 files and more than 10 MB ... 
total: 11 create in 0.02 seconds: 457.25 ops/second lfs project -p 1000 /mnt/lustre/d5.sanity-quota/f5.sanity-quota-0_1 11+0 records in 11+0 records out 11534336 bytes (12 MB) copied, 0.24368 s, 47.3 MB/s Chown files to quota_usr.quota_usr ... - unlinked 0 (time 1713425949 ; total 1 ; last 1) total: 11 unlinks in 1 seconds: 11.000000 unlinks/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 5 (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 6: Test dropping acquire request on master ========================================================== 03:39:19 (1713425959) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0499308 s, 21.0 MB/s running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_2usr] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0317059 s, 33.1 MB/s at_max=20 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] dd: error writing '/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr': Disk quota exceeded 3+0 records in 2+0 records out 2097152 bytes (2.1 MB) copied, 0.119037 s, 17.6 MB/s Waiting for MDT destroys to complete fail_val=601 fail_loc=0x513 osd-ldiskfs.lustre-OST0000.quota_slave.timeout=10 osd-ldiskfs.lustre-OST0001.quota_slave.timeout=10 running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_2usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 0.265144 s, 11.9 MB/s Sleep for 41 seconds ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] at_max=600 fail_val=0 fail_loc=0 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 56.156 s, 56.0 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 6 (84s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7a: Quota reintegration (global index) ========================================================== 03:40:45 (1713426045) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Stop ost1... Stopping /mnt/lustre-ost1 (opts:) on oleg452-server Enable quota & set quota limit for quota_usr Waiting 90s for 'ugp' Start ost1... 
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota] [count=6] [oflag=sync] dd: error writing '/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota': Disk quota exceeded 5+0 records in 4+0 records out 5234688 bytes (5.2 MB) copied, 1.1606 s, 4.5 MB/s Waiting for MDT destroys to complete Stop ost1... Stopping /mnt/lustre-ost1 (opts:) on oleg452-server Start ost1... 
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota] [count=6] [oflag=sync] 6+0 records in 6+0 records out 6291456 bytes (6.3 MB) copied, 0.440017 s, 14.3 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 7a (60s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7b: Quota reintegration (slave index) ========================================================== 03:41:46 (1713426106) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7b.sanity-quota/f7b.sanity-quota] [count=1] [oflag=sync] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0917256 s, 11.4 MB/s fail_val=0 fail_loc=0xa02 Waiting 90s for 'ugp' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7b.sanity-quota/f7b.sanity-quota] [count=1] [seek=1] [oflag=sync] [conv=notrunc] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.064101 s, 16.4 MB/s fail_val=0 fail_loc=0 Restart ost to trigger reintegration... 
Stopping /mnt/lustre-ost1 (opts:) on oleg452-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 7b (39s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7c: Quota reintegration (restart mds during reintegration) ========================================================== 03:42:27 (1713426147) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' fail_val=0 fail_loc=0xa03 osd-ldiskfs.lustre-OST0000.quota_slave.force_reint=1 osd-ldiskfs.lustre-OST0001.quota_slave.force_reint=1 Stop mds... Stopping /mnt/lustre-mds1 (opts:) on oleg452-server fail_val=0 fail_loc=0 Start mds... 
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0000 affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE Waiting 200s for 'glb[1],slv[1],reint[0]' Waiting 190s for 'glb[1],slv[1],reint[0]' Waiting 180s for 'glb[1],slv[1],reint[0]' Waiting 160s for 'glb[1],slv[1],reint[0]' Waiting 150s for 'glb[1],slv[1],reint[0]' Waiting 140s for 'glb[1],slv[1],reint[0]' Waiting 120s for 'glb[1],slv[1],reint[0]' Waiting 110s for 'glb[1],slv[1],reint[0]' Waiting 100s for 'glb[1],slv[1],reint[0]' Waiting 90s for 'glb[1],slv[1],reint[0]' Updated after 112s: want 'glb[1],slv[1],reint[0]' got 'glb[1],slv[1],reint[0]' affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7c.sanity-quota/f7c.sanity-quota] [count=6] [oflag=sync] dd: error writing '/mnt/lustre/d7c.sanity-quota/f7c.sanity-quota': Disk quota exceeded 5+0 records in 4+0 records out 5234688 bytes (5.2 MB) copied, 1.3629 s, 3.8 MB/s Delete files... Wait for unlink objects finished... 
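The 'glb[1],slv[1],reint[0]' state the harness polls for during test 7c comes from the quota slave on each target. A rough way to reproduce the check by hand (a sketch: force_reint and the osd-ldiskfs parameter prefix already appear in this log, while quota_slave.info is assumed to report the same uptodate string):

  # force the slave to re-fetch the global quota index from the master
  lctl set_param osd-ldiskfs.lustre-OST0000.quota_slave.force_reint=1
  # poll until reintegration completes, i.e. the uptodate line
  # reads glb[1],slv[1],reint[0]
  lctl get_param osd-ldiskfs.lustre-OST0000.quota_slave.info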
Waiting for MDT destroys to complete PASS 7c (145s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7d: Quota reintegration (Transfer index in multiple bulks) ========================================================== 03:44:54 (1713426294) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' fail_val=0 fail_loc=0x608 Waiting 90s for 'u' affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota] [count=21] [oflag=sync] dd: error writing '/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota': Disk quota exceeded 21+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 1.3109 s, 16.0 MB/s running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota-1] [count=21] [oflag=sync] dd: error writing '/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota-1': Disk quota exceeded 20+0 records in 19+0 records out 20963328 bytes (21 MB) copied, 2.08125 s, 10.1 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 7d (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7e: Quota reintegration (inode limits) ========================================================== 03:45:24 (1713426324) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' Stop mds2... Stopping /mnt/lustre-mds2 (opts:) on oleg452-server Enable quota & set quota limit for quota_usr Waiting 90s for 'ugp' Updated after 3s: want 'ugp' got 'ugp' Start mds2... Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0001 affected facets: mds1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg452-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg452-server: *.lustre-MDT0001.recovery_status status: RECOVERING oleg452-server: Waiting 1470 secs for *.lustre-MDT0001.recovery_status recovery done. 
status: RECOVERING oleg452-server: *.lustre-MDT0001.recovery_status status: COMPLETE affected facets: mds1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg452-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg452-server: *.lustre-MDT0001.recovery_status status: COMPLETE affected facets: mds1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg452-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg452-server: *.lustre-MDT0001.recovery_status status: COMPLETE create remote dir running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota] [2049] mknod(/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota2048) error: Disk quota exceeded total: 2048 create in 6.52 seconds: 313.93 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [unlinkmany] [/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota] [2048] - unlinked 0 (time 1713426363 ; total 0 ; last 0) total: 2048 unlinks in 13 seconds: 157.538467 unlinks/second Waiting for MDT destroys to complete Stop mds2... Stopping /mnt/lustre-mds2 (opts:) on oleg452-server Start mds2... Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0001 affected facets: mds1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg452-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg452-server: *.lustre-MDT0001.recovery_status status: RECOVERING oleg452-server: Waiting 1470 secs for *.lustre-MDT0001.recovery_status recovery done. 
status: RECOVERING oleg452-server: *.lustre-MDT0001.recovery_status status: COMPLETE affected facets: mds1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg452-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg452-server: *.lustre-MDT0001.recovery_status status: COMPLETE affected facets: mds1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg452-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg452-server: *.lustre-MDT0001.recovery_status status: COMPLETE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota] [2049] total: 2049 create in 4.90 seconds: 418.39 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [unlinkmany] [/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota] [2049] - unlinked 0 (time 1713426400 ; total 0 ; last 0) total: 2049 unlinks in 12 seconds: 170.750000 unlinks/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 7e (91s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 8: Run dbench with quota enabled ==== 03:46:57 (1713426417) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Set enough high limit for user: quota_usr Set enough high limit for group: quota_usr lfs project -sp 1000 /mnt/lustre/d8.sanity-quota Set enough high limit for project: 1000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [bash] [rundbench] [-D] [/mnt/lustre/d8.sanity-quota] [3] [-t] [120] looking for dbench program /usr/bin/dbench found dbench client file /usr/share/dbench/client.txt '/usr/share/dbench/client.txt' -> 'client.txt' running 'dbench 3 -t 120' on /mnt/lustre/d8.sanity-quota at Thu Apr 18 03:47:01 EDT 2024 waiting for dbench pid 25963 dbench version 4.00 - Copyright Andrew Tridgell 1999-2004 Running for 120 seconds with load 'client.txt' and minimum warmup 24 secs failed to create barrier semaphore 2 of 3 processes prepared for launch 0 sec 3 of 3 processes prepared for launch 0 sec releasing clients 3 287 32.05 MB/sec warmup 1 sec latency 21.065 ms 3 632 31.55 MB/sec warmup 2 sec latency 19.690 ms 3 976 21.87 MB/sec warmup 3 sec latency 15.708 ms 3 1476 19.53 MB/sec warmup 4 sec latency 14.870 ms 3 1942 15.87 MB/sec warmup 5 sec latency 14.519 ms 3 2407 14.03 MB/sec warmup 6 sec latency 17.821 ms 3 2987 14.36 MB/sec warmup 7 sec latency 12.471 ms 3 3634 14.48 MB/sec warmup 8 sec latency 12.290 ms 3 3968 13.35 MB/sec warmup 9 sec latency 12.571 ms 3 4309 12.12 MB/sec warmup 10 sec latency 29.941 ms 3 4695 11.48 MB/sec warmup 11 sec latency 20.766 ms 3 5110 11.30 MB/sec warmup 12 sec latency 18.004 ms 3 5418 10.49 MB/sec warmup 13 sec latency 18.489 ms 3 5913 10.07 MB/sec warmup 14 sec latency 16.120 ms 3 6427 10.42 MB/sec warmup 15 sec latency 18.434 ms 3 7044 10.60 MB/sec warmup 16 sec latency 15.241 ms 3 7375 10.39 MB/sec warmup 17 sec latency 19.980 ms 3 7659 9.87 MB/sec warmup 18 sec latency 15.300 ms 3 7963 9.44 MB/sec warmup 19 sec latency 31.380 ms 3 8304 9.19 MB/sec warmup 20 sec latency 15.983 ms 3 8776 9.21 MB/sec 
warmup 21 sec latency 16.420 ms 3 9256 8.85 MB/sec warmup 22 sec latency 17.431 ms 3 9784 9.02 MB/sec warmup 23 sec latency 20.829 ms 3 10978 7.73 MB/sec execute 1 sec latency 15.122 ms 3 11338 4.39 MB/sec execute 2 sec latency 31.194 ms 3 11778 4.43 MB/sec execute 3 sec latency 13.308 ms 3 12326 5.96 MB/sec execute 4 sec latency 16.882 ms 3 12786 5.04 MB/sec execute 5 sec latency 13.843 ms 3 13275 6.19 MB/sec execute 6 sec latency 13.866 ms 3 13909 7.09 MB/sec execute 7 sec latency 17.615 ms 3 14436 7.93 MB/sec execute 8 sec latency 11.561 ms 3 14779 7.29 MB/sec execute 9 sec latency 16.125 ms 3 15120 6.77 MB/sec execute 10 sec latency 39.854 ms 3 15535 7.05 MB/sec execute 11 sec latency 28.635 ms 3 15958 6.79 MB/sec execute 12 sec latency 13.143 ms 3 16454 6.54 MB/sec execute 13 sec latency 17.080 ms 3 16997 7.03 MB/sec execute 14 sec latency 20.698 ms 3 17671 7.65 MB/sec execute 15 sec latency 11.990 ms 3 18080 7.65 MB/sec execute 16 sec latency 12.105 ms 3 18426 7.27 MB/sec execute 17 sec latency 44.702 ms 3 18813 7.09 MB/sec execute 18 sec latency 18.011 ms 3 19291 7.28 MB/sec execute 19 sec latency 14.386 ms 3 19744 6.98 MB/sec execute 20 sec latency 15.515 ms 3 20204 6.88 MB/sec execute 21 sec latency 13.179 ms 3 20880 7.40 MB/sec execute 22 sec latency 15.731 ms 3 21463 7.74 MB/sec execute 23 sec latency 17.999 ms 3 21802 7.51 MB/sec execute 24 sec latency 11.750 ms 3 22169 7.29 MB/sec execute 25 sec latency 31.605 ms 3 22610 7.40 MB/sec execute 26 sec latency 17.994 ms 3 23040 7.27 MB/sec execute 27 sec latency 13.623 ms 3 23549 7.13 MB/sec execute 28 sec latency 14.527 ms 3 24069 7.28 MB/sec execute 29 sec latency 12.262 ms 3 24713 7.60 MB/sec execute 30 sec latency 16.205 ms 3 25173 7.62 MB/sec execute 31 sec latency 11.826 ms 3 25513 7.45 MB/sec execute 32 sec latency 26.001 ms 3 25905 7.34 MB/sec execute 33 sec latency 12.462 ms 3 26368 7.36 MB/sec execute 34 sec latency 11.915 ms 3 26850 7.27 MB/sec execute 35 sec latency 15.336 ms 3 27315 7.21 MB/sec execute 36 sec latency 17.564 ms 3 27707 7.31 MB/sec execute 37 sec latency 19.427 ms 3 28351 7.53 MB/sec execute 38 sec latency 19.193 ms 3 28754 7.53 MB/sec execute 39 sec latency 12.391 ms 3 29103 7.40 MB/sec execute 40 sec latency 30.288 ms 3 29484 7.31 MB/sec execute 41 sec latency 18.981 ms 3 29936 7.32 MB/sec execute 42 sec latency 15.125 ms 3 30428 7.25 MB/sec execute 43 sec latency 15.440 ms 3 30900 7.20 MB/sec execute 44 sec latency 14.197 ms 3 31585 7.45 MB/sec execute 45 sec latency 12.133 ms 3 32133 7.61 MB/sec execute 46 sec latency 14.600 ms 3 32488 7.49 MB/sec execute 47 sec latency 12.239 ms 3 32849 7.38 MB/sec execute 48 sec latency 22.583 ms 3 33299 7.42 MB/sec execute 49 sec latency 15.699 ms 3 33764 7.37 MB/sec execute 50 sec latency 11.717 ms 3 34241 7.30 MB/sec execute 51 sec latency 13.546 ms 3 34889 7.42 MB/sec execute 52 sec latency 11.186 ms 3 35445 7.56 MB/sec execute 53 sec latency 12.608 ms 3 35863 7.56 MB/sec execute 54 sec latency 11.754 ms 3 36210 7.47 MB/sec execute 55 sec latency 21.724 ms 3 36602 7.40 MB/sec execute 56 sec latency 13.058 ms 3 37061 7.41 MB/sec execute 57 sec latency 12.291 ms 3 37560 7.36 MB/sec execute 58 sec latency 13.183 ms 3 38084 7.42 MB/sec execute 59 sec latency 13.297 ms 3 38759 7.59 MB/sec execute 60 sec latency 12.938 ms 3 39212 7.57 MB/sec execute 61 sec latency 18.433 ms 3 39598 7.54 MB/sec execute 62 sec latency 31.446 ms 3 39946 7.48 MB/sec execute 63 sec latency 21.007 ms 3 40384 7.47 MB/sec execute 64 sec latency 13.865 ms 3 40860 7.43 MB/sec execute 65 sec 
latency 13.291 ms 3 41305 7.38 MB/sec execute 66 sec latency 12.081 ms 3 41967 7.47 MB/sec execute 67 sec latency 11.516 ms 3 42515 7.58 MB/sec execute 68 sec latency 12.226 ms 3 42949 7.58 MB/sec execute 69 sec latency 11.600 ms 3 43324 7.51 MB/sec execute 70 sec latency 25.963 ms 3 43700 7.45 MB/sec execute 71 sec latency 17.102 ms 3 44165 7.46 MB/sec execute 72 sec latency 14.671 ms 3 44681 7.44 MB/sec execute 73 sec latency 11.165 ms 3 45243 7.49 MB/sec execute 74 sec latency 12.248 ms 3 45836 7.57 MB/sec execute 75 sec latency 11.471 ms 3 46328 7.59 MB/sec execute 76 sec latency 14.067 ms 3 46715 7.56 MB/sec execute 77 sec latency 25.401 ms 3 47089 7.51 MB/sec execute 78 sec latency 17.584 ms 3 47549 7.51 MB/sec execute 79 sec latency 14.134 ms 3 48028 7.48 MB/sec execute 80 sec latency 11.243 ms 3 48476 7.45 MB/sec execute 81 sec latency 12.387 ms 3 49132 7.53 MB/sec execute 82 sec latency 11.778 ms 3 49648 7.61 MB/sec execute 83 sec latency 15.356 ms 3 50077 7.59 MB/sec execute 84 sec latency 11.590 ms 3 50432 7.54 MB/sec execute 85 sec latency 27.062 ms 3 50866 7.55 MB/sec execute 86 sec latency 15.310 ms 3 51270 7.49 MB/sec execute 87 sec latency 12.698 ms 3 51774 7.48 MB/sec execute 88 sec latency 12.063 ms 3 52292 7.50 MB/sec execute 89 sec latency 11.419 ms 3 52864 7.57 MB/sec execute 90 sec latency 12.912 ms 3 53342 7.59 MB/sec execute 91 sec latency 16.948 ms 3 53741 7.56 MB/sec execute 92 sec latency 18.672 ms 3 54087 7.53 MB/sec execute 93 sec latency 26.613 ms 3 54540 7.52 MB/sec execute 94 sec latency 19.872 ms 3 54985 7.47 MB/sec execute 95 sec latency 13.675 ms 3 55455 7.46 MB/sec execute 96 sec latency 13.899 ms 3 56092 7.51 MB/sec execute 97 sec latency 14.048 ms 3 56617 7.58 MB/sec execute 98 sec latency 11.929 ms 3 57072 7.60 MB/sec execute 99 sec latency 14.929 ms 3 57431 7.55 MB/sec execute 100 sec latency 24.586 ms 3 57809 7.51 MB/sec execute 101 sec latency 14.965 ms 3 58260 7.50 MB/sec execute 102 sec latency 15.544 ms 3 58726 7.48 MB/sec execute 103 sec latency 18.179 ms 3 59166 7.47 MB/sec execute 104 sec latency 12.183 ms 3 59815 7.52 MB/sec execute 105 sec latency 11.437 ms 3 60321 7.59 MB/sec execute 106 sec latency 13.464 ms 3 60765 7.57 MB/sec execute 107 sec latency 14.541 ms 3 61129 7.53 MB/sec execute 108 sec latency 18.122 ms 3 61585 7.54 MB/sec execute 109 sec latency 25.432 ms 3 62010 7.49 MB/sec execute 110 sec latency 13.320 ms 3 62506 7.48 MB/sec execute 111 sec latency 14.376 ms 3 63056 7.52 MB/sec execute 112 sec latency 11.562 ms 3 63644 7.56 MB/sec execute 113 sec latency 11.170 ms 3 64108 7.58 MB/sec execute 114 sec latency 12.914 ms 3 64509 7.56 MB/sec execute 115 sec latency 24.316 ms 3 64876 7.52 MB/sec execute 116 sec latency 19.085 ms 3 65307 7.52 MB/sec execute 117 sec latency 18.789 ms 3 65797 7.50 MB/sec execute 118 sec latency 14.313 ms 3 66283 7.53 MB/sec execute 119 sec latency 13.767 ms 3 cleanup 120 sec 0 cleanup 120 sec Operation Count AvgLat MaxLat ---------------------------------------- NTCreateX 29342 6.104 39.846 Close 21544 1.156 11.290 Rename 1237 8.513 19.860 Unlink 5925 3.578 17.983 Qpathinfo 26567 1.569 25.418 Qfileinfo 4617 0.387 3.220 Qfsinfo 4845 4.489 20.990 Sfileinfo 2384 4.617 19.804 Find 10253 0.683 17.090 WriteX 14445 1.697 15.793 ReadX 45579 0.060 1.589 LockX 94 1.169 3.079 UnlockX 94 1.196 2.557 Flush 2055 6.084 44.675 Throughput 7.52833 MB/sec 3 clients 3 procs max_latency=44.702 ms stopping dbench on /mnt/lustre/d8.sanity-quota at Thu Apr 18 03:49:26 EDT 2024 with return code 0 clean dbench files on 
/mnt/lustre/d8.sanity-quota /mnt/lustre/d8.sanity-quota /mnt/lustre/d8.sanity-quota removed directory: 'clients/client1/~dmtmp/WORD' removed directory: 'clients/client1/~dmtmp/PWRPNT' removed directory: 'clients/client1/~dmtmp/PARADOX' removed directory: 'clients/client1/~dmtmp/COREL' removed directory: 'clients/client1/~dmtmp/SEED' removed directory: 'clients/client1/~dmtmp/WORDPRO' removed directory: 'clients/client1/~dmtmp/EXCEL' removed directory: 'clients/client1/~dmtmp/ACCESS' removed directory: 'clients/client1/~dmtmp/PM' removed directory: 'clients/client1/~dmtmp' removed directory: 'clients/client1' removed directory: 'clients/client0/~dmtmp/WORD' removed directory: 'clients/client0/~dmtmp/PWRPNT' removed directory: 'clients/client0/~dmtmp/PARADOX' removed directory: 'clients/client0/~dmtmp/COREL' removed directory: 'clients/client0/~dmtmp/SEED' removed directory: 'clients/client0/~dmtmp/WORDPRO' removed directory: 'clients/client0/~dmtmp/EXCEL' removed directory: 'clients/client0/~dmtmp/ACCESS' removed directory: 'clients/client0/~dmtmp/PM' removed directory: 'clients/client0/~dmtmp' removed directory: 'clients/client0' removed directory: 'clients/client2/~dmtmp/WORD' removed directory: 'clients/client2/~dmtmp/PWRPNT' removed directory: 'clients/client2/~dmtmp/PARADOX' removed directory: 'clients/client2/~dmtmp/COREL' removed directory: 'clients/client2/~dmtmp/SEED' removed directory: 'clients/client2/~dmtmp/WORDPRO' removed directory: 'clients/client2/~dmtmp/EXCEL' removed directory: 'clients/client2/~dmtmp/ACCESS' removed directory: 'clients/client2/~dmtmp/PM' removed directory: 'clients/client2/~dmtmp' removed directory: 'clients/client2' removed directory: 'clients' removed 'client.txt' /mnt/lustre/d8.sanity-quota dbench successfully finished lfs project -C /mnt/lustre/d8.sanity-quota Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 8 (160s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 9: Block limit larger than 4GB (b10707) ========================================================== 03:49:38 (1713426578) OST0_SIZE: 3600964 required: 4900000 WARN: OST0 has less than 4900000 free, skip this test. PASS 9 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 10: Test quota for root user ======== 03:49:42 (1713426582) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs setquota: can't set quota for root usr/group/project. setquota failed: Operation not permitted lfs setquota: can't set quota for root usr/group/project. setquota failed: Operation not permitted lfs setquota: can't set quota for root usr/group/project. setquota failed: Operation not permitted Waiting 90s for 'ug' Updated after 2s: want 'ug' got 'ug' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 2048 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d10.sanity-quota/f10.sanity-quota] [count=3] [oflag=sync] 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 0.174454 s, 18.0 MB/s Delete files... Wait for unlink objects finished... 
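Test 10 above demonstrates two behaviors: lfs refuses to set limits on UID/GID 0 ('can't set quota for root usr/group/project'), and enforcement does not apply to root, so the 3 MB write succeeds under a 2048 KB hard limit that would give a non-root user EDQUOT. A sketch of the same check, assuming the stock lfs syntax, with the limit taken from the report above:

  # limits can be set for non-root IDs only
  lfs setquota -u quota_usr -B 2M /mnt/lustre
  lfs quota -u quota_usr /mnt/lustre
  # the same write that would fail for quota_usr succeeds as root
  dd if=/dev/zero of=/mnt/lustre/d10.sanity-quota/f10.sanity-quota \
          bs=1M count=3 oflag=sync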
Waiting for MDT destroys to complete PASS 10 (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 11: Chown/chgrp ignores quota ======= 03:49:59 (1713426599) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' Updated after 2s: want 'ug' got 'ug' lfs setquota: warning: inode hardlimit '1' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 2* 0 1 - lustre-MDT0000_UUID 0 - 0 - 2* - 2 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 2, total allocated block limit: 0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 11 (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 12a: Block quota rebalancing ======== 03:50:19 (1713426619) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write to ost0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d12a.sanity-quota/f12a.sanity-quota-0] [count=17] [oflag=sync] 17+0 records in 17+0 records out 17825792 bytes (18 MB) copied, 1.0691 s, 16.7 MB/s Write to ost1... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d12a.sanity-quota/f12a.sanity-quota-1] [count=17] [oflag=sync] dd: error writing '/mnt/lustre/d12a.sanity-quota/f12a.sanity-quota-1': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.265634 s, 15.8 MB/s Free space from ost0... Waiting for MDT destroys to complete Write to ost1 after space freed from ost0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d12a.sanity-quota/f12a.sanity-quota-1] [count=17] [oflag=sync] 17+0 records in 17+0 records out 17825792 bytes (18 MB) copied, 1.02958 s, 17.3 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 12a (25s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 12b: Inode quota rebalancing ======== 03:50:46 (1713426646) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Create 2048 files on mdt0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d12b.sanity-quota/f12b.sanity-quota] [2048] total: 2048 create in 4.01 seconds: 510.90 ops/second Create files on mdt1... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d12b.sanity-quota-1/f12b.sanity-quota] [1] mknod(/mnt/lustre/d12b.sanity-quota-1/f12b.sanity-quota0) error: Disk quota exceeded total: 0 create in 0.01 seconds: 0.00 ops/second Free space from mdt0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [unlinkmany] [/mnt/lustre/d12b.sanity-quota/f12b.sanity-quota] [2048] - unlinked 0 (time 1713426654 ; total 0 ; last 0) total: 2048 unlinks in 9 seconds: 227.555557 unlinks/second Waiting for MDT destroys to complete Create files on mdt1 after space freed from mdt0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d12b.sanity-quota-1/f12b.sanity-quota] [1024] total: 1024 create in 2.10 seconds: 487.98 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [unlinkmany] [/mnt/lustre/d12b.sanity-quota-1/f12b.sanity-quota] [1024] - unlinked 0 (time 1713426667 ; total 0 ; last 0) total: 1024 unlinks in 4 seconds: 256.000000 unlinks/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 12b (29s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 13: Cancel per-ID lock in the LRU list ========================================================== 03:51:16 (1713426676) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d13.sanity-quota/f13.sanity-quota] [count=1] [oflag=sync] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0652451 s, 16.1 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 13 (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 14: check panic in qmt_site_recalc_cb ========================================================== 03:51:36 (1713426696) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Creating new pool oleg452-server: Pool lustre.qpool1 created Adding targets to pool oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d14.sanity-quota/f14.sanity-quota-0] [count=10] [oflag=direct] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.227666 s, 46.1 MB/s Stop ost1... 
Stopping /mnt/lustre-ost1 (opts:) on oleg452-server Removing lustre-OST0000_UUID from qpool1 oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0000 Destroy the created pools: qpool1 lustre.qpool1 oleg452-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 14 (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 15: Set over 4T block quota ========= 03:52:05 (1713426725) Waiting for MDT destroys to complete PASS 15 (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 16a: lfs quota should skip the inactive MDT/OST ========================================================== 03:52:13 (1713426733) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d16a.sanity-quota/f16a.sanity-quota] [count=50] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.32832 s, 39.5 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 49152 0 512000 - 1 0 10240 - Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 49152 0 512000 - 1 0 10240 - lustre-MDT0000_UUID 0 - 0 - 1 - 1024 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 49152 - 65536 - - - - - Total allocated inode limit: 1024, total allocated block limit: 65536 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 49152 0 512000 - 1 0 10240 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 49152 - 65536 - - - - - Total allocated inode limit: 0, total allocated block limit: 65536 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 49152 0 512000 - 1 0 10240 - lustre-MDT0000_UUID 0 - 0 - 1 - 1024 - lustre-MDT0001_UUID[inact] [0] - [0] - [0] - [0] - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 49152 - 65536 - - - - - Total allocated inode limit: 1024, total allocated block limit: 65536 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 49152 0 512000 - 1 0 10240 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID[inact] [0] - [0] - [0] - [0] - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 49152 - 65536 - - - - - Total allocated inode limit: 0, total allocated block limit: 65536 Delete files... Wait for unlink objects finished... 
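In test 16a above, lfs quota -v tags a stopped target with an [inact] suffix and prints bracketed values for it (which should not be trusted) instead of failing the whole query; the /mnt/lustre summary line is still computed from the reachable targets. A quick scripted check for that condition (a sketch, not part of the suite):

    # Count per-target quota lines that refer to inactive (unreachable) targets:
    lfs quota -v -u quota_usr /mnt/lustre | grep -c '\[inact\]'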
Waiting for MDT destroys to complete PASS 16a (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 16b: lfs quota should skip the nonexistent MDT/OST ========================================================== 03:52:23 (1713426743) SKIP: sanity-quota test_16b needs >= 3 MDTs SKIP 16b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 17: DQACQ return recoverable error == 03:52:26 (1713426746) DQACQ return -ENOLCK Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=37 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 2.62175 s, 400 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete DQACQ return -EAGAIN Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=11 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.03883 s, 345 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete DQACQ return -ETIMEDOUT Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=110 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 2.86711 s, 366 kB/s Delete files... Wait for unlink objects finished... 
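A detail that makes the test 17 output easier to read: fail_loc 0xa04 injects a failure into the quota-acquire (DQACQ) reply path, and fail_val carries the errno to inject, which is why the values above line up with the case headers: 37 = ENOLCK, 11 = EAGAIN, 110 = ETIMEDOUT, and (below) 107 = ENOTCONN. The client must treat each as recoverable and retry, hence the ~3 s elapsed times for a 1 MB direct write. Sketched as shell steps (semantics of the fail_loc taken from this log, not verified against the source tree):

    lctl set_param fail_val=37 fail_loc=0xa04   # make DQACQ replies fail with -ENOLCK
    runas -u 60000 -g 60000 dd if=/dev/zero of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota \
        bs=1M count=1 oflag=direct              # stalls on retries, then succeeds
    lctl set_param fail_val=0 fail_loc=0        # clear the injection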
Waiting for MDT destroys to complete DQACQ return -ENOTCONN Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=107 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.04174 s, 345 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 17 (92s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 18: MDS failover while writing, no watchdog triggered (b14840) ========================================================== 03:54:00 (1713426840) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' User quota (limit: 200) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 204800 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write 100M (buffered) ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota] [count=100] UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 1414116 2900 1284788 1% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 1414116 1904 1285784 1% /mnt/lustre[MDT:1] lustre-OST0000_UUID 3833116 1604 3601296 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3833116 1524 3605496 1% /mnt/lustre[OST:1] filesystem_summary: 7666232 3128 7206792 1% /mnt/lustre Fail mds for 40 seconds Failing mds1 on oleg452-server Stopping /mnt/lustre-mds1 (opts:) on oleg452-server 03:54:09 (1713426849) shut down Failover mds1 to oleg452-server mount facets: mds1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0000 03:54:22 (1713426862) targets are mounted 03:54:22 (1713426862) facet_failover done 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 20.8634 s, 5.0 MB/s oleg452-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec (dd_pid=18647, time=0, timeout=600) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102400 0 204800 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 102400 - 114688 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 114688 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (limit: 200) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 204800 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write 100M (directio) ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota] [count=100] [oflag=direct] UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 1414116 2456 1285232 1% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 1414116 1904 1285784 1% /mnt/lustre[MDT:1] lustre-OST0000_UUID 3833116 1604 3592840 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3833116 1524 3605496 1% /mnt/lustre[OST:1] filesystem_summary: 7666232 3128 7198336 1% /mnt/lustre Fail mds for 40 seconds 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 1.97808 s, 53.0 MB/s Failing mds1 on oleg452-server Stopping /mnt/lustre-mds1 (opts:) on oleg452-server 03:54:40 (1713426880) shut down Failover mds1 to oleg452-server mount facets: mds1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0000 03:54:54 (1713426894) targets are mounted 03:54:54 (1713426894) facet_failover done oleg452-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec (dd_pid=20929, time=0, timeout=600) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102400 0 204800 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 102400 - 109568 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 109568 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
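Test 18's pattern, for reference: set a block limit well above the write size, start a 100 MB write as the quota user, fail over mds1 underneath it, and require that dd completes after recovery instead of tripping a watchdog. The buffered pass above took ~21 s wall clock because the write straddled the failover; the direct I/O pass finished before the MDS went down. A condensed sketch using the suite's helpers (fail and runas are test-framework functions, not standard tools):

    lfs setquota -u quota_usr -B 200M /mnt/lustre
    runas -u 60000 -g 60000 dd if=/dev/zero of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota \
        bs=1M count=100 &
    fail mds1      # framework helper: stop, remount and wait for MDT recovery
    wait $!        # dd must finish once lustre-MDT0000 is back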
Waiting for MDT destroys to complete PASS 18 (69s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 19: Updating admin limits doesn't zero operational limits(b14790) ========================================================== 03:55:11 (1713426911) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Set user quota (limit: 5M) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 5120 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Update quota limits Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 5120 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota] [count=6] dd: error writing '/mnt/lustre/d19.sanity-quota/f19.sanity-quota': Disk quota exceeded 6+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.184417 s, 28.4 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5120* 0 5120 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 5120* - 5120 - - - - - Total allocated inode limit: 0, total allocated block limit: 5120 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota] [count=6] [seek=6] dd: error writing '/mnt/lustre/d19.sanity-quota/f19.sanity-quota': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0447121 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5120* 0 5120 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 5120* - 5120 - - - - - Total allocated inode limit: 0, total allocated block limit: 5120 Delete files... Wait for unlink objects finished... 
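What test 19 pins down (the b14790 regression): re-issuing the administrative limits through lfs setquota must not zero the operational limits already distributed to the servers, so the 5 MB hard limit stays enforced after the update and both dd attempts above stop at exactly 5120 kbytes. In outline (limit value from the log; the second setquota stands in for the "Update quota limits" step, whose exact arguments are not shown):

    lfs setquota -u quota_usr -B 5M /mnt/lustre          # initial hard limit
    lfs setquota -u quota_usr -B 5M /mnt/lustre          # "Update quota limits" pass
    runas -u 60000 -g 60000 dd if=/dev/zero of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota \
        bs=1M count=6                                    # still EDQUOT at 5 MB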
Waiting for MDT destroys to complete PASS 19 (20s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 20: Test if setquota specifiers work properly (b15754) ========================================================== 03:55:32 (1713426932) PASS 20 (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 21: Setquota while writing & deleting (b16053) ========================================================== 03:55:40 (1713426940) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Set limit(block:10G; file:1000000) for user: quota_usr Set limit(block:10G; file:1000000) for group: quota_usr lfs setquota: warning: block hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Set limit(block:10G; file:) for project: 1000 lfs setquota: warning: block hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Set quota for 1 times Set quota for 2 times Set quota for 3 times Set quota for 4 times Set quota for 5 times Set quota for 6 times Set quota for 7 times Set quota for 8 times Set quota for 9 times Set quota for 10 times Set quota for 11 times Set quota for 12 times Set quota for 13 times Set quota for 14 times Set quota for 15 times Set quota for 16 times Set quota for 17 times Set quota for 18 times Set quota for 19 times Set quota for 20 times Set quota for 21 times Set quota for 22 times (dd_pid=27651, time=0)successful (dd_pid=27654, time=1)successful Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 21 (46s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 22: enable/disable quota by 'lctl conf_param/set_param -P' ========================================================== 03:56:27 (1713426987) Set both mdt & ost quota type as ug Waiting 90s for 'ugp' Updated after 3s: want 'ugp' got 'ugp' Restart... 
Stopping clients: oleg452-client.virtnet /mnt/lustre (opts:) Stopping client oleg452-client.virtnet /mnt/lustre opts: Stopping clients: oleg452-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg452-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg452-server sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/nbd0 on / type ext4 (ro,relatime,stripe=32,data=ordered) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11216) debugfs on /sys/kernel/debug type debugfs (rw,relatime) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) mqueue on /dev/mqueue type mqueue (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) none on /mnt type ramfs (rw,relatime) none on /var/lib/stateless/writable type tmpfs (rw,relatime) /dev/vda on /home/green/git/lustre-release type squashfs (ro,relatime) none on /var/cache/man type tmpfs (rw,relatime) none on /var/log type tmpfs (rw,relatime) none on /var/lib/dbus type tmpfs (rw,relatime) none on /tmp type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) none on /var/tmp type tmpfs (rw,relatime) none on /var/lib/NetworkManager type tmpfs (rw,relatime) none on /var/lib/systemd/random-seed type tmpfs (rw,relatime) none on /var/spool type tmpfs (rw,relatime) none on /var/lib/nfs type tmpfs (rw,relatime) none on /var/lib/gssproxy type tmpfs (rw,relatime) none on /var/lib/logrotate type tmpfs (rw,relatime) none on /etc type tmpfs (rw,relatime) none on /var/lib/rsyslog type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) 192.168.200.253:/exports/state/oleg452-client.virtnet on /var/lib/stateless/state type nfs4 
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.52,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg452-client.virtnet/boot on /boot type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.52,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg452-client.virtnet/etc/kdump.conf on /etc/kdump.conf type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.52,local_lock=none,addr=192.168.200.253) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) 192.168.200.253:/exports/testreports/42096/testresults/sanity-quota-ldiskfs-DNE-centos7_x86_64-centos7_x86_64 on /tmp/testlogs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.52,local_lock=none,addr=192.168.200.253) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=382032k,mode=700) /dev/vda on /usr/sbin/mount.lustre type squashfs (ro,relatime) Checking servers environments Checking clients oleg452-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg452-server' oleg452-server: oleg452-server.virtnet: executing load_modules_local oleg452-server: Loading modules from /home/green/git/lustre-release/lustre oleg452-server: detected 4 online CPUs by sysfs oleg452-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Starting client oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Started clients oleg452-client.virtnet: 192.168.204.152@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a8ab2800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a8ab2800.idle_timeout=debug Verify if quota is enabled Set both mdt & ost quota 
type as none Waiting 90s for 'none' Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' Restart... Stopping clients: oleg452-client.virtnet /mnt/lustre (opts:) Stopping client oleg452-client.virtnet /mnt/lustre opts: Stopping clients: oleg452-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg452-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg452-server sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/nbd0 on / type ext4 (ro,relatime,stripe=32,data=ordered) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11216) debugfs on /sys/kernel/debug type debugfs (rw,relatime) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) mqueue on /dev/mqueue type mqueue (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) none on /mnt type ramfs (rw,relatime) none on /var/lib/stateless/writable type tmpfs (rw,relatime) /dev/vda on /home/green/git/lustre-release type squashfs (ro,relatime) none on /var/cache/man type tmpfs (rw,relatime) none on /var/log type tmpfs (rw,relatime) none on /var/lib/dbus type tmpfs (rw,relatime) none on /tmp type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) none on /var/tmp type tmpfs (rw,relatime) none on /var/lib/NetworkManager type tmpfs (rw,relatime) none on /var/lib/systemd/random-seed type tmpfs (rw,relatime) none on /var/spool type tmpfs (rw,relatime) none on /var/lib/nfs type tmpfs (rw,relatime) none on /var/lib/gssproxy type tmpfs (rw,relatime) none on /var/lib/logrotate type tmpfs (rw,relatime) none on /etc type tmpfs (rw,relatime) none on /var/lib/rsyslog type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs 
(rw,relatime) 192.168.200.253:/exports/state/oleg452-client.virtnet on /var/lib/stateless/state type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.52,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg452-client.virtnet/boot on /boot type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.52,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg452-client.virtnet/etc/kdump.conf on /etc/kdump.conf type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.52,local_lock=none,addr=192.168.200.253) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) 192.168.200.253:/exports/testreports/42096/testresults/sanity-quota-ldiskfs-DNE-centos7_x86_64-centos7_x86_64 on /tmp/testlogs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.52,local_lock=none,addr=192.168.200.253) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=382032k,mode=700) /dev/vda on /usr/sbin/mount.lustre type squashfs (ro,relatime) Checking servers environments Checking clients oleg452-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory loading modules on: 'oleg452-server' oleg452-server: oleg452-server.virtnet: executing load_modules_local oleg452-server: Loading modules from /home/green/git/lustre-release/lustre oleg452-server: detected 4 online CPUs by sysfs oleg452-server: Force libcfs to create 2 CPU partitions oleg452-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Starting client oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Started clients oleg452-client.virtnet: 
192.168.204.152@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a328e800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a328e800.idle_timeout=debug Verify if quota is disabled PASS 22 (107s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 23: Quota should be honored with directIO (b16125) ========================================================== 03:58:16 (1713427096) OST0_SIZE: 3605408 required: 6144 run for 4MB test file Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' User quota (limit: 4 MB) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 4096 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Step1: trigger EDQUOT with O_DIRECT Write half of file running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=2] [oflag=direct] 2+0 records in 2+0 records out 2097152 bytes (2.1 MB) copied, 0.0505481 s, 41.5 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=3] [seek=2] [oflag=direct] [conv=notrunc] dd: error writing '/mnt/lustre/d23.sanity-quota/f23.sanity-quota': Disk quota exceeded 2+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0416669 s, 25.2 MB/s Step1: done Step2: rewrite should succeed running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=1] [oflag=direct] [conv=notrunc] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.023759 s, 44.1 MB/s Step2: done Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 OST0_SIZE: 3605408 required: 61440 run for 40MB test file Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (limit: 40 MB) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 40960 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Step1: trigger EDQUOT with O_DIRECT Write half of file running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=20] [oflag=direct] 20+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 0.392663 s, 53.4 MB/s Write out of block quota ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=21] [seek=20] [oflag=direct] [conv=notrunc] dd: error writing '/mnt/lustre/d23.sanity-quota/f23.sanity-quota': Disk quota exceeded 20+0 records in 19+0 records out 19922944 bytes (20 MB) copied, 0.400795 s, 49.7 MB/s Step1: done Step2: rewrite should succeed running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=1] [oflag=direct] [conv=notrunc] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0239089 s, 43.9 MB/s Step2: done Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 23 (42s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 24: lfs draws an asterix when limit is reached (b16646) ========================================================== 03:58:59 (1713427139) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Set user quota (limit: 5M) running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d24.sanity-quota/f24.sanity-quota] [count=6] 6+0 records in 6+0 records out 6291456 bytes (6.3 MB) copied, 0.174497 s, 36.1 MB/s /mnt/lustre 6144* 0 5120 - 1 0 0 - 6144* - 6144 - - - - - Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 24 (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 25: check indexes versions ========== 03:59:15 (1713427155) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg452-server: Pool lustre.qpool1 created Adding targets to pool oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.181276 s, 28.9 MB/s Write out of block quota ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.147795 s, 35.5 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0458163 s, 0.0 kB/s Destroy the created pools: qpool1 lustre.qpool1 oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg452-server: Pool lustre.qpool1 destroyed Waiting 90s for 'foo' Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 25 (37s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27a: lfs quota/setquota should handle wrong arguments (b19612) ========================================================== 03:59:53 (1713427193) lfs quota: name and mount point must be specified Display disk usage and limits. usage: quota [-q] [-v] [-h] [-o OBD_UUID|-i MDT_IDX|-I OST_IDX] [{-u|-g|-p} UNAME|UID|GNAME|GID|PROJID] [--pool <pool_name>] <filesystem> quota -t <-u|-g|-p> [--pool <pool_name>] <filesystem> quota [-q] [-v] [h] {-U|-G|-P} [--pool <pool_name>] <filesystem> quota -a {-u|-g|-p} [-s start_qid] [-e end_qid] <filesystem> lfs setquota: either -u or -g must be specified setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM PASS 27a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27b: lfs quota/setquota should handle user/group/project ID (b20200) ========================================================== 03:59:56 (1713427196) lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre 
manual for details Disk quotas for usr 60000 (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp 60000 (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 PASS 27b (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27c: lfs quota should support human-readable output ========================================================== 04:00:01 (1713427201) PASS 27c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27d: lfs setquota should support fraction block limit ========================================================== 04:00:05 (1713427205) PASS 27d (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 30: Hard limit updates should not reset grace times ========================================================== 04:00:09 (1713427209) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' Updated after 3s: want 'u' got 'u' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.229274 s, 36.6 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8192* 4096 0 1s 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8192 - 9264 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 9264 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [conv=notrunc] [oflag=append] [count=4] dd: error writing '/mnt/lustre/d30.sanity-quota/f30.sanity-quota': Disk quota exceeded 2+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.090565 s, 11.6 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 9216* 4096 0 1s 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 9216 - 9264 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 9264 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [conv=notrunc] [oflag=append] [count=4] dd: error writing '/mnt/lustre/d30.sanity-quota/f30.sanity-quota': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes 
(0 B) copied, 0.04347 s, 0.0 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 30 (24s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 33: Basic usage tracking for user & group & project ========================================================== 04:00:34 (1713427234) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write files... lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-0 Iteration 0/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-1 Iteration 1/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-2 Iteration 2/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-3 Iteration 3/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-4 Iteration 4/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-5 Iteration 5/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-6 Iteration 6/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-7 Iteration 7/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-8 Iteration 8/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-9 Iteration 9/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-10 Iteration 10/10 completed Wait for setattr on objects finished... Waiting for MDT destroys to complete Verify disk usage after write Verify inode usage after write Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Verify disk usage after delete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 33 (34s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 34: Usage transfer for user & group & project ========================================================== 04:01:09 (1713427269) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write file... chown the file to user 60000 Wait for setattr on objects finished... Waiting for MDT destroys to complete Verify disk usage for user 60000 chgrp the file to group 60000 Wait for setattr on objects finished... Waiting for MDT destroys to complete Verify disk usage for group 60000 chown the file to user 60001 Wait for setattr on objects finished... Waiting for MDT destroys to complete change_project project id to 1000 lfs project -p 1000 /mnt/lustre/d34.sanity-quota/f34.sanity-quota Wait for setattr on objects finished... Waiting for MDT destroys to complete Verify disk usage for user 60001/60000 and group 60000 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 34 (54s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 35: Usage is still accessible across reboot ========================================================== 04:02:05 (1713427325) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write file... lfs project -p 1000 /mnt/lustre/d35.sanity-quota/f35.sanity-quota Wait for setattr on objects finished... Waiting for MDT destroys to complete Save disk usage before restart User 60000: 2048KB 1 inodes Group 60000: 2048KB 1 inodes Project 1000: 2048KB 1 inodes Restart... 
Stopping clients: oleg452-client.virtnet /mnt/lustre (opts:) Stopping client oleg452-client.virtnet /mnt/lustre opts: Stopping clients: oleg452-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg452-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg452-server Checking servers environments Checking clients oleg452-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg452-server' oleg452-server: oleg452-server.virtnet: executing load_modules_local oleg452-server: Loading modules from /home/green/git/lustre-release/lustre oleg452-server: detected 4 online CPUs by sysfs oleg452-server: Force libcfs to create 2 CPU partitions oleg452-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Starting client oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Started clients oleg452-client.virtnet: 192.168.204.152@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800aaf52000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800aaf52000.idle_timeout=debug affected facets: Verify disk usage after restart Append to the same file... Verify space usage is increased Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 35 (90s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 37: Quota accounted properly for file created by 'lfs setstripe' ========================================================== 04:03:36 (1713427416) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0693183 s, 15.1 MB/s Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
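Test 37's concern, per its description: OST objects created ahead of any data by lfs setstripe must still be charged to the file's owner once data is written, i.e. preallocated objects do not escape accounting. Roughly, assuming the 1 MB write shown above (stripe parameters and the chown step are illustrative, inferred from the test's title rather than shown in this log):

    lfs setstripe -c 1 -i 0 /mnt/lustre/d37.sanity-quota/f37.sanity-quota
    chown quota_usr /mnt/lustre/d37.sanity-quota/f37.sanity-quota
    runas -u 60000 -g 60000 dd if=/dev/zero of=/mnt/lustre/d37.sanity-quota/f37.sanity-quota \
        bs=1M count=1
    lfs quota -u quota_usr /mnt/lustre    # kbytes should now include the 1 MB written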
Waiting for MDT destroys to complete PASS 37 (22s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 38: Quota accounting iterator doesn't skip id entries ========================================================== 04:04:00 (1713427440) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Create 10000 files... Found 10000 id entries Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 38 (438s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 39: Project ID interface works correctly ========================================================== 04:11:19 (1713427879) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -p 1024 /mnt/lustre/d39.sanity-quota/project Stopping clients: oleg452-client.virtnet /mnt/lustre (opts:) Stopping client oleg452-client.virtnet /mnt/lustre opts: Stopping clients: oleg452-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg452-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg452-server sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/nbd0 on / type ext4 (ro,relatime,stripe=32,data=ordered) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11216) debugfs on /sys/kernel/debug type debugfs (rw,relatime) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) mqueue on /dev/mqueue type mqueue (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) none on /mnt type ramfs (rw,relatime) none on /var/lib/stateless/writable type tmpfs 
(rw,relatime) /dev/vda on /home/green/git/lustre-release type squashfs (ro,relatime) none on /var/cache/man type tmpfs (rw,relatime) none on /var/log type tmpfs (rw,relatime) none on /var/lib/dbus type tmpfs (rw,relatime) none on /tmp type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) none on /var/tmp type tmpfs (rw,relatime) none on /var/lib/NetworkManager type tmpfs (rw,relatime) none on /var/lib/systemd/random-seed type tmpfs (rw,relatime) none on /var/spool type tmpfs (rw,relatime) none on /var/lib/nfs type tmpfs (rw,relatime) none on /var/lib/gssproxy type tmpfs (rw,relatime) none on /var/lib/logrotate type tmpfs (rw,relatime) none on /etc type tmpfs (rw,relatime) none on /var/lib/rsyslog type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) 192.168.200.253:/exports/state/oleg452-client.virtnet on /var/lib/stateless/state type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.52,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg452-client.virtnet/boot on /boot type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.52,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg452-client.virtnet/etc/kdump.conf on /etc/kdump.conf type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.52,local_lock=none,addr=192.168.200.253) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) 192.168.200.253:/exports/testreports/42096/testresults/sanity-quota-ldiskfs-DNE-centos7_x86_64-centos7_x86_64 on /tmp/testlogs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.52,local_lock=none,addr=192.168.200.253) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=382032k,mode=700) /dev/vda on /usr/sbin/mount.lustre type squashfs (ro,relatime) Checking servers environments Checking clients oleg452-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg452-server' oleg452-server: oleg452-server.virtnet: executing load_modules_local oleg452-server: Loading modules from /home/green/git/lustre-release/lustre oleg452-server: detected 4 online CPUs by sysfs oleg452-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o 
localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Starting client oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Started clients oleg452-client.virtnet: 192.168.204.152@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800b0bfb800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800b0bfb800.idle_timeout=debug Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 39 (72s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40a: Hard link across different project ID ========================================================== 04:12:33 (1713427953) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40a.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d40a.sanity-quota/dir2 ln: failed to create hard link '/mnt/lustre/d40a.sanity-quota/dir2/1_link' => '/mnt/lustre/d40a.sanity-quota/dir1/1': Invalid cross-device link Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 40a (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40b: Mv across different project ID ========================================================== 04:12:46 (1713427966) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40b.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d40b.sanity-quota/dir2 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 40b (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40c: Remote child Dir inherit project quota properly ========================================================== 04:13:01 (1713427981) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40c.sanity-quota/dir Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 40c (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40d: Stripe Directory inherit project quota properly ========================================================== 04:13:16 (1713427996) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1000 /mnt/lustre/d40d.sanity-quota/dir Delete files... Wait for unlink objects finished... 
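Tests 40a-40d all probe the same boundary rule: once 'lfs project -s' marks a directory with an inherited project ID, the tree behaves like a separate device, so linking across trees with different IDs is refused. A short sketch of the 40a case, with illustrative paths; the EXDEV failure is the 'Invalid cross-device link' message seen above:

  mkdir /mnt/lustre/dir1 /mnt/lustre/dir2
  lfs project -sp 1 /mnt/lustre/dir1    # -p sets the project ID, -s makes children inherit it
  lfs project -sp 2 /mnt/lustre/dir2
  touch /mnt/lustre/dir1/f
  ln /mnt/lustre/dir1/f /mnt/lustre/dir2/f_link    # fails with EXDEV: a file cannot belong to two projects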
Waiting for MDT destroys to complete PASS 40d (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 41: df should return projid-specific values ========================================================== 04:13:30 (1713428010) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Waiting 90s for 'ugp' striped dir -i1 -c2 -H crush /mnt/lustre/d41.sanity-quota/dir lfs project -sp 41000 /mnt/lustre/d41.sanity-quota/dir == global statfs: /mnt/lustre == Filesystem 1024-blocks Used Available Capacity Mounted on 192.168.204.152@tcp:/lustre 7666232 4836 7209204 1% /mnt/lustre Filesystem Inodes IUsed IFree IUse% Mounted on 192.168.204.152@tcp:/lustre 523966 598 523368 1% /mnt/lustre Disk quotas for prj 41000 (pid 41000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre/d41.sanity-quota/dir 12 0 102400 - 3 0 4096 - == project statfs (prjid=41000): /mnt/lustre/d41.sanity-quota/dir == Filesystem 1024-blocks Used Available Capacity Mounted on 192.168.204.152@tcp:/lustre 102400 12 102388 1% /mnt/lustre Filesystem Inodes IUsed IFree IUse% Mounted on 192.168.204.152@tcp:/lustre 4096 3 4093 1% /mnt/lustre llite.lustre-ffff8800b0bfb800.statfs_project=0 llite.lustre-ffff8800b0bfb800.statfs_project=1 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 41 (25s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 48: lfs quota --delete should delete quota project ID ========================================================== 04:13:57 (1713428037) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0279638 s, 37.5 MB/s - id: 60000 osd-ldiskfs - id: 60000 pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0432067 s, 24.3 MB/s - id: 60000 cat: /proc/fs/lustre/osd-ldiskfs/lustre-OST0000/quota_slave/limit_user: No such file or directory - id: 60000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0488125 s, 21.5 MB/s - id: 60000 osd-ldiskfs - id: 60000 pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0451563 s, 23.2 MB/s - id: 60000 cat: /proc/fs/lustre/osd-ldiskfs/lustre-OST0000/quota_slave/limit_group: No such file or directory - id: 60000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0454956 s, 23.0 MB/s - id: 10000 osd-ldiskfs - id: 10000 
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0263738 s, 39.8 MB/s - id: 10000 cat: /proc/fs/lustre/osd-ldiskfs/lustre-OST0000/quota_slave/limit_project: No such file or directory - id: 10000 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 48 (37s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 49: lfs quota -a prints the quota usage for all quota IDs ========================================================== 04:14:36 (1713428076) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 setquota for users and groups fail_loc=0xa09 lfs setquota: 1000 / 38 seconds fail_loc=0 903 0 0 102400 - 0 0 10240 - 904 0 0 102400 - 0 0 10240 - 905 0 0 102400 - 0 0 10240 - 906 0 0 102400 - 0 0 10240 - 907 0 0 102400 - 0 0 10240 - 908 0 0 102400 - 0 0 10240 - 909 0 0 102400 - 0 0 10240 - 910 0 0 102400 - 0 0 10240 - 911 0 0 102400 - 0 0 10240 - 912 0 0 102400 - 0 0 10240 - 913 0 0 102400 - 0 0 10240 - 914 0 0 102400 - 0 0 10240 - 915 0 0 102400 - 0 0 10240 - 916 0 0 102400 - 0 0 10240 - 917 0 0 102400 - 0 0 10240 - 918 0 0 102400 - 0 0 10240 - 919 0 0 102400 - 0 0 10240 - 920 0 0 102400 - 0 0 10240 - 921 0 0 102400 - 0 0 10240 - 922 0 0 102400 - 0 0 10240 - 923 0 0 102400 - 0 0 10240 - 924 0 0 102400 - 0 0 10240 - 925 0 0 102400 - 0 0 10240 - 926 0 0 102400 - 0 0 10240 - 927 0 0 102400 - 0 0 10240 - 928 0 0 102400 - 0 0 10240 - 929 0 0 102400 - 0 0 10240 - 930 0 0 102400 - 0 0 10240 - 931 0 0 102400 - 0 0 10240 - 932 0 0 102400 - 0 0 10240 - 933 0 0 102400 - 0 0 10240 - 934 0 0 102400 - 0 0 10240 - 935 0 0 102400 - 0 0 10240 - 936 0 0 102400 - 0 0 10240 - 937 0 0 102400 - 0 0 10240 - 938 0 0 102400 - 0 0 10240 - 939 0 0 102400 - 0 0 10240 - 940 0 0 102400 - 0 0 10240 - 941 0 0 102400 - 0 0 10240 - 942 0 0 102400 - 0 0 10240 - 943 0 0 102400 - 0 0 10240 - 944 0 0 102400 - 0 0 10240 - 945 0 0 102400 - 0 0 10240 - 946 0 0 102400 - 0 0 10240 - 947 0 0 102400 - 0 0 10240 - 948 0 0 102400 - 0 0 10240 - 949 0 0 102400 - 0 0 10240 - 950 0 0 102400 - 0 0 10240 - 951 0 0 102400 - 0 0 10240 - 952 0 0 102400 - 0 0 10240 - 953 0 0 102400 - 0 0 10240 - 954 0 0 102400 - 0 0 10240 - 955 0 0 102400 - 0 0 10240 - 956 0 0 102400 - 0 0 10240 - 957 0 0 102400 - 0 0 10240 - 958 0 0 102400 - 0 0 10240 - 959 0 0 102400 - 0 0 10240 - 960 0 0 102400 - 0 0 10240 - 961 0 0 102400 - 0 0 10240 - 962 0 0 102400 - 0 0 10240 - 963 0 0 102400 - 0 0 10240 - 964 0 0 102400 - 0 0 10240 - 965 0 0 102400 - 0 0 10240 - 966 0 0 102400 - 0 0 10240 - 967 0 0 102400 - 0 0 10240 - 968 0 0 102400 - 0 0 10240 - 969 0 0 102400 - 0 0 10240 - 970 0 0 102400 - 0 0 10240 - 971 0 0 102400 - 0 0 10240 - 972 0 0 102400 - 0 0 10240 - 973 0 0 102400 - 0 0 10240 - 974 0 0 102400 - 0 0 10240 - 975 0 0 102400 - 0 0 10240 - 976 0 0 102400 - 0 0 10240 - 977 0 0 102400 - 0 0 10240 - 978 0 0 102400 - 0 0 10240 - 979 0 0 102400 - 0 0 10240 - 980 0 0 102400 - 0 0 10240 - 981 0 0 102400 - 0 0 10240 - 982 0 0 102400 - 0 0 10240 - 983 0 0 102400 - 0 0 10240 - 984 0 0 102400 - 0 0 10240 - 985 0 0 102400 - 0 0 10240 - 986 0 0 102400 - 0 0 10240 - 987 0 0 102400 - 0 0 
10240 - 988 0 0 102400 - 0 0 10240 - 989 0 0 102400 - 0 0 10240 - 990 0 0 102400 - 0 0 10240 - 991 0 0 102400 - 0 0 10240 - 992 0 0 102400 - 0 0 10240 - 993 0 0 102400 - 0 0 10240 - 994 0 0 102400 - 0 0 10240 - 995 0 0 102400 - 0 0 10240 - 996 0 0 102400 - 0 0 10240 - 997 0 0 102400 - 0 0 10240 - 998 0 0 102400 - 0 0 10240 - polkitd 0 0 102400 - 0 0 10240 - green 0 0 102400 - 0 0 10240 - quota_usr 0 0 0 - 0 [0] [0] - quota_2usr 0 0 0 - 0 0 0 - get all usr quota: 1000 / 0 seconds 903 0 0 204800 - 0 0 20480 - 904 0 0 204800 - 0 0 20480 - 905 0 0 204800 - 0 0 20480 - 906 0 0 204800 - 0 0 20480 - 907 0 0 204800 - 0 0 20480 - 908 0 0 204800 - 0 0 20480 - 909 0 0 204800 - 0 0 20480 - 910 0 0 204800 - 0 0 20480 - 911 0 0 204800 - 0 0 20480 - 912 0 0 204800 - 0 0 20480 - 913 0 0 204800 - 0 0 20480 - 914 0 0 204800 - 0 0 20480 - 915 0 0 204800 - 0 0 20480 - 916 0 0 204800 - 0 0 20480 - 917 0 0 204800 - 0 0 20480 - 918 0 0 204800 - 0 0 20480 - 919 0 0 204800 - 0 0 20480 - 920 0 0 204800 - 0 0 20480 - 921 0 0 204800 - 0 0 20480 - 922 0 0 204800 - 0 0 20480 - 923 0 0 204800 - 0 0 20480 - 924 0 0 204800 - 0 0 20480 - 925 0 0 204800 - 0 0 20480 - 926 0 0 204800 - 0 0 20480 - 927 0 0 204800 - 0 0 20480 - 928 0 0 204800 - 0 0 20480 - 929 0 0 204800 - 0 0 20480 - 930 0 0 204800 - 0 0 20480 - 931 0 0 204800 - 0 0 20480 - 932 0 0 204800 - 0 0 20480 - 933 0 0 204800 - 0 0 20480 - 934 0 0 204800 - 0 0 20480 - 935 0 0 204800 - 0 0 20480 - 936 0 0 204800 - 0 0 20480 - 937 0 0 204800 - 0 0 20480 - 938 0 0 204800 - 0 0 20480 - 939 0 0 204800 - 0 0 20480 - 940 0 0 204800 - 0 0 20480 - 941 0 0 204800 - 0 0 20480 - 942 0 0 204800 - 0 0 20480 - 943 0 0 204800 - 0 0 20480 - 944 0 0 204800 - 0 0 20480 - 945 0 0 204800 - 0 0 20480 - 946 0 0 204800 - 0 0 20480 - 947 0 0 204800 - 0 0 20480 - 948 0 0 204800 - 0 0 20480 - 949 0 0 204800 - 0 0 20480 - 950 0 0 204800 - 0 0 20480 - 951 0 0 204800 - 0 0 20480 - 952 0 0 204800 - 0 0 20480 - 953 0 0 204800 - 0 0 20480 - 954 0 0 204800 - 0 0 20480 - 955 0 0 204800 - 0 0 20480 - 956 0 0 204800 - 0 0 20480 - 957 0 0 204800 - 0 0 20480 - 958 0 0 204800 - 0 0 20480 - 959 0 0 204800 - 0 0 20480 - 960 0 0 204800 - 0 0 20480 - 961 0 0 204800 - 0 0 20480 - 962 0 0 204800 - 0 0 20480 - 963 0 0 204800 - 0 0 20480 - 964 0 0 204800 - 0 0 20480 - 965 0 0 204800 - 0 0 20480 - 966 0 0 204800 - 0 0 20480 - 967 0 0 204800 - 0 0 20480 - 968 0 0 204800 - 0 0 20480 - 969 0 0 204800 - 0 0 20480 - 970 0 0 204800 - 0 0 20480 - 971 0 0 204800 - 0 0 20480 - 972 0 0 204800 - 0 0 20480 - 973 0 0 204800 - 0 0 20480 - 974 0 0 204800 - 0 0 20480 - 975 0 0 204800 - 0 0 20480 - 976 0 0 204800 - 0 0 20480 - 977 0 0 204800 - 0 0 20480 - 978 0 0 204800 - 0 0 20480 - 979 0 0 204800 - 0 0 20480 - 980 0 0 204800 - 0 0 20480 - 981 0 0 204800 - 0 0 20480 - 982 0 0 204800 - 0 0 20480 - 983 0 0 204800 - 0 0 20480 - 984 0 0 204800 - 0 0 20480 - 985 0 0 204800 - 0 0 20480 - 986 0 0 204800 - 0 0 20480 - 987 0 0 204800 - 0 0 20480 - 988 0 0 204800 - 0 0 20480 - 989 0 0 204800 - 0 0 20480 - 990 0 0 204800 - 0 0 20480 - 991 0 0 204800 - 0 0 20480 - 992 0 0 204800 - 0 0 20480 - 993 0 0 204800 - 0 0 20480 - 994 0 0 204800 - 0 0 20480 - systemd-network 0 0 204800 - 0 0 20480 - systemd-bus-proxy 0 0 204800 - 0 0 20480 - input 0 0 204800 - 0 0 20480 - polkitd 0 0 204800 - 0 0 20480 - ssh_keys 0 0 204800 - 0 0 20480 - green 0 0 204800 - 0 0 20480 - quota_usr 0 0 0 - 0 [0] [0] - quota_2usr 0 0 0 - 0 0 0 - get all grp quota: 1000 / 0 seconds Create 991 files... 
- open/close 641 (time 1713428134.07 total 10.02 last 64.00) total: 991 open/close in 15.29 seconds: 64.81 ops/second 951 4 0 102400 - 1 0 10240 - 952 4 0 102400 - 1 0 10240 - 953 4 0 102400 - 1 0 10240 - 954 4 0 102400 - 1 0 10240 - 955 4 0 102400 - 1 0 10240 - 956 4 0 102400 - 1 0 10240 - 957 4 0 102400 - 1 0 10240 - 958 4 0 102400 - 1 0 10240 - 959 4 0 102400 - 1 0 10240 - 960 4 0 102400 - 1 0 10240 - 961 4 0 102400 - 1 0 10240 - 962 4 0 102400 - 1 0 10240 - 963 4 0 102400 - 1 0 10240 - 964 4 0 102400 - 1 0 10240 - 965 4 0 102400 - 1 0 10240 - 966 4 0 102400 - 1 0 10240 - 967 4 0 102400 - 1 0 10240 - 968 4 0 102400 - 1 0 10240 - 969 4 0 102400 - 1 0 10240 - 970 4 0 102400 - 1 0 10240 - 971 4 0 102400 - 1 0 10240 - 972 4 0 102400 - 1 0 10240 - 973 4 0 102400 - 1 0 10240 - 974 4 0 102400 - 1 0 10240 - 975 4 0 102400 - 1 0 10240 - 976 4 0 102400 - 1 0 10240 - 977 4 0 102400 - 1 0 10240 - 978 4 0 102400 - 1 0 10240 - 979 4 0 102400 - 1 0 10240 - 980 4 0 102400 - 1 0 10240 - 981 4 0 102400 - 1 0 10240 - 982 4 0 102400 - 1 0 10240 - 983 4 0 102400 - 1 0 10240 - 984 4 0 102400 - 1 0 10240 - 985 4 0 102400 - 1 0 10240 - 986 4 0 102400 - 1 0 10240 - 987 4 0 102400 - 1 0 10240 - 988 4 0 102400 - 1 0 10240 - 989 4 0 102400 - 1 0 10240 - 990 4 0 102400 - 1 0 10240 - 991 4 0 102400 - 1 0 10240 - 992 4 0 102400 - 1 0 10240 - 993 4 0 102400 - 1 0 10240 - 994 4 0 102400 - 1 0 10240 - 995 4 0 102400 - 1 0 10240 - 996 4 0 102400 - 1 0 10240 - 997 4 0 102400 - 1 0 10240 - 998 4 0 102400 - 1 0 10240 - polkitd 4 0 102400 - 1 0 10240 - green 4 0 102400 - 1 0 10240 - time=0, rate=991/0 951 4 0 204800 - 1 0 20480 - 952 4 0 204800 - 1 0 20480 - 953 4 0 204800 - 1 0 20480 - 954 4 0 204800 - 1 0 20480 - 955 4 0 204800 - 1 0 20480 - 956 4 0 204800 - 1 0 20480 - 957 4 0 204800 - 1 0 20480 - 958 4 0 204800 - 1 0 20480 - 959 4 0 204800 - 1 0 20480 - 960 4 0 204800 - 1 0 20480 - 961 4 0 204800 - 1 0 20480 - 962 4 0 204800 - 1 0 20480 - 963 4 0 204800 - 1 0 20480 - 964 4 0 204800 - 1 0 20480 - 965 4 0 204800 - 1 0 20480 - 966 4 0 204800 - 1 0 20480 - 967 4 0 204800 - 1 0 20480 - 968 4 0 204800 - 1 0 20480 - 969 4 0 204800 - 1 0 20480 - 970 4 0 204800 - 1 0 20480 - 971 4 0 204800 - 1 0 20480 - 972 4 0 204800 - 1 0 20480 - 973 4 0 204800 - 1 0 20480 - 974 4 0 204800 - 1 0 20480 - 975 4 0 204800 - 1 0 20480 - 976 4 0 204800 - 1 0 20480 - 977 4 0 204800 - 1 0 20480 - 978 4 0 204800 - 1 0 20480 - 979 4 0 204800 - 1 0 20480 - 980 4 0 204800 - 1 0 20480 - 981 4 0 204800 - 1 0 20480 - 982 4 0 204800 - 1 0 20480 - 983 4 0 204800 - 1 0 20480 - 984 4 0 204800 - 1 0 20480 - 985 4 0 204800 - 1 0 20480 - 986 4 0 204800 - 1 0 20480 - 987 4 0 204800 - 1 0 20480 - 988 4 0 204800 - 1 0 20480 - 989 4 0 204800 - 1 0 20480 - 990 4 0 204800 - 1 0 20480 - 991 4 0 204800 - 1 0 20480 - 992 4 0 204800 - 1 0 20480 - 993 4 0 204800 - 1 0 20480 - 994 4 0 204800 - 1 0 20480 - systemd-network 4 0 204800 - 1 0 20480 - systemd-bus-proxy 4 0 204800 - 1 0 20480 - input 4 0 204800 - 1 0 20480 - polkitd 4 0 204800 - 1 0 20480 - ssh_keys 4 0 204800 - 1 0 20480 - green 4 0 204800 - 1 0 20480 - time=0, rate=991/0 - unlinked 0 (time 1713428150 ; total 0 ; last 0) total: 991 unlinks in 3 seconds: 330.333344 unlinks/second Create 991 files... 
- open/close 833 (time 1713428172.40 total 10.01 last 83.24) total: 991 open/close in 11.59 seconds: 85.47 ops/second 951 4 0 102400 - 1 0 10240 - 952 4 0 102400 - 1 0 10240 - 953 4 0 102400 - 1 0 10240 - 954 4 0 102400 - 1 0 10240 - 955 4 0 102400 - 1 0 10240 - 956 4 0 102400 - 1 0 10240 - 957 4 0 102400 - 1 0 10240 - 958 4 0 102400 - 1 0 10240 - 959 4 0 102400 - 1 0 10240 - 960 4 0 102400 - 1 0 10240 - 961 4 0 102400 - 1 0 10240 - 962 4 0 102400 - 1 0 10240 - 963 4 0 102400 - 1 0 10240 - 964 4 0 102400 - 1 0 10240 - 965 4 0 102400 - 1 0 10240 - 966 4 0 102400 - 1 0 10240 - 967 4 0 102400 - 1 0 10240 - 968 4 0 102400 - 1 0 10240 - 969 4 0 102400 - 1 0 10240 - 970 4 0 102400 - 1 0 10240 - 971 4 0 102400 - 1 0 10240 - 972 4 0 102400 - 1 0 10240 - 973 4 0 102400 - 1 0 10240 - 974 4 0 102400 - 1 0 10240 - 975 4 0 102400 - 1 0 10240 - 976 4 0 102400 - 1 0 10240 - 977 4 0 102400 - 1 0 10240 - 978 4 0 102400 - 1 0 10240 - 979 4 0 102400 - 1 0 10240 - 980 4 0 102400 - 1 0 10240 - 981 4 0 102400 - 1 0 10240 - 982 4 0 102400 - 1 0 10240 - 983 4 0 102400 - 1 0 10240 - 984 4 0 102400 - 1 0 10240 - 985 4 0 102400 - 1 0 10240 - 986 4 0 102400 - 1 0 10240 - 987 4 0 102400 - 1 0 10240 - 988 4 0 102400 - 1 0 10240 - 989 4 0 102400 - 1 0 10240 - 990 4 0 102400 - 1 0 10240 - 991 4 0 102400 - 1 0 10240 - 992 4 0 102400 - 1 0 10240 - 993 4 0 102400 - 1 0 10240 - 994 4 0 102400 - 1 0 10240 - 995 4 0 102400 - 1 0 10240 - 996 4 0 102400 - 1 0 10240 - 997 4 0 102400 - 1 0 10240 - 998 4 0 102400 - 1 0 10240 - polkitd 4 0 102400 - 1 0 10240 - green 4 0 102400 - 1 0 10240 - time=0, rate=991/0 951 4 0 204800 - 1 0 20480 - 952 4 0 204800 - 1 0 20480 - 953 4 0 204800 - 1 0 20480 - 954 4 0 204800 - 1 0 20480 - 955 4 0 204800 - 1 0 20480 - 956 4 0 204800 - 1 0 20480 - 957 4 0 204800 - 1 0 20480 - 958 4 0 204800 - 1 0 20480 - 959 4 0 204800 - 1 0 20480 - 960 4 0 204800 - 1 0 20480 - 961 4 0 204800 - 1 0 20480 - 962 4 0 204800 - 1 0 20480 - 963 4 0 204800 - 1 0 20480 - 964 4 0 204800 - 1 0 20480 - 965 4 0 204800 - 1 0 20480 - 966 4 0 204800 - 1 0 20480 - 967 4 0 204800 - 1 0 20480 - 968 4 0 204800 - 1 0 20480 - 969 4 0 204800 - 1 0 20480 - 970 4 0 204800 - 1 0 20480 - 971 4 0 204800 - 1 0 20480 - 972 4 0 204800 - 1 0 20480 - 973 4 0 204800 - 1 0 20480 - 974 4 0 204800 - 1 0 20480 - 975 4 0 204800 - 1 0 20480 - 976 4 0 204800 - 1 0 20480 - 977 4 0 204800 - 1 0 20480 - 978 4 0 204800 - 1 0 20480 - 979 4 0 204800 - 1 0 20480 - 980 4 0 204800 - 1 0 20480 - 981 4 0 204800 - 1 0 20480 - 982 4 0 204800 - 1 0 20480 - 983 4 0 204800 - 1 0 20480 - 984 4 0 204800 - 1 0 20480 - 985 4 0 204800 - 1 0 20480 - 986 4 0 204800 - 1 0 20480 - 987 4 0 204800 - 1 0 20480 - 988 4 0 204800 - 1 0 20480 - 989 4 0 204800 - 1 0 20480 - 990 4 0 204800 - 1 0 20480 - 991 4 0 204800 - 1 0 20480 - 992 4 0 204800 - 1 0 20480 - 993 4 0 204800 - 1 0 20480 - 994 4 0 204800 - 1 0 20480 - systemd-network 4 0 204800 - 1 0 20480 - systemd-bus-proxy 4 0 204800 - 1 0 20480 - input 4 0 204800 - 1 0 20480 - polkitd 4 0 204800 - 1 0 20480 - ssh_keys 4 0 204800 - 1 0 20480 - green 4 0 204800 - 1 0 20480 - time=0, rate=991/0 - unlinked 0 (time 1713428185 ; total 0 ; last 0) total: 991 unlinks in 3 seconds: 330.333344 unlinks/second fail_loc=0xa08 fail_loc=0 Stopping clients: oleg452-client.virtnet /mnt/lustre (opts:-f) Stopping client oleg452-client.virtnet /mnt/lustre opts:-f Stopping clients: oleg452-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-mds1 (opts:-f) on oleg452-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg452-server Stopping 
/mnt/lustre-ost1 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg452-server oleg452-server: oleg452-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg452-server' oleg452-server: oleg452-server.virtnet: executing load_modules_local oleg452-server: Loading modules from /home/green/git/lustre-release/lustre oleg452-server: detected 4 online CPUs by sysfs oleg452-server: Force libcfs to create 2 CPU partitions oleg452-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey Checking servers environments Checking clients oleg452-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg452-server' oleg452-server: oleg452-server.virtnet: executing load_modules_local oleg452-server: Loading modules from /home/green/git/lustre-release/lustre oleg452-server: detected 4 online CPUs by sysfs oleg452-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 Starting client: oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Starting client oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Started clients oleg452-client.virtnet: 192.168.204.152@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800b5c45800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800b5c45800.idle_timeout=debug Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
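Test 49 sets limits for a large block of IDs and then relies on a single 'lfs quota -a' pass to list them all, which is what the timed 'lfs setquota: 1000 / 38 seconds' and 'get all usr quota: 1000 / 0 seconds' lines above measure. A hedged sketch of the same flow as exercised by this test; the ID range and limits are illustrative:

  # one setquota call per uid: 100 MB block hard limit, 10240-inode hard limit
  for uid in $(seq 903 998); do
      lfs setquota -u "$uid" -B 100M -I 10240 /mnt/lustre
  done
  # a single call then lists usage and limits for every user ID known to the filesystem
  lfs quota -a -u /mnt/lustre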
Waiting for MDT destroys to complete PASS 49 (228s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 50: Test if lfs find --projid works ========================================================== 04:18:26 (1713428306) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d50.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d50.sanity-quota/dir2 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 50 (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 51: Test project accounting with mv/cp ========================================================== 04:18:40 (1713428320) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d51.sanity-quota/dir 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.00987773 s, 106 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 51 (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 52: Rename normal file across project ID ========================================================== 04:19:02 (1713428342) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 0.61578 s, 170 MB/s Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102404 0 0 - 2 0 0 - Disk quotas for prj 1001 (pid 1001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4 0 0 - 1 0 0 - pid 1001 is using default block quota setting pid 1001 is using default file quota setting rename '/mnt/lustre/d52.sanity-quota/t52_dir1' returned -1: Invalid cross-device link rename directory return 255 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4 0 0 - 1 0 0 - Disk quotas for prj 1001 (pid 1001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102404 0 0 - 2 0 0 - pid 1001 is using default block quota setting pid 1001 is using default file quota setting Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 52 (21s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 53: Project inherit attribute could be cleared ========================================================== 04:19:26 (1713428366) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -s /mnt/lustre/d53.sanity-quota/dir lfs project -C /mnt/lustre/d53.sanity-quota/dir Delete files... Wait for unlink objects finished... 
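Test 53 only toggles the inherit flag on a directory, leaving usage untouched. A minimal sketch, assuming current lfs semantics where -C without -k also resets the ID:

  lfs project -s /mnt/lustre/dir    # set the project-inherit flag on the directory
  lfs project -d /mnt/lustre/dir    # -d prints the directory's own ID, with 'P' while inherit is set
  lfs project -C /mnt/lustre/dir    # clear the flag (and, without -k, reset the ID to 0)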
Waiting for MDT destroys to complete PASS 53 (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 54: basic lfs project interface test ========================================================== 04:19:34 (1713428374) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1000 /mnt/lustre/d54.sanity-quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d54.sanity-quota/f54.sanity-quota-0] [100] total: 100 create in 0.16 seconds: 629.45 ops/second lfs project -rCk /mnt/lustre/d54.sanity-quota lfs project -rC /mnt/lustre/d54.sanity-quota - unlinked 0 (time 1713428379 ; total 0 ; last 0) total: 100 unlinks in 0 seconds: inf unlinks/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 54 (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 55: Chgrp should be affected by group quota ========================================================== 04:19:46 (1713428386) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d55.sanity-quota/f55.sanity-quota] [bs=1024] [count=100000] 100000+0 records in 100000+0 records out 102400000 bytes (102 MB) copied, 13.3464 s, 7.7 MB/s Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 51200 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60001/60000/60001, groups: [chgrp] [quota_2usr] [/mnt/lustre/d55.sanity-quota/f55.sanity-quota] chgrp: changing group of '/mnt/lustre/d55.sanity-quota/f55.sanity-quota': Disk quota exceeded 0 Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 307200 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60001/60000/60001, groups: [chgrp] [quota_2usr] [/mnt/lustre/d55.sanity-quota/f55.sanity-quota] Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 307200 - 1 0 0 - lustre-MDT0000_UUID 0 - 114688 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 55 (31s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 56: lfs quota -t should work well === 04:20:19 (1713428419) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... 
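Test 56 just needs 'lfs quota -t' to print well-formed grace times. A sketch of reading and setting them; the one-week value and the long option names are assumptions about the lfs version in use:

  lfs quota -t -u /mnt/lustre    # show the block and inode grace periods for user quotas
  # give users a week to get back under their soft limits
  lfs setquota -t -u --block-grace 604800 --inode-grace 604800 /mnt/lustre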
Waiting for MDT destroys to complete PASS 56 (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 57: lfs project could tolerate errors ========================================================== 04:20:29 (1713428429) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 57 (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 58: project ID should be kept for new mirrors created by FID ========================================================== 04:20:46 (1713428446) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] test by mirror created with normal file running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=50] [conv=nocreat] [oflag=direct] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.71369 s, 30.6 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=30] [conv=nocreat] [seek=50] [oflag=direct] 30+0 records in 30+0 records out 31457280 bytes (31 MB) copied, 1.09166 s, 28.8 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror mirror: component 131073 not synced: Disk quota exceeded (122) lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror: '/mnt/lustre/d58.sanity-quota/f58.sanity-quota' llapi_mirror_resync_many: Disk quota exceeded. 
lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) Waiting for MDT destroys to complete test by mirror created with FID running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=50] [conv=nocreat] [oflag=direct] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.07873 s, 48.6 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=30] [conv=nocreat] [seek=50] [oflag=direct] 30+0 records in 30+0 records out 31457280 bytes (31 MB) copied, 0.633854 s, 49.6 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror mirror: component 131073 not synced: Disk quota exceeded (122) lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror: '/mnt/lustre/d58.sanity-quota/f58.sanity-quota' llapi_mirror_resync_many: Disk quota exceeded. lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 58 (47s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity-quota test 59: lfs project doesn't crash kernel with project disabled ========================================================== 04:21:35 (1713428495) Stopping clients: oleg452-client.virtnet /mnt/lustre (opts:) Stopping client oleg452-client.virtnet /mnt/lustre opts: Stopping clients: oleg452-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg452-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg452-server tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) Checking servers environments Checking clients oleg452-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg452-server' oleg452-server:
oleg452-server.virtnet: executing load_modules_local oleg452-server: Loading modules from /home/green/git/lustre-release/lustre oleg452-server: detected 4 online CPUs by sysfs oleg452-server: Force libcfs to create 2 CPU partitions oleg452-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Starting client oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Started clients oleg452-client.virtnet: 192.168.204.152@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a81aa800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a81aa800.idle_timeout=debug Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs: failed to set xattr for '/mnt/lustre/d59.sanity-quota/f59.sanity-quota-0': Operation not supported
Stopping clients: oleg452-client.virtnet /mnt/lustre (opts:) Stopping client oleg452-client.virtnet /mnt/lustre opts: Stopping clients: oleg452-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg452-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg452-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg452-server tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022)
Checking servers environments Checking clients oleg452-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory loading modules on: 'oleg452-server' oleg452-server: oleg452-server.virtnet: executing load_modules_local oleg452-server: Loading modules from /home/green/git/lustre-release/lustre oleg452-server: detected 4 online CPUs by sysfs oleg452-server: Force libcfs to create 2 CPU partitions oleg452-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Starting client oleg452-client.virtnet: -o user_xattr,flock oleg452-server@tcp:/lustre /mnt/lustre Started clients oleg452-client.virtnet: 192.168.204.152@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800b5c44800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800b5c44800.idle_timeout=debug Delete files... Wait for unlink objects finished...
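The 'Operation not supported' failure above is the expected outcome of test 59: with the ldiskfs project feature removed, 'lfs project' must fail cleanly rather than crash the kernel. The tune2fs runs in the log are the feature being toggled; a hedged sketch of that server-side step, device path illustrative:

  # on the server, with the target unmounted: drop the project-quota feature bit
  tune2fs -O ^project /dev/mapper/mds1_flakey
  # after remounting, any attempt to tag a file should fail gracefully
  lfs project -sp 1000 /mnt/lustre/dir    # expected: Operation not supported
  # put the feature back before the next test
  tune2fs -O project /dev/mapper/mds1_flakey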
Waiting for MDT destroys to complete PASS 59 (143s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 60: Test quota for root with setgid ========================================================== 04:23:59 (1713428639) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' lfs setquota: warning: inode hardlimit '100' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 100 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d60.sanity-quota/f60.sanity-quota] [99] total: 99 create in 0.21 seconds: 470.15 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d60.sanity-quota/foo] touch: cannot touch '/mnt/lustre/d60.sanity-quota/foo': Disk quota exceeded running as uid/gid/euid/egid 0/0/0/0, groups: [touch] [/mnt/lustre/d60.sanity-quota/foo] Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 60 (17s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_61 skipping SLOW test 61 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 62: Project inherit should be only changed by root ========================================================== 04:24:19 (1713428659) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [mkdir] [-p] [/mnt/lustre/d62.sanity-quota/] lfs project -s /mnt/lustre/d62.sanity-quota/ running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [chattr] [-P] [/mnt/lustre/d62.sanity-quota/] chattr: Operation not permitted while setting flags on /mnt/lustre/d62.sanity-quota/ Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 62 (7s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_63 skipping excluded test 63 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 64: lfs project on non dir/files should succeed ========================================================== 04:24:29 (1713428669) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... 
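Test 62 above relies on the kernel refusing flag changes from unprivileged users: 'lfs project -s' and 'chattr +P/-P' manipulate the same project-inherit attribute, and clearing it is a root-only operation by default. A sketch, with quota_usr standing in for any non-root user:

  lfs project -s /mnt/lustre/dir                 # root sets the inherit flag
  su quota_usr -c 'chattr -P /mnt/lustre/dir'    # non-root clear attempt: Operation not permitted
  chattr -P /mnt/lustre/dir                      # the same clear succeeds for root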
Waiting for MDT destroys to complete PASS 64 (14s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_65 skipping excluded test 65 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 66: nonroot user can not change project state in default ========================================================== 04:24:45 (1713428685) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 mdt.lustre-MDT0000.enable_chprojid_gid=0 mdt.lustre-MDT0001.enable_chprojid_gid=0 lfs project -sp 1000 /mnt/lustre/d66.sanity-quota/foo running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [mkdir] [/mnt/lustre/d66.sanity-quota/foo/foo] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [0] [/mnt/lustre/d66.sanity-quota/foo/foo] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/foo': Operation not permitted running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-C] [/mnt/lustre/d66.sanity-quota/foo/foo] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/foo': Operation not permitted lfs project -C /mnt/lustre/d66.sanity-quota/foo/foo mdt.lustre-MDT0000.enable_chprojid_gid=-1 mdt.lustre-MDT0001.enable_chprojid_gid=-1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [1000] [/mnt/lustre/d66.sanity-quota/foo/foo] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-rC] [/mnt/lustre/d66.sanity-quota/foo/] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [1000] [/mnt/lustre/d66.sanity-quota/foo/bar] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/bar': Operation not permitted lfs project -p 1000 /mnt/lustre/d66.sanity-quota/foo/bar mdt.lustre-MDT0000.enable_chprojid_gid=0 mdt.lustre-MDT0001.enable_chprojid_gid=0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 66 (17s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 67: quota pools recalculation ======= 04:25:04 (1713428704) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:20 MB) granted 0x0 before write 0 osd-ldiskfs.lustre-OST0000.quota_slave.force_reint=1 osd-ldiskfs.lustre-OST0001.quota_slave.force_reint=1 affected facets: ost1 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg452-server: *.lustre-OST0000.recovery_status status: INACTIVE affected facets: ost2 oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE file /mnt/lustre/d67.sanity-quota/f67.sanity-quota-0 0 /home/green/git/lustre-release/lustre/tests/sanity-quota.sh 1 /mnt/lustre/d67.sanity-quota/f67.sanity-quota-0 2 user 3 10 4 quota_usr Write... 
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 67: quota pools recalculation ======= 04:25:04 (1713428704)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
granted 0x0 before write 0
osd-ldiskfs.lustre-OST0000.quota_slave.force_reint=1
osd-ldiskfs.lustre-OST0001.quota_slave.force_reint=1
affected facets: ost1
oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475
oleg452-server: *.lustre-OST0000.recovery_status status: INACTIVE
affected facets: ost2
oleg452-server: oleg452-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475
oleg452-server: *.lustre-OST0001.recovery_status status: INACTIVE
file /mnt/lustre/d67.sanity-quota/f67.sanity-quota-0
0 /home/green/git/lustre-release/lustre/tests/sanity-quota.sh
1 /mnt/lustre/d67.sanity-quota/f67.sanity-quota-0
2 user
3 10
4 quota_usr
Write...
Thu Apr 18 04:25:14 EDT 2024
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d67.sanity-quota/f67.sanity-quota-0] [count=10]
10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.0853837 s, 123 MB/s
Thu Apr 18 04:25:14 EDT 2024
Thu Apr 18 04:25:14 EDT 2024
Thu Apr 18 04:25:15 EDT 2024
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
global granted 11264
qpool1 granted 0
Adding targets to pool
oleg452-server: pool_add: lustre-OST0001_UUID is already in pool lustre.qpool1
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 17
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
Updated after 2s: want 'lustre-OST0000_UUID lustre-OST0001_UUID ' got 'lustre-OST0000_UUID lustre-OST0001_UUID '
Granted 11 MB
file /mnt/lustre/d67.sanity-quota/f67.sanity-quota-1
0 /home/green/git/lustre-release/lustre/tests/sanity-quota.sh
1 /mnt/lustre/d67.sanity-quota/f67.sanity-quota-1
2 user
3 10
4 quota_2usr
Write...
Thu Apr 18 04:25:26 EDT 2024
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d67.sanity-quota/f67.sanity-quota-1] [count=10]
10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.0855872 s, 123 MB/s
Thu Apr 18 04:25:26 EDT 2024
Thu Apr 18 04:25:26 EDT 2024
Thu Apr 18 04:25:27 EDT 2024
granted_mb 10
file /mnt/lustre/d67.sanity-quota/f67.sanity-quota-2
0 /home/green/git/lustre-release/lustre/tests/sanity-quota.sh
1 /mnt/lustre/d67.sanity-quota/f67.sanity-quota-2
2 user
3 10
4 quota_2usr
Write...
Thu Apr 18 04:25:29 EDT 2024
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d67.sanity-quota/f67.sanity-quota-2] [count=10]
10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.0737867 s, 142 MB/s
Thu Apr 18 04:25:29 EDT 2024
Thu Apr 18 04:25:30 EDT 2024
Thu Apr 18 04:25:31 EDT 2024
/mnt/lustre/d67.sanity-quota/f67.sanity-quota-2
granted_mb 20
Removing lustre-OST0000_UUID from qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 67 (63s)
debug_raw_pointers=0
debug_raw_pointers=0
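Quota pools are ordinary OST pools managed on the server with lctl; the create/add/remove/destroy cycle seen throughout this run looks roughly like:

    lctl pool_new lustre.qpool1                    # create the pool
    lctl pool_add lustre.qpool1 lustre-OST0001     # add an OST (exit code 17 above = already a member)
    lctl pool_remove lustre.qpool1 lustre-OST0001  # drop a member
    lctl pool_destroy lustre.qpool1                # the run empties a pool before destroying it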
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 68: slave number in quota pool changes after each add/remove OST ========================================================== 04:26:10 (1713428770)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
nr result 4
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Adding targets to pool
oleg452-server: pool_add: lustre-OST0001_UUID is already in pool lustre.qpool1
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 17
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
Removing lustre-OST0000_UUID from qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1
Removing lustre-OST0001_UUID from qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 68 (29s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 69: EDQUOT in one of the pools shouldn't affect DOM ========================================================== 04:26:40 (1713428800)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Waiting 90s for 'ugp'
Updated after 2s: want 'ugp' got 'ugp'
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
User quota (block hardlimit:200 MB)
User quota (block hardlimit:10 MB)
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [oflag=sync]
512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 1.29769 s, 404 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [seek=512] [oflag=sync]
512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 2.23684 s, 234 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0] [count=10]
10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.0974418 s, 108 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0] [count=10] [seek=10]
dd: error writing '/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0261233 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [oflag=sync]
512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 2.03056 s, 258 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [seek=512] [oflag=sync]
512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 2.1654 s, 242 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 69 (37s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 70a: check lfs setquota/quota with a pool option ========================================================== 04:27:19 (1713428839)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
hard limit 20480
limit 20
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0   20480       -       0       0       0       -
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 70a (15s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 70b: lfs setquota pool works properly ========================================================== 04:27:36 (1713428856)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
PASS 70b (16s)
debug_raw_pointers=0
debug_raw_pointers=0
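Tests 70a/70b exercise the per-pool limit options of lfs; the 20480-kbyte limit shown above would be set and read back with something like (sketch):

    lfs setquota -u quota_usr -B 20M --pool qpool1 /mnt/lustre   # pool block hard limit
    lfs quota -u quota_usr --pool qpool1 /mnt/lustre             # report against that pool only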
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 71a: Check PFL with quota pools ===== 04:27:54 (1713428874)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:100 MB)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Waiting 90s for ''
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg452-server: Pool lustre.qpool2 created
Adding targets to pool
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=10]
10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.0716359 s, 146 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=10] [seek=10]
dd: error writing '/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0': Disk quota exceeded
8+0 records in 7+0 records out 8343552 bytes (8.3 MB) copied, 0.0630386 s, 132 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=1] [seek=20]
dd: error writing '/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0183139 s, 0.0 kB/s
Waiting for MDT destroys to complete
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=10] [seek=10]
10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.0636038 s, 165 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=20] [seek=10]
20+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 0.120185 s, 174 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=1] [seek=30]
dd: error writing '/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0': No data available
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00304196 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=10] [seek=0]
10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.0691952 s, 152 MB/s
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg452-server: Pool lustre.qpool2 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 71a (53s)
debug_raw_pointers=0
debug_raw_pointers=0
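Test 71a writes through a composite (PFL) file whose components sit in different pools, so each extent is charged against a different pool limit. The layout itself is not shown in the log; a hypothetical equivalent would be:

    # first 10 MiB in qpool1, remainder in qpool2 (boundaries illustrative)
    lfs setstripe -E 10M -p qpool1 -E -1 -p qpool2 /mnt/lustre/d71a.sanity-quota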
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 71b: Check SEL with quota pools ===== 04:28:48 (1713428928)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:1000 MB)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg452-server: Pool lustre.qpool2 created
Waiting 90s for ''
Adding targets to pool
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
used 0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0] [count=128]
128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 0.823902 s, 163 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0] [count=5] [seek=128]
5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.0527589 s, 99.4 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0] [count=5] [seek=133]
5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.0422408 s, 124 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0] [count=2] [seek=138]
dd: error writing '/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0123898 s, 0.0 kB/s
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg452-server: Pool lustre.qpool2 destroyed
Waiting 90s for 'foo'
Updated after 2s: want 'foo' got 'foo'
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 71b (39s)
debug_raw_pointers=0
debug_raw_pointers=0
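Test 71b repeats the check with a self-extending layout (SEL), where a component grows in fixed-size extensions instead of being instantiated up front. A hypothetical layout of that kind:

    # a component extending in 64 MiB steps up to 1 GiB, then a conventional tail
    lfs setstripe -E 1G -z 64M -p qpool1 -E -1 -p qpool2 /mnt/lustre/d71b.sanity-quota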
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 72: lfs quota --pool prints only pool's OSTs ========================================================== 04:29:28 (1713428968)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:50 MB)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=5]
5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.0399978 s, 131 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=5] [seek=5]
5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.0305285 s, 172 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0146478 s, 0.0 kB/s
used 10240
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 72 (29s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 73a: default limits at OST Pool Quotas ========================================================== 04:29:59 (1713428999)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
LIMIT=20480 TESTFILE=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0 qdtype=-U qh=-B qid=quota_usr qprjid=1000 qres_type=data qs=-b qtype=-u
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
set to use default quota
lfs setquota: '-d' deprecated, use '-D' or '--default'
set default quota
get default quota
Disk default usr quota:
     Filesystem  bquota  blimit  bgrace  iquota  ilimit  igrace
    /mnt/lustre       0       0      10       0       0      10
Test not out of quota
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=10] [oflag=sync]
10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.564076 s, 18.6 MB/s
Test out of quota
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync]
dd: error writing '/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0': Disk quota exceeded
21+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 1.16548 s, 18.0 MB/s
Increase default quota
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync]
40+0 records in 40+0 records out 41943040 bytes (42 MB) copied, 2.20126 s, 19.1 MB/s
Set quota to override default quota
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync]
dd: error writing '/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0': Disk quota exceeded
21+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 1.18409 s, 17.7 MB/s
Set to use default quota again
lfs setquota: '-d' deprecated, use '-D' or '--default'
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync]
40+0 records in 40+0 records out 41943040 bytes (42 MB) copied, 2.50998 s, 16.7 MB/s
Cleanup
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
PASS 73a (55s)
debug_raw_pointers=0
debug_raw_pointers=0
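The deprecation warnings above come from the old '-d' spelling of the default-quota option; the forms the tool points to look like this (sketch, limit values illustrative):

    lfs setquota -U -B 20M /mnt/lustre         # filesystem-wide default user block limit
    lfs setquota -u quota_usr -D /mnt/lustre   # put one user back onto the defaults
    lfs quota -U /mnt/lustre                   # show the defaults (the 'Disk default usr quota' block above)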
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 73b: default OST Pool Quotas limit for new user ========================================================== 04:30:56 (1713429056)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
set default quota for qpool1
Write from user that has no lqe
running as uid/gid/euid/egid 500/500/500/500, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73b.sanity-quota/f73b.sanity-quota-1] [count=10]
10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.325036 s, 32.3 MB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 73b (32s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 74: check quota pools per user ====== 04:31:29 (1713429089)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg452-server: Pool lustre.qpool2 created
Adding targets to pool
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
Waiting 90s for 'lustre-OST0001_UUID '
pool limit for qpool1 10240
pool limit for qpool2 51200
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg452-server: Pool lustre.qpool2 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 74 (32s)
debug_raw_pointers=0
debug_raw_pointers=0
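The 'Waiting 90s for ...' lines are the framework polling the client until an MGS-side pool change propagates; the same check can be done by hand roughly as:

    # poll until the client sees the expected pool membership
    until lctl pool_list lustre.qpool1 | grep -q lustre-OST0001_UUID; do
            sleep 1
    done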
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 75: nodemap squashed root respects quota enforcement ========================================================== 04:32:02 (1713429122)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
On MGS 192.168.204.152, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.204.152, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.204.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.204.152, default.squash_uid = nodemap.default.squash_uid=60000
waiting 10 secs for sync
10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.296184 s, 35.4 MB/s
Write to exceed soft limit
10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.0159205 s, 643 kB/s
mmap write when over soft limit
Waiting for MDT destroys to complete
Write...
10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.294758 s, 35.6 MB/s
Write out of block quota ...
10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.322797 s, 32.5 MB/s
dd: error writing '/mnt/lustre/d75.sanity-quota/f75.sanity-quota-0': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0512946 s, 0.0 kB/s
Waiting for MDT destroys to complete
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0493435 s, 21.3 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0490067 s, 21.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0501514 s, 20.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0512088 s, 20.5 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0464386 s, 22.6 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0471378 s, 22.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0483147 s, 21.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0479144 s, 21.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0478056 s, 21.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0438767 s, 23.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0412989 s, 25.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0443716 s, 23.6 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0436904 s, 24.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.043727 s, 24.0 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0513411 s, 20.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0528511 s, 19.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0576711 s, 18.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0549268 s, 19.1 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0501932 s, 20.9 MB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-19': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0425965 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-20': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0416413 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-21': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0419945 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-22': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0437674 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-23': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0479546 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-24': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0436617 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-25': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0416608 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-26': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0409523 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-27': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0388338 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-28': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0367514 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-29': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0364438 s, 0.0 kB/s
9+0 records in 9+0 records out 9437184 bytes (9.4 MB) copied, 0.311725 s, 30.3 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0417026 s, 25.1 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0470314 s, 22.3 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0448548 s, 23.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0475417 s, 22.1 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0446536 s, 23.5 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0461185 s, 22.7 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0504313 s, 20.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0500535 s, 20.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0490789 s, 21.4 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.046385 s, 22.6 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0473204 s, 22.2 MB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-11': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0454981 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-12': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0458957 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-13': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0406584 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-14': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0419305 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-15': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0425802 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-16': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.039013 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-17': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0429212 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-18': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0399724 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-19': Disk quota exceeded
1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0408944 s, 0.0 kB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0438384 s, 23.9 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0511577 s, 20.5 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0452712 s, 23.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0504064 s, 20.8 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0509257 s, 20.6 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0520187 s, 20.2 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0485652 s, 21.6 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0486666 s, 21.5 MB/s
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0537617 s, 19.5 MB/s
dd: error writing '/mnt/lustre/d75.sanity-quota/file': Disk quota exceeded
10+0 records in 9+0 records out 9437184 bytes (9.4 MB) copied, 0.220887 s, 42.7 MB/s
On MGS 192.168.204.152, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.204.152, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.204.152, active = nodemap.active=0
waiting 10 secs for sync
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 75 (130s)
debug_raw_pointers=0
debug_raw_pointers=0
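The nodemap setup for test 75 (squash root to uid 60000, drop admin/trusted on the default nodemap) corresponds to lctl nodemap commands along these lines; the 'waiting 10 secs for sync' lines are the MGS change propagating to the servers:

    lctl nodemap_activate 1
    lctl nodemap_modify --name default --property admin --value 0
    lctl nodemap_modify --name default --property trusted --value 0
    lctl nodemap_modify --name default --property squash_uid --value 60000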
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 76: project ID 4294967295 should not be allowed ========================================================== 04:34:14 (1713429254)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Invalid project ID: 4294967295
Change or list project attribute for specified file or directory.
usage: project [-d|-r] list project ID and flags on file(s) or directories
       project [-p id] [-s] [-r] set project ID and/or inherit flag for specified file(s) or directories
       project -c [-d|-r [-p id] [-0]] check project ID and flags on file(s) or directories, print outliers
       project -C [-d|-r] [-k] clear the project inherit flag and ID on the file or directory
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 76 (14s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 77: lfs setquota should fail in Lustre mount with 'ro' ========================================================== 04:34:29 (1713429269)
Starting client: oleg452-client.virtnet: -o ro oleg452-server@tcp:/lustre /mnt/lustre2
lfs setquota: quotactl failed: Read-only file system
setquota failed: Read-only file system
PASS 77 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 78A: Check fallocate increase quota usage ========================================================== 04:34:33 (1713429273)
keep default fallocate mode: 0
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [fallocate] [-l] [204800] [/mnt/lustre/d78A.sanity-quota/f78A.sanity-quota]
kbytes returned:204
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 78A (14s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 78a: Check fallocate increase projectid usage ========================================================== 04:34:49 (1713429289)
keep default fallocate mode: 0
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
lfs project -sp 5200 /mnt/lustre/d78a.sanity-quota
kbytes returned:204
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 78a (13s)
debug_raw_pointers=0
debug_raw_pointers=0
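The fallocate checks confirm that preallocated blocks are charged to quota immediately: the 204800-byte preallocation comes back as 204 kbytes of accounted usage. The pattern, in outline:

    fallocate -l 204800 /mnt/lustre/d78A.sanity-quota/f78A.sanity-quota
    lfs quota -u quota_usr /mnt/lustre    # the kbytes column should now show ~204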
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 79: access to non-existent dt-pool/info doesn't cause a panic ========================================================== 04:35:04 (1713429304)
/tmp/f79.sanity-quota
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
PASS 79 (8s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 80: check for EDQUOT after OST failover ========================================================== 04:35:14 (1713429314)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
/mnt/lustre/d80.sanity-quota/dir1
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: 1
/mnt/lustre/d80.sanity-quota/dir2
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: 0
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       8       0  102400       -       2       0       0       -
lustre-MDT0000_UUID       8       -   16384       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
fail_loc=0xa06
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir2/f80.sanity-quota-0] [count=3]
3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 0.0476791 s, 66.0 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir1/f80.sanity-quota-2] [count=7]
7+0 records in 7+0 records out 7340032 bytes (7.3 MB) copied, 0.089195 s, 82.3 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir1/f80.sanity-quota-1] [count=1] [oflag=direct]
1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0188328 s, 55.7 MB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre  11272*       0   10240       -       5       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID    3072       -    4096       -       -       -       -       -
lustre-OST0001_UUID    8192*      -    8192       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 12288
Stopping /mnt/lustre-ost2 (opts:) on oleg452-server
fail_loc=0
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1
Started lustre-OST0001
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4104       0   10240       -       4       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID    3072       -    4096       -       -       -       -       -
lustre-OST0001_UUID    1024       -    8192       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 12288
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4104       0   10240       -       4       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID    3072       -    4096       -       -       -       -       -
lustre-OST0001_UUID    1024       -    2048       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6144
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir1/f80.sanity-quota-0] [count=2] [oflag=direct]
2+0 records in 2+0 records out 2097152 bytes (2.1 MB) copied, 0.0209405 s, 100 MB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 80 (44s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 81: Race qmt_start_pool_recalc with qmt_pool_free ========================================================== 04:35:59 (1713429359)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg452-server: Pool lustre.qpool1 created
fail_loc=0x80000A07
fail_val=10
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Stopping /mnt/lustre-mds1 (opts:-f) on oleg452-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg452-server: oleg452-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg452-client: oleg452-server: ssh exited with exit code 1
Started lustre-MDT0000
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 81 (31s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 82: verify more than 8 qids for single operation ========================================================== 04:36:32 (1713429392)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 82 (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 83: Setting default quota shouldn't affect grace time ========================================================== 04:36:39 (1713429399)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
ttt1 ttt2 ttt3 ttt4 ttt5
ttt1 ttt2 ttt3 ttt4 ttt5
ttt1 ttt2 ttt3 ttt4 ttt5
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 83 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
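Tests 80, 81 and 84 steer the servers with OBD fault injection: fail_loc selects the fault point (the 0x80000000 bit makes it one-shot) and fail_val carries an argument, e.g.:

    lctl set_param fail_val=10 fail_loc=0x80000A07   # arm the pool-recalc race used by test 81
    lctl set_param fail_val=0 fail_loc=0             # disarm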
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 84: Reset quota should fix the insane granted quota ========================================================== 04:36:45 (1713429405)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0  10485760       -       0       0       0       -
lustre-MDT0000_UUID       0       -       0       -       0       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
osd-ldiskfs.lustre-OST0000.quota_slave.force_reint=1
0
/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx	objid	objid	group
	0	130	0x82	0x280000401
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=60] [conv=nocreat] [oflag=direct]
60+0 records in 60+0 records out 62914560 bytes (63 MB) copied, 1.07518 s, 58.5 MB/s
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0  10485760       -       2       0       0       -
lustre-MDT0000_UUID       4       -  1048576       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID   61440       -  1048576       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1048576
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota    limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0  5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID   61440       -  1048576       -       -       -       -       -
lustre-OST0001_UUID       0       -        0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1048576
fail_val=0
fail_loc=0xa08
fail_val=0
fail_loc=0xa08
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0       0       -       2       0       0       -
lustre-MDT0000_UUID       4       -  18446744073707374604       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID   61440       -  18446744073707374604       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 18446744073707374604
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota    limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0  5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID   61440       -  18446744073707374604       -       -       -       -       -
lustre-OST0001_UUID       0       -        0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 18446744073707374604
fail_val=0
fail_loc=0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0       0       -       2       0       0       -
lustre-MDT0000_UUID       4       -       0       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID   61440       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota    limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0  5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID   61440       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota    limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0  5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID   61440       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota    limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0  5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID   61440       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61444       0  102400       -       2       0       0       -
lustre-MDT0000_UUID       4*      -       4       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID   61440*      -   61440       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 61440
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=200] [conv=nocreat] [oflag=direct]
dd: error writing '/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1': Disk quota exceeded
100+0 records in 99+0 records out 103809024 bytes (104 MB) copied, 2.20344 s, 47.1 MB/s
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre  101380       0  307200       -       2       0       0       -
lustre-MDT0000_UUID       4*      -       4       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID  101376       -  102396       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 102396
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=200] [conv=nocreat] [oflag=direct]
200+0 records in 200+0 records out 209715200 bytes (210 MB) copied, 4.57683 s, 45.8 MB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 84 (47s)
debug_raw_pointers=0
debug_raw_pointers=0
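Test 84 deliberately inflates the granted counters (fail_loc=0xa08) until the limit reads as the insane 18446744073707374604 above, then shows that rewriting the limit repairs the grants. The slave-side reintegration seen at the start of the test can also be forced by hand:

    # make an OST quota slave re-sync its grants with the quota master
    lctl set_param osd-ldiskfs.lustre-OST0000.quota_slave.force_reint=1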
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 85: do not hang at write with the least_qunit ========================================================== 04:37:33 (1713429453)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg452-server: Pool lustre.qpool1 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
Creating new pool
oleg452-server: Pool lustre.qpool2 created
Adding targets to pool
oleg452-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg452-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d85.sanity-quota/f85.sanity-quota-0] [count=10]
dd: error writing '/mnt/lustre/d85.sanity-quota/f85.sanity-quota-0': Disk quota exceeded
8+0 records in 7+0 records out 8368128 bytes (8.4 MB) copied, 0.298877 s, 28.0 MB/s
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg452-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
lustre.qpool2
oleg452-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg452-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg452-server: Pool lustre.qpool2 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 85 (39s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 86: Pre-acquired quota should be released if quota is over limit ========================================================== 04:38:13 (1713429493)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000]
 - create 4460 (time 1713429506.64 total 10.00 last 445.88)
total: 5000 create in 11.31 seconds: 441.92 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000]
 - create 3398 (time 1713429568.06 total 10.00 last 339.68)
total: 5000 create in 17.86 seconds: 280.03 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded
total: 0 create in 0.01 seconds: 0.00 ops/second
lfs project -sp 1000 /mnt/lustre/d86.sanity-quota
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000]
 - create 3960 (time 1713429632.70 total 10.00 last 395.97)
total: 5000 create in 13.16 seconds: 379.80 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10]
total: 10 create in 0.03 seconds: 289.55 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30]
total: 30 create in 0.12 seconds: 240.09 ops/second
sanity-quota test_86: @@@@@@ FAIL: succeeded in creating files where failure was expected
Trace dump:
= /home/green/git/lustre-release/lustre/tests/test-framework.sh:7011:error()
= /home/green/git/lustre-release/lustre/tests/sanity-quota.sh:6376:test_preacquired_quota()
= /home/green/git/lustre-release/lustre/tests/sanity-quota.sh:6400:test_86()
= /home/green/git/lustre-release/lustre/tests/test-framework.sh:7351:run_one()
= /home/green/git/lustre-release/lustre/tests/test-framework.sh:7411:run_one_logged()
= /home/green/git/lustre-release/lustre/tests/test-framework.sh:7237:run_test()
= /home/green/git/lustre-release/lustre/tests/sanity-quota.sh:6402:main()
Dumping lctl log to /tmp/testlogs//sanity-quota.test_86.*.1713429641.log
Delete files...
rsync: chown "/tmp/testlogs/.sanity-quota.test_86.debug_log.oleg452-server.1713429641.log.WuvcGA" failed: Operation not permitted (1)
rsync: chown "/tmp/testlogs/.sanity-quota.test_86.dmesg.oleg452-server.1713429641.log.9k121r" failed: Operation not permitted (1)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1651) [generator=3.1.2]
Wait for unlink objects finished...
Waiting for MDT destroys to complete
FAIL 86 (192s)
debug_raw_pointers=0
debug_raw_pointers=0
== sanity-quota test complete, duration 4676 sec ========= 04:41:28 (1713429688)
sanity-quota: FAIL: test_86 succeeded in creating files where failure was expected
=== sanity-quota: start cleanup 04:41:28 (1713429688) ===
=== sanity-quota: finish cleanup 04:41:28 (1713429688) ===
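For reference, the check that fails above drives file creation with the suite's createmany helper against an inode limit, expecting the batch after the limit to stop with EDQUOT once pre-acquired inode quota has been released. In outline (limits illustrative; runas is the suite's identity wrapper):

    lfs setquota -u quota_usr -I 5000 /mnt/lustre              # inode hard limit
    runas -u 60000 createmany -m /mnt/lustre/dir/tfile- 5000   # fills the limit
    runas -u 60000 createmany -m /mnt/lustre/dir/tfile2- 10    # expected to fail with EDQUOT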