-----============= acceptance-small: sanity-quota ============----- Thu Apr 18 04:40:34 EDT 2024
excepting tests: 2 4a 63 65
skipping tests SLOW=no: 61
oleg432-server: debugfs 1.46.2.wc5 (26-Mar-2022)
pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1
=== sanity-quota: start setup 04:40:39 (1713429639) ===
oleg432-client.virtnet: executing check_config_client /mnt/lustre
oleg432-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg432-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800ae4c2000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800ae4c2000.idle_timeout=debug
oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
osd-ldiskfs.track_declares_assert=1
=== sanity-quota: finish setup 04:40:47 (1713429647) ===
using SAVE_PROJECT_SUPPORTED=0
oleg432-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg432-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg432-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg432-server: debugfs 1.46.2.wc5 (26-Mar-2022)
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [true]
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d0_runas_test/f7516]
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [true]
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [touch] [/mnt/lustre/d0_runas_test/f7516]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 0: Test basic quota performance ===== 04:41:01 (1713429661)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/f0.sanity-quota-0] [count=10] [conv=fsync]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.387987 s, 27.0 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/f0.sanity-quota-0] [count=10] [conv=fsync]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.388315 s, 27.0 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 0 (18s)
debug_raw_pointers=0
debug_raw_pointers=0
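The hard-limit tests that follow depend on a block quota having been configured for the test user; that setup step is not echoed in this log. A minimal sketch of the usual command, with the 10 MB value taken from the test banner below (assumed, not captured in the log):

    # assumed setup: 10 MiB block hard limit, no inode limits, for quota_usr
    lfs setquota -u quota_usr -b 0 -B 10M -i 0 -I 0 /mnt/lustre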
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1a: Block hard limit (normal use and out of quota) ========================================================== 04:41:21 (1713429681)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:10 MB)
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.182133 s, 28.8 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.207202 s, 25.3 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0586841 s, 0.0 kB/s
Waiting for MDT destroys to complete
--------------------------------------
Group quota (block hardlimit:10 MB)
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.172537 s, 30.4 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.167009 s, 31.4 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0540317 s, 0.0 kB/s
Waiting for MDT destroys to complete
--------------------------------------
Project quota (block hardlimit:10 mb)
lfs project -p 1000 /mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.169433 s, 30.9 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.163694 s, 32.0 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.056839 s, 0.0 kB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1a (68s)
debug_raw_pointers=0
debug_raw_pointers=0
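Test 1a ran the same write pattern for user, group, and project quotas; only the project tagging (lfs project) is echoed above. The corresponding limits would have been set with the -g and -p forms of lfs setquota; a sketch of the assumed setup:

    # assumed setup for the group and project runs above
    lfs setquota -g quota_usr -b 0 -B 10M /mnt/lustre
    lfs setquota -p 1000 -b 0 -B 10M /mnt/lustre
    # tagging, as echoed in the log
    lfs project -p 1000 /mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2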
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1b: Quota pools: Block hard limit (normal use and out of quota) ========================================================== 04:42:31 (1713429751)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg432-server: Pool lustre.qpool1 created
Adding targets to pool
oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.175702 s, 29.8 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.16266 s, 32.2 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.053476 s, 0.0 kB/s
Waiting for MDT destroys to complete
--------------------------------------
Group quota (block hardlimit:20 MB)
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.182853 s, 28.7 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.164575 s, 31.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0499785 s, 0.0 kB/s
Waiting for MDT destroys to complete
--------------------------------------
Project quota (block hardlimit:20 mb)
lfs project -p 1000 /mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.157269 s, 33.3 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.160958 s, 32.6 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.06309 s, 0.0 kB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg432-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1b (79s)
debug_raw_pointers=0
debug_raw_pointers=0
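The qpool1 messages above come from lctl running on the server; recreating the pool by hand, together with the per-pool limit the test applies, would look roughly like this (only the pool messages appear in the log, the setquota call is assumed):

    # server side (MGS): create the pool and add both OSTs
    lctl pool_new lustre.qpool1
    lctl pool_add lustre.qpool1 OST[0000-0001]
    # client side, assumed: 20 MiB hard limit for quota_usr inside the pool
    lfs setquota -u quota_usr -B 20M --pool qpool1 /mnt/lustre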
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1c: Quota pools: check 3 pools with hardlimit only for global ========================================================== 04:43:52 (1713429832)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg432-server: Pool lustre.qpool1 created
Adding targets to pool
oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg432-server: Pool lustre.qpool2 created
Adding targets to pool
oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.314737 s, 33.3 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=10] [seek=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.270193 s, 38.8 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=1] [seek=20]
dd: error writing '/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0537146 s, 0.0 kB/s
qpool1 used 20484
qpool2 used 20484
Waiting for MDT destroys to complete
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg432-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg432-server: Pool lustre.qpool2 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1c (53s)
debug_raw_pointers=0
debug_raw_pointers=0
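The 'qpool1 used 20484' / 'qpool2 used 20484' lines above are per-pool usage checks; the equivalent manual query would presumably be the --pool form of lfs quota:

    # assumed query: quota_usr's usage and limits as seen by one pool
    lfs quota -u quota_usr --pool qpool1 /mnt/lustre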
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1d: Quota pools: check block hardlimit on different pools ========================================================== 04:44:46 (1713429886)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg432-server: Pool lustre.qpool1 created
Adding targets to pool
oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg432-server: Pool lustre.qpool2 created
Adding targets to pool
oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.17274 s, 30.4 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.146431 s, 35.8 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0459934 s, 0.0 kB/s
qpool1 used 10240
qpool2 used 10240
Waiting for MDT destroys to complete
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg432-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg432-server: Pool lustre.qpool2 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1d (48s)
debug_raw_pointers=0
debug_raw_pointers=0
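Test 1e below sets an effectively unlimited global limit (the '53000000 MB' banner) against a small limit on qpool1, which contains only OST0001: the first file hits the pool limit at 10 MB while the second writes 20 MB unhindered, presumably because it lands outside the pool. Placement like that is normally pinned with lfs setstripe; a sketch, with the index choices being assumptions about the test rather than anything echoed in the log:

    # hypothetical placement control: -i selects the starting OST index
    lfs setstripe -c 1 -i 1 /mnt/lustre/somedir/in_pool       # OST0001, inside qpool1
    lfs setstripe -c 1 -i 0 /mnt/lustre/somedir/outside_pool  # OST0000, outside the pool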
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1e: Quota pools: global pool high block limit vs quota pool with small ========================================================== 04:45:36 (1713429936)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:53000000 MB)
Creating new pool
oleg432-server: Pool lustre.qpool1 created
Adding targets to pool
oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.165841 s, 31.6 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.154769 s, 33.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0539062 s, 0.0 kB/s
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-1] [count=20]
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.595909 s, 35.2 MB/s
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg432-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1e (36s)
debug_raw_pointers=0
debug_raw_pointers=0
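Test 1f below checks that qunit is recalculated when the pool shrinks and grows again; the membership change is driven with lctl on the server, matching the pool messages in the log:

    # remove the OST from the pool, then add it back
    lctl pool_remove lustre.qpool1 OST[0000]
    lctl pool_add lustre.qpool1 OST[0000]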
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1f: Quota pools: correct qunit after removing/adding OST ========================================================== 04:46:14 (1713429974)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:200 MB)
Creating new pool
oleg432-server: Pool lustre.qpool1 created
Adding targets to pool
oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.157895 s, 33.2 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.1578 s, 33.2 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0478277 s, 0.0 kB/s
Removing lustre-OST0000_UUID from qpool1
oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1
pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1
Waiting for MDT destroys to complete
Adding targets to pool
oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.181226 s, 28.9 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.164078 s, 32.0 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0433162 s, 0.0 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg432-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1f (50s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1g: Quota pools: Block hard limit with wide striping ========================================================== 04:47:06 (1713430026)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
osc.lustre-OST0000-osc-ffff8800ae4c2000.max_dirty_mb=1
osc.lustre-OST0001-osc-ffff8800ae4c2000.max_dirty_mb=1
User quota (block hardlimit:40 MB)
Creating new pool
oleg432-server: Pool lustre.qpool1 created
Adding targets to pool
oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.62941 s, 6.4 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=10] [seek=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 2.77564 s, 3.8 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=6] [seek=20]
dd: error writing '/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0614818 s, 0.0 kB/s
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg432-server: Pool lustre.qpool1 destroyed
osc.lustre-OST0000-osc-ffff8800ae4c2000.max_dirty_mb=467
osc.lustre-OST0001-osc-ffff8800ae4c2000.max_dirty_mb=467
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1g (42s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1h: Block hard limit test using fallocate ========================================================== 04:47:50 (1713430070)
keep default fallocate mode: 0
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:10 MB)
Write 5MiB Using Fallocate
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [fallocate] [-l5MiB] [/mnt/lustre/d1h.sanity-quota/f1h.sanity-quota-0]
Write 11MiB Using Fallocate
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [fallocate] [-l11MiB] [/mnt/lustre/d1h.sanity-quota/f1h.sanity-quota-0]
fallocate: fallocate failed: Disk quota exceeded
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1h (21s)
debug_raw_pointers=0
debug_raw_pointers=0
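Test 1h above confirms that preallocated blocks are charged against block quota just like written ones: the 5 MiB request fits under the 10 MB limit, the 11 MiB one fails with EDQUOT. Reproducing it by hand as the quota user, assuming util-linux fallocate and the file path from the log:

    fallocate -l 5MiB /mnt/lustre/d1h.sanity-quota/f1h.sanity-quota-0    # under the limit: succeeds
    fallocate -l 11MiB /mnt/lustre/d1h.sanity-quota/f1h.sanity-quota-0   # over the limit: Disk quota exceeded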
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1i: Quota pools: different limit and usage relations ========================================================== 04:48:13 (1713430093)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:200 MB)
Creating new pool
oleg432-server: Pool lustre.qpool1 created
Adding targets to pool
oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.159498 s, 32.9 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.164274 s, 31.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.051698 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   10240       0       0       -       1       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID  10240*     -   10240       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 10240
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-1] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.164993 s, 31.8 MB/s
Waiting for MDT destroys to complete
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.154467 s, 33.9 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.155623 s, 33.7 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0479251 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-1] [count=3]
3+0 records in
3+0 records out
3145728 bytes (3.1 MB) copied, 0.112591 s, 27.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2] [count=3]
3+0 records in
3+0 records out
3145728 bytes (3.1 MB) copied, 0.0912862 s, 34.5 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2] [seek=3] [count=1]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0458662 s, 0.0 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg432-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1i (55s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1j: Enable project quota enforcement for root ========================================================== 04:49:10 (1713430150)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
--------------------------------------
Project quota (block hardlimit:20 mb)
lfs project -p 1000 /mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0
osd-ldiskfs.lustre-OST0000.quota_slave.root_prj_enable=1
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=20] [oflag=direct]
dd: error writing '/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.42704 s, 46.7 MB/s
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=10] [seek=20] [oflag=direct]
dd: error writing '/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0228865 s, 0.0 kB/s
osd-ldiskfs.lustre-OST0000.quota_slave.root_prj_enable=0
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=20] [seek=20] [oflag=direct]
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.397202 s, 52.8 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
osd-ldiskfs.lustre-OST0000.quota_slave.root_prj_enable=0
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 1j (18s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity-quota test_2 skipping excluded test 2
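Test 3a below drives a 4 MB block soft limit with a 20 second grace period. A soft limit only starts to be enforced once the grace timer expires; the grace itself is set per quota type with the -t form of lfs setquota. A sketch of the assumed setup (values match the banner, the calls themselves are not echoed in the log):

    # assumed setup: 20 s block grace for user quotas, 4 MiB soft limit, no hard limit
    lfs setquota -t -u --block-grace 20 --inode-grace 1w /mnt/lustre
    lfs setquota -u quota_usr -b 4M -B 0 /mnt/lustre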
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 3a: Block soft limit (start timer, timer goes off, stop timer) ========================================================== 04:49:30 (1713430170)
User quota (soft limit:4 MB grace:20 seconds)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.0991631 s, 42.3 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00785022 s, 1.3 MB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4148*   4096       0     19s       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4108      -    4160       -       -       -       -       -
lustre-OST0001_UUID    40      -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4208
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4108      -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00393762 s, 2.6 MB/s
Grace time is 19s
Sleep through grace ...
...sleep 24 seconds
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4160*   4096       0 expired       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4120      -    4160       -       -       -       -       -
lustre-OST0001_UUID    40      -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4208
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4120      -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=4096] [seek=6144]
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.637596 s, 6.6 MB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00322108 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    8260*   4096       0 expired       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  8220*     -    8220       -       -       -       -       -
lustre-OST0001_UUID    40      -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 8268
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    8260       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  8220      -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      40    4096       0       -       1       0       0       -
lustre-MDT0000_UUID   0       -       0       -       1       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 48
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID   0       -       0       -       1       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.146134 s, 28.7 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Group quota (soft limit:4 MB grace:20 seconds)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.11103 s, 37.8 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00982902 s, 1.0 MB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4148      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4148*   4096       0     19s       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4148      -    4176       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4176
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00410657 s, 2.5 MB/s
Grace time is 19s
Sleep through grace ...
...sleep 24 seconds
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4160      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4160*   4096       0 expired       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4160      -    4176       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4176
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=4096] [seek=6144]
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.708245 s, 5.9 MB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00546405 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    8256       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  8256      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    8256*   4096       0 expired       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  8256*     -    8256       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 8256
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID   0       -       0       -       1       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID    40      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      40    4096       0       -       1       0       0       -
lustre-MDT0000_UUID   0       -       0       -       1       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID    40      -    1064       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1064
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.121018 s, 34.7 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Project quota (soft limit:4 MB grace:20 sec)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
lfs project -p 1000 /mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.117574 s, 35.7 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00838121 s, 1.2 MB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4108      -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4108      -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4108*   4096       0     19s       1       0       0       -
lustre-MDT0000_UUID   0       -       0       -       1       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4108      -    4144       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4144
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00361147 s, 2.8 MB/s
Grace time is 19s
Sleep through grace ...
...sleep 24 seconds
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4120      -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4120      -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4120*   4096       0 expired       1       0       0       -
lustre-MDT0000_UUID   0       -       0       -       1       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4120      -    4144       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4144
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=4096] [seek=6144]
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.643917 s, 6.5 MB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00526108 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    8260       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  8220      -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    8260       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  8220      -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    8220*   4096       0 expired       1       0       0       -
lustre-MDT0000_UUID   0       -       0       -       1       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  8220*     -    8220       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 8220
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Unlink file to stop timer
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID   0       -       0       -       1       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID   0       -       0       -       1       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0    4096       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
Block grace time: 20s; Inode grace time: 1w
lfs project -p 1000 /mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.108327 s, 38.7 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
PASS 3a (149s)
debug_raw_pointers=0
debug_raw_pointers=0
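Test 3b repeats the soft-limit cycle inside qpool1, with a tighter pool limit (4 MB, 20 s grace) under a looser global one (8 MB, 40 s), as the 'limit 4 glbl_limit 8 grace 20 glbl_grace 40' line below records. A sketch of the assumed setup; note that setting a per-pool grace via 'setquota -t --pool' is an assumption about the tooling, not something echoed in this log:

    lfs setquota -t -u --block-grace 40 /mnt/lustre               # global grace
    lfs setquota -u quota_usr -b 8M /mnt/lustre                   # global soft limit
    lfs setquota -t -u --block-grace 20 --pool qpool1 /mnt/lustre # assumed pool grace
    lfs setquota -u quota_usr -b 4M --pool qpool1 /mnt/lustre     # pool soft limit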
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 3b: Quota pools: Block soft limit (start timer, expires, stop timer) ========================================================== 04:52:01 (1713430321)
limit 4 glbl_limit 8
grace 20 glbl_grace 40
User quota in qpool1(soft limit:4 MB grace:20 seconds)
Creating new pool
oleg432-server: Pool lustre.qpool1 created
Adding targets to pool
oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.119484 s, 35.1 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.0112203 s, 913 kB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4148    8192       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4148      -    4176       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4176
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4148      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00463519 s, 2.2 MB/s
Quota info for qpool1:
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4160*   4096       0     19s       2       0       0       -
Grace time is 19s
Sleep through grace ...
...sleep 24 seconds
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4160    8192       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4160      -    4176       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4176
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4160       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4160      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=4096] [seek=6144]
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.709789 s, 5.9 MB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00448367 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    8256*   8192       0     38s       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  8256*     -    8256       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 8256
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    8256       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  8256      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      40    8192       0       -       1       0       0       -
lustre-MDT0000_UUID   0       -       0       -       1       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID    40      -    1064       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1064
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      40       0       0       -       1       0       0       -
lustre-MDT0000_UUID   0       -       0       -       1       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID    40      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 40s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.127104 s, 33.0 MB/s
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
Group quota in qpool1(soft limit:4 MB grace:20 seconds)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.109198 s, 38.4 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.009644 s, 1.1 MB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4148       0       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4108      -       0       -       -       -       -       -
lustre-OST0001_UUID    40      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4148    8192       0       -       2       0       0       -
lustre-MDT0000_UUID   0       -       0       -       2       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  4108      -    4160       -       -       -       -       -
lustre-OST0001_UUID    40      -      48       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 4208
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-MDT0001_UUID   0       -       0       -       0       -       0       -
0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=5120] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00332053 s, 3.1 MB/s Quota info for qpool1: Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160* 4096 0 19s 2 0 0 - Grace time is 19s Sleep through grace ... ...sleep 24 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4120 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160 8192 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4120 - 4160 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 4208 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=4096] [seek=6144] 4096+0 records in 4096+0 records out 4194304 bytes (4.2 MB) copied, 0.719805 s, 5.8 MB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00567088 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256* 8192 0 38s 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216* - 8216 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 8264 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - 
Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Unlink file to stop timer Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 0 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 8192 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 48 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.141222 s, 29.7 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Project quota in qpool1(soft:4 MB grace:20 sec) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -p 1000 /mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.109756 s, 38.2 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=4096] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00960515 s, 1.1 MB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4148 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4148 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4148 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4148 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4108 8192 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - 
lustre-OST0000_UUID 4108 - 4176 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 4176 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=5120] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00398662 s, 2.6 MB/s Quota info for qpool1: Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4120* 4096 0 18s 1 0 0 - Grace time is 18s Sleep through grace ... ...sleep 23 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4160 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4160 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4160 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4120 8192 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 4120 - 4176 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 4176 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=4096] [seek=6144] 4096+0 records in 4096+0 records out 4194304 bytes (4.2 MB) copied, 0.715166 s, 5.9 MB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00507213 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8256 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8256 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8216* 8192 0 39s 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216* - 8216 - - - - - lustre-OST0001_UUID 0 - 0 - - - 
- - Total allocated inode limit: 0, total allocated block limit: 8216 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Unlink file to stop timer Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 0 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 40 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 0 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 40 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 8192 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w Block grace time: 40s; Inode grace time: 1w lfs project -p 1000 /mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2 Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.12974 s, 32.3 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
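For reference, the per-pool soft limits and grace times that test 3b exercises above (and that are torn down just below) reduce to a handful of lfs/lctl calls. A minimal sketch, with pool, ID, and limit values taken from the log and the --pool flags assumed from the setquota usage text this build prints later in test 4b:

    # Sketch only: recreate test 3b's qpool1 setup (names/values from the log)
    lctl pool_new lustre.qpool1
    lctl pool_add lustre.qpool1 OST[0000-0001]
    lfs setquota -g quota_usr -b 8M -B 0 /mnt/lustre               # global soft limit (the 8192 in the reports)
    lfs setquota -t -g --block-grace 40 /mnt/lustre                # global "Block grace time: 40s"
    lfs setquota -g quota_usr -b 4M --pool qpool1 /mnt/lustre      # tighter 4 MB soft limit inside the pool
    lfs setquota -t -g --block-grace 20 --pool qpool1 /mnt/lustre  # 20 s pool grace
    lfs quota -g quota_usr --pool qpool1 /mnt/lustre               # the "Quota info for qpool1" report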
Waiting for MDT destroys to complete Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed PASS 3b (169s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 3c: Quota pools: check block soft limit on different pools ========================================================== 04:54:51 (1713430491) limit 4 limit2 8 glbl_limit 12 grace1 30 grace2 20 glbl_grace 40 User quota in qpool2(soft:8 MB grace:20 seconds) Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Creating new pool oleg432-server: Pool lustre.qpool2 created Waiting 90s for '' Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool2 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.241425 s, 34.7 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=8192] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00696738 s, 1.5 MB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8244 12288 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8204 - 8224 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 8272 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8244 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8204 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=9216] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00343485 s, 3.0 MB/s Quota info for qpool2: Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256* 8192 0 19s 2 0 0 - Grace time is 19s Sleep through grace ... 
...sleep 24 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 12288 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 8224 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 8272 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=4096] [seek=10240] dd: error writing '/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00500075 s, 0.0 kB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=14336] dd: error writing '/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00330189 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 12288 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 8224 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 8272 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8256 0 0 - 2 0 0 - lustre-MDT0000_UUID 0 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8216 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Unlink file to stop timer Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 40 12288 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 40 - 48 - - - - - Total allocated inode limit: 0, total allocated block limit: 48 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit 
grace files quota limit grace /mnt/lustre 40 0 0 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 40 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 40s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.238352 s, 35.2 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed lustre.qpool2 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg432-server: Pool lustre.qpool2 destroyed PASS 3c (73s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_4a skipping excluded test 4a debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 4b: Grace time strings handling ===== 04:56:06 (1713430566) Waiting for MDT destroys to complete Valid grace strings test Block grace time: 1w3d; Inode grace time: 16m40s Block grace time: 5s; Inode grace time: 1w2d3h4m5s Invalid grace strings test lfs: bad inode-grace: 5c setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM lfs: bad inode-grace: 18446744073709551615 setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM lfs: bad inode-grace: -1 setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM PASS 4b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 5: Chown & chgrp successfully even out of block/file quota ========================================================== 04:56:09 (1713430569) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 3s: want 'ugp' got 'ugp' Set quota limit (0 10M 0 10) for quota_usr.quota_usr lfs setquota: warning: inode hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Create more than 10 files and more than 10 MB ... 
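Back in test 4b, the accepted grace strings are the combined week/day/hour/minute/second form, and malformed values are rejected client-side before any setting is changed. A short sketch of both cases, following the setquota usage text printed above:

    # Valid: combined unit suffixes, as test 4b sets (and later restores)
    lfs setquota -t -u --block-grace 1w3d --inode-grace 16m40s /mnt/lustre
    lfs setquota -t -u --block-grace 5s --inode-grace 1w2d3h4m5s /mnt/lustre
    # Invalid: "5c" (unknown unit), 18446744073709551615 and -1 (out of range)
    # all fail with "bad inode-grace", exactly as logged
    lfs setquota -t -u --block-grace 10 --inode-grace 5c /mnt/lustre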
total: 11 create in 0.02 seconds: 477.68 ops/second lfs project -p 1000 /mnt/lustre/d5.sanity-quota/f5.sanity-quota-0_1 11+0 records in 11+0 records out 11534336 bytes (12 MB) copied, 0.248185 s, 46.5 MB/s Chown files to quota_usr.quota_usr ... - unlinked 0 (time 1713430580 ; total 0 ; last 0) total: 11 unlinks in 0 seconds: inf unlinks/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 5 (20s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 6: Test dropping acquire request on master ========================================================== 04:56:30 (1713430590) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0388538 s, 27.0 MB/s running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_2usr] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0298881 s, 35.1 MB/s at_max=20 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] dd: error writing '/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr': Disk quota exceeded 3+0 records in 2+0 records out 2097152 bytes (2.1 MB) copied, 0.117984 s, 17.8 MB/s Waiting for MDT destroys to complete fail_val=601 fail_loc=0x513 osd-ldiskfs.lustre-OST0000.quota_slave.timeout=10 osd-ldiskfs.lustre-OST0001.quota_slave.timeout=10 running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_2usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 0.206078 s, 15.3 MB/s Sleep for 41 seconds ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] at_max=600 fail_val=0 fail_loc=0 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 55.271 s, 56.9 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 6 (83s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7a: Quota reintegration (global index) ========================================================== 04:57:55 (1713430675) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' Stop ost1... Stopping /mnt/lustre-ost1 (opts:) on oleg432-server Enable quota & set quota limit for quota_usr Waiting 90s for 'ugp' Start ost1... 
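The "Waiting 90s for 'none'/'ugp'" lines in test 7a are the suite polling the OST-side enforcement switch, which it flips while ost1 is stopped so that the restarted target has to pull the global quota index back in. A sketch of the toggle, assuming the usual conf_param path for an ldiskfs setup like this one:

    # on the server holding the MGS
    lctl conf_param lustre.quota.ost=none   # disable OST enforcement -> "Waiting 90s for 'none'"
    lctl conf_param lustre.quota.ost=ugp    # user+group+project back on -> "Waiting 90s for 'ugp'"
    # the state the test polls on the OSS:
    lctl get_param osd-ldiskfs.lustre-OST*.quota_slave.enabled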
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota] [count=6] [oflag=sync] dd: error writing '/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota': Disk quota exceeded 6+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.30141 s, 17.4 MB/s Waiting for MDT destroys to complete Stop ost1... Stopping /mnt/lustre-ost1 (opts:) on oleg432-server Start ost1... 
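The Stop/Start pairs around each reintegration are plain umount/mount cycles of the flakey OST device; roughly, on oleg432-server (device and mount point as logged):

    umount /mnt/lustre-ost1                                                  # "Stopping /mnt/lustre-ost1 (opts:)"
    mount -t lustre -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1   # "Starting ost1: ..."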
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota] [count=6] [oflag=sync] 6+0 records in 6+0 records out 6291456 bytes (6.3 MB) copied, 0.434406 s, 14.5 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 7a (54s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7b: Quota reintegration (slave index) ========================================================== 04:58:50 (1713430730) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7b.sanity-quota/f7b.sanity-quota] [count=1] [oflag=sync] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0747212 s, 14.0 MB/s fail_val=0 fail_loc=0xa02 Waiting 90s for 'ugp' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7b.sanity-quota/f7b.sanity-quota] [count=1] [seek=1] [oflag=sync] [conv=notrunc] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0728508 s, 14.4 MB/s fail_val=0 fail_loc=0 Restart ost to trigger reintegration... 
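The fail_val=/fail_loc= lines throughout the run are OBD fault injection; test 7b arms checkpoint 0xa02 ahead of the restart below so the slave index path is exercised during reintegration. The knob is an ordinary lctl parameter:

    lctl set_param fail_val=0 fail_loc=0xa02   # arm the checkpoint (value from the log)
    lctl set_param fail_val=0 fail_loc=0       # disarm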
Stopping /mnt/lustre-ost1 (opts:) on oleg432-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 7b (35s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7c: Quota reintegration (restart mds during reintegration) ========================================================== 04:59:27 (1713430767) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' fail_val=0 fail_loc=0xa03 osd-ldiskfs.lustre-OST0000.quota_slave.force_reint=1 osd-ldiskfs.lustre-OST0001.quota_slave.force_reint=1 Stop mds... Stopping /mnt/lustre-mds1 (opts:) on oleg432-server fail_val=0 fail_loc=0 Start mds... 
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0000 affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE Waiting 200s for 'glb[1],slv[1],reint[0]' Waiting 190s for 'glb[1],slv[1],reint[0]' Waiting 180s for 'glb[1],slv[1],reint[0]' Waiting 170s for 'glb[1],slv[1],reint[0]' Waiting 160s for 'glb[1],slv[1],reint[0]' Waiting 140s for 'glb[1],slv[1],reint[0]' Waiting 130s for 'glb[1],slv[1],reint[0]' Waiting 120s for 'glb[1],slv[1],reint[0]' Waiting 110s for 'glb[1],slv[1],reint[0]' Waiting 100s for 'glb[1],slv[1],reint[0]' Waiting 90s for 'glb[1],slv[1],reint[0]' Updated after 113s: want 'glb[1],slv[1],reint[0]' got 'glb[1],slv[1],reint[0]' affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7c.sanity-quota/f7c.sanity-quota] [count=6] [oflag=sync] dd: error writing '/mnt/lustre/d7c.sanity-quota/f7c.sanity-quota': Disk quota exceeded 6+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.336891 s, 15.6 MB/s Delete files... Wait for unlink objects finished... 
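The 'glb[1],slv[1],reint[0]' string test 7c waits for describes the slave's copy of the global index, its local index, and whether a reintegration is still in flight; it should be readable directly on the server, assuming the quota_slave.info parameter this suite normally polls:

    lctl get_param osd-ldiskfs.lustre-OST*.quota_slave.info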
Waiting for MDT destroys to complete PASS 7c (142s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7d: Quota reintegration (Transfer index in multiple bulks) ========================================================== 05:01:50 (1713430910) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' fail_val=0 fail_loc=0x608 affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota] [count=21] [oflag=sync] dd: error writing '/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota': Disk quota exceeded 21+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 1.40994 s, 14.9 MB/s running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota-1] [count=21] [oflag=sync] dd: error writing '/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota-1': Disk quota exceeded 20+0 records in 19+0 records out 20963328 bytes (21 MB) copied, 1.89501 s, 11.1 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 7d (24s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7e: Quota reintegration (inode limits) ========================================================== 05:02:16 (1713430936) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' Stop mds2... Stopping /mnt/lustre-mds2 (opts:) on oleg432-server Enable quota & set quota limit for quota_usr Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Start mds2... Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0001 affected facets: mds1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg432-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg432-server: *.lustre-MDT0001.recovery_status status: RECOVERING oleg432-server: Waiting 1470 secs for *.lustre-MDT0001.recovery_status recovery done. 
status: RECOVERING oleg432-server: *.lustre-MDT0001.recovery_status status: COMPLETE affected facets: mds1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg432-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg432-server: *.lustre-MDT0001.recovery_status status: COMPLETE affected facets: mds1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg432-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg432-server: *.lustre-MDT0001.recovery_status status: COMPLETE create remote dir running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota] [2049] mknod(/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota2048) error: Disk quota exceeded total: 2048 create in 4.25 seconds: 481.92 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [unlinkmany] [/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota] [2048] - unlinked 0 (time 1713430965 ; total 0 ; last 0) total: 2048 unlinks in 9 seconds: 227.555557 unlinks/second Waiting for MDT destroys to complete Stop mds2... Stopping /mnt/lustre-mds2 (opts:) on oleg432-server Start mds2... Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0001 affected facets: mds1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg432-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg432-server: *.lustre-MDT0001.recovery_status status: RECOVERING oleg432-server: Waiting 1470 secs for *.lustre-MDT0001.recovery_status recovery done. 
status: RECOVERING oleg432-server: *.lustre-MDT0001.recovery_status status: COMPLETE affected facets: mds1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg432-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg432-server: *.lustre-MDT0001.recovery_status status: COMPLETE affected facets: mds1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg432-server: *.lustre-MDT0000.recovery_status status: COMPLETE affected facets: mds2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg432-server: *.lustre-MDT0001.recovery_status status: COMPLETE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota] [2049] total: 2049 create in 4.12 seconds: 496.94 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [unlinkmany] [/mnt/lustre/d7e.sanity-quota-1/f7e.sanity-quota] [2049] - unlinked 0 (time 1713430995 ; total 0 ; last 0) total: 2049 unlinks in 9 seconds: 227.666672 unlinks/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 7e (71s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 8: Run dbench with quota enabled ==== 05:03:29 (1713431009) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Set enough high limit for user: quota_usr Set enough high limit for group: quota_usr lfs project -sp 1000 /mnt/lustre/d8.sanity-quota Set enough high limit for project: 1000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [bash] [rundbench] [-D] [/mnt/lustre/d8.sanity-quota] [3] [-t] [120] looking for dbench program /usr/bin/dbench found dbench client file /usr/share/dbench/client.txt '/usr/share/dbench/client.txt' -> 'client.txt' running 'dbench 3 -t 120' on /mnt/lustre/d8.sanity-quota at Thu Apr 18 05:03:33 EDT 2024 waiting for dbench pid 25795 dbench version 4.00 - Copyright Andrew Tridgell 1999-2004 Running for 120 seconds with load 'client.txt' and minimum warmup 24 secs failed to create barrier semaphore 2 of 3 processes prepared for launch 0 sec 3 of 3 processes prepared for launch 0 sec releasing clients 3 309 34.06 MB/sec warmup 1 sec latency 16.608 ms 3 660 31.69 MB/sec warmup 2 sec latency 19.654 ms 3 1011 22.40 MB/sec warmup 3 sec latency 13.925 ms 3 1483 19.53 MB/sec warmup 4 sec latency 18.374 ms 3 1919 15.85 MB/sec warmup 5 sec latency 13.401 ms 3 2396 14.00 MB/sec warmup 6 sec latency 13.997 ms 3 2961 14.36 MB/sec warmup 7 sec latency 15.013 ms 3 3600 14.45 MB/sec warmup 8 sec latency 17.781 ms 3 3947 13.35 MB/sec warmup 9 sec latency 11.903 ms 3 4280 12.11 MB/sec warmup 10 sec latency 44.148 ms 3 4646 11.47 MB/sec warmup 11 sec latency 21.526 ms 3 5119 11.31 MB/sec warmup 12 sec latency 19.428 ms 3 5522 10.52 MB/sec warmup 13 sec latency 17.434 ms 3 5968 10.10 MB/sec warmup 14 sec latency 12.721 ms 3 6527 10.51 MB/sec warmup 15 sec latency 16.734 ms 3 7168 10.82 MB/sec warmup 16 sec latency 10.808 ms 3 7512 10.44 MB/sec warmup 17 sec latency 11.023 ms 3 7831 9.92 MB/sec warmup 18 sec latency 35.218 ms 3 8204 9.66 MB/sec warmup 19 sec latency 13.905 ms 3 8681 9.65 MB/sec warmup 20 sec latency 12.692 ms 3 9103 9.25 MB/sec 
warmup 21 sec latency 13.708 ms 3 9551 9.09 MB/sec warmup 22 sec latency 11.168 ms 3 10231 9.43 MB/sec warmup 23 sec latency 13.942 ms 3 11104 3.48 MB/sec execute 1 sec latency 12.021 ms 3 11463 2.54 MB/sec execute 2 sec latency 25.761 ms 3 11813 3.21 MB/sec execute 3 sec latency 31.789 ms 3 12294 4.80 MB/sec execute 4 sec latency 18.681 ms 3 12728 4.09 MB/sec execute 5 sec latency 15.344 ms 3 13157 4.41 MB/sec execute 6 sec latency 13.363 ms 3 13803 6.38 MB/sec execute 7 sec latency 23.191 ms 3 14296 7.08 MB/sec execute 8 sec latency 22.096 ms 3 14626 6.76 MB/sec execute 9 sec latency 11.777 ms 3 14965 6.21 MB/sec execute 10 sec latency 25.140 ms 3 15346 6.09 MB/sec execute 11 sec latency 26.061 ms 3 15826 6.37 MB/sec execute 12 sec latency 17.982 ms 3 16276 5.99 MB/sec execute 13 sec latency 16.000 ms 3 16729 6.00 MB/sec execute 14 sec latency 16.058 ms 3 17357 6.81 MB/sec execute 15 sec latency 20.316 ms 3 17883 7.25 MB/sec execute 16 sec latency 12.262 ms 3 18184 6.95 MB/sec execute 17 sec latency 13.175 ms 3 18549 6.65 MB/sec execute 18 sec latency 33.938 ms 3 18907 6.55 MB/sec execute 19 sec latency 24.252 ms 3 19358 6.69 MB/sec execute 20 sec latency 12.896 ms 3 19791 6.43 MB/sec execute 21 sec latency 21.306 ms 3 20225 6.40 MB/sec execute 22 sec latency 13.632 ms 3 20881 6.80 MB/sec execute 23 sec latency 13.300 ms 3 21405 7.14 MB/sec execute 24 sec latency 13.268 ms 3 21728 7.02 MB/sec execute 25 sec latency 12.411 ms 3 22071 6.80 MB/sec execute 26 sec latency 38.872 ms 3 22435 6.69 MB/sec execute 27 sec latency 14.304 ms 3 22903 6.83 MB/sec execute 28 sec latency 19.492 ms 3 23299 6.64 MB/sec execute 29 sec latency 15.270 ms 3 23715 6.54 MB/sec execute 30 sec latency 18.194 ms 3 24325 6.84 MB/sec execute 31 sec latency 12.305 ms 3 24855 7.05 MB/sec execute 32 sec latency 15.427 ms 3 25202 7.05 MB/sec execute 33 sec latency 13.187 ms 3 25493 6.87 MB/sec execute 34 sec latency 26.133 ms 3 25803 6.73 MB/sec execute 35 sec latency 22.229 ms 3 26224 6.81 MB/sec execute 36 sec latency 14.755 ms 3 26594 6.73 MB/sec execute 37 sec latency 15.574 ms 3 27017 6.59 MB/sec execute 38 sec latency 16.119 ms 3 27481 6.72 MB/sec execute 39 sec latency 17.270 ms 3 28077 6.85 MB/sec execute 40 sec latency 12.882 ms 3 28565 7.03 MB/sec execute 41 sec latency 15.965 ms 3 28863 6.91 MB/sec execute 42 sec latency 12.525 ms 3 29188 6.78 MB/sec execute 43 sec latency 32.590 ms 3 29521 6.71 MB/sec execute 44 sec latency 31.414 ms 3 29918 6.73 MB/sec execute 45 sec latency 18.932 ms 3 30358 6.68 MB/sec execute 46 sec latency 16.696 ms 3 30793 6.63 MB/sec execute 47 sec latency 12.846 ms 3 31435 6.78 MB/sec execute 48 sec latency 13.124 ms 3 32031 6.99 MB/sec execute 49 sec latency 14.917 ms 3 32380 6.95 MB/sec execute 50 sec latency 12.112 ms 3 32737 6.85 MB/sec execute 51 sec latency 30.199 ms 3 33109 6.79 MB/sec execute 52 sec latency 17.006 ms 3 33554 6.82 MB/sec execute 53 sec latency 15.829 ms 3 33996 6.75 MB/sec execute 54 sec latency 13.955 ms 3 34436 6.72 MB/sec execute 55 sec latency 11.784 ms 3 35102 6.92 MB/sec execute 56 sec latency 13.979 ms 3 35614 7.02 MB/sec execute 57 sec latency 15.523 ms 3 35950 6.98 MB/sec execute 58 sec latency 13.040 ms 3 36284 6.89 MB/sec execute 59 sec latency 31.832 ms 3 36672 6.84 MB/sec execute 60 sec latency 14.894 ms 3 37153 6.90 MB/sec execute 61 sec latency 12.027 ms 3 37619 6.82 MB/sec execute 62 sec latency 17.393 ms 3 38140 6.89 MB/sec execute 63 sec latency 12.674 ms 3 38831 7.06 MB/sec execute 64 sec latency 10.479 ms 3 39295 7.09 MB/sec execute 65 sec 
latency 15.487 ms
3 39642 7.01 MB/sec execute 66 sec latency 29.956 ms
3 40012 6.97 MB/sec execute 67 sec latency 22.167 ms
3 40438 6.98 MB/sec execute 68 sec latency 16.357 ms
3 40872 6.94 MB/sec execute 69 sec latency 13.948 ms
3 41310 6.89 MB/sec execute 70 sec latency 18.997 ms
3 41856 6.97 MB/sec execute 71 sec latency 16.140 ms
3 42489 7.09 MB/sec execute 72 sec latency 16.950 ms
3 42920 7.10 MB/sec execute 73 sec latency 10.910 ms
3 43268 7.04 MB/sec execute 74 sec latency 38.039 ms
3 43640 6.99 MB/sec execute 75 sec latency 23.409 ms
3 44090 7.01 MB/sec execute 76 sec latency 15.979 ms
3 44574 6.97 MB/sec execute 77 sec latency 14.640 ms
3 45013 6.94 MB/sec execute 78 sec latency 15.360 ms
3 45669 7.05 MB/sec execute 79 sec latency 14.993 ms
3 46233 7.15 MB/sec execute 80 sec latency 12.291 ms
3 46644 7.12 MB/sec execute 81 sec latency 10.858 ms
3 46990 7.06 MB/sec execute 82 sec latency 25.867 ms
3 47452 7.09 MB/sec execute 83 sec latency 21.347 ms
3 47904 7.06 MB/sec execute 84 sec latency 12.575 ms
3 48370 7.02 MB/sec execute 85 sec latency 11.502 ms
3 48935 7.08 MB/sec execute 86 sec latency 11.317 ms
3 49560 7.18 MB/sec execute 87 sec latency 10.929 ms
3 49973 7.19 MB/sec execute 88 sec latency 12.596 ms
3 50312 7.14 MB/sec execute 89 sec latency 39.541 ms
3 50642 7.10 MB/sec execute 90 sec latency 16.542 ms
3 51053 7.10 MB/sec execute 91 sec latency 15.848 ms
3 51509 7.07 MB/sec execute 92 sec latency 12.455 ms
3 51940 7.04 MB/sec execute 93 sec latency 12.123 ms
3 52602 7.11 MB/sec execute 94 sec latency 11.199 ms
3 53112 7.18 MB/sec execute 95 sec latency 14.467 ms
3 53515 7.19 MB/sec execute 96 sec latency 16.665 ms
3 53855 7.14 MB/sec execute 97 sec latency 41.482 ms
3 54213 7.10 MB/sec execute 98 sec latency 18.468 ms
3 54623 7.10 MB/sec execute 99 sec latency 17.820 ms
3 55053 7.08 MB/sec execute 100 sec latency 13.027 ms
3 55475 7.05 MB/sec execute 101 sec latency 13.621 ms
3 56117 7.11 MB/sec execute 102 sec latency 12.124 ms
3 56602 7.17 MB/sec execute 103 sec latency 13.178 ms
3 56996 7.16 MB/sec execute 104 sec latency 21.215 ms
3 57366 7.14 MB/sec execute 105 sec latency 24.444 ms
3 57680 7.11 MB/sec execute 106 sec latency 14.877 ms
3 58115 7.11 MB/sec execute 107 sec latency 15.769 ms
3 58521 7.06 MB/sec execute 108 sec latency 17.906 ms
3 58984 7.05 MB/sec execute 109 sec latency 14.350 ms
3 59526 7.09 MB/sec execute 110 sec latency 13.926 ms
3 60089 7.14 MB/sec execute 111 sec latency 13.859 ms
3 60530 7.16 MB/sec execute 112 sec latency 17.538 ms
3 60902 7.14 MB/sec execute 113 sec latency 27.355 ms
3 61228 7.11 MB/sec execute 114 sec latency 19.252 ms
3 61660 7.11 MB/sec execute 115 sec latency 26.872 ms
3 62065 7.07 MB/sec execute 116 sec latency 14.838 ms
3 62535 7.06 MB/sec execute 117 sec latency 13.091 ms
3 63170 7.11 MB/sec execute 118 sec latency 12.711 ms
3 63679 7.15 MB/sec execute 119 sec latency 12.099 ms
3 cleanup 120 sec
0 cleanup 121 sec

 Operation      Count    AvgLat    MaxLat
 ----------------------------------------
 NTCreateX      27799     6.463    41.472
 Close          20343     1.215     8.566
 Rename          1169     8.949    23.179
 Unlink          5639     3.785    21.738
 Qpathinfo      25123     1.634    17.352
 Qfileinfo       4363     0.407     5.059
 Qfsinfo         4593     4.789    27.744
 Sfileinfo       2250     4.888    31.779
 Find            9679     0.720    15.815
 WriteX         13661     1.781    17.199
 ReadX          43232     0.063     2.592
 LockX             90     1.177     2.884
 UnlockX           90     1.219     2.804
 Flush           1935     6.545    38.857

Throughput 7.15462 MB/sec 3 clients 3 procs max_latency=41.482 ms
stopping dbench on /mnt/lustre/d8.sanity-quota at Thu Apr 18 05:05:58 EDT 2024 with return code 0
clean dbench files on
/mnt/lustre/d8.sanity-quota /mnt/lustre/d8.sanity-quota /mnt/lustre/d8.sanity-quota removed directory: 'clients/client1/~dmtmp/ACCESS' removed directory: 'clients/client1/~dmtmp/PWRPNT' removed directory: 'clients/client1/~dmtmp/EXCEL' removed directory: 'clients/client1/~dmtmp/PM' removed directory: 'clients/client1/~dmtmp/PARADOX' removed directory: 'clients/client1/~dmtmp/WORDPRO' removed directory: 'clients/client1/~dmtmp/COREL' removed directory: 'clients/client1/~dmtmp/SEED' removed directory: 'clients/client1/~dmtmp/WORD' removed directory: 'clients/client1/~dmtmp' removed directory: 'clients/client1' removed directory: 'clients/client0/~dmtmp/ACCESS' removed directory: 'clients/client0/~dmtmp/PWRPNT' removed directory: 'clients/client0/~dmtmp/EXCEL' removed directory: 'clients/client0/~dmtmp/PM' removed directory: 'clients/client0/~dmtmp/PARADOX' removed directory: 'clients/client0/~dmtmp/WORDPRO' removed directory: 'clients/client0/~dmtmp/COREL' removed directory: 'clients/client0/~dmtmp/SEED' removed directory: 'clients/client0/~dmtmp/WORD' removed directory: 'clients/client0/~dmtmp' removed directory: 'clients/client0' removed directory: 'clients/client2/~dmtmp/ACCESS' removed directory: 'clients/client2/~dmtmp/PWRPNT' removed directory: 'clients/client2/~dmtmp/EXCEL' removed directory: 'clients/client2/~dmtmp/PM' removed directory: 'clients/client2/~dmtmp/PARADOX' removed directory: 'clients/client2/~dmtmp/WORDPRO' removed directory: 'clients/client2/~dmtmp/COREL' removed directory: 'clients/client2/~dmtmp/SEED' removed directory: 'clients/client2/~dmtmp/WORD' removed directory: 'clients/client2/~dmtmp' removed directory: 'clients/client2' removed directory: 'clients' removed 'client.txt' /mnt/lustre/d8.sanity-quota dbench successfully finished lfs project -C /mnt/lustre/d8.sanity-quota Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 8 (160s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 9: Block limit larger than 4GB (b10707) ========================================================== 05:06:10 (1713431170) OST0_SIZE: 3598532 required: 4900000 WARN: OST0 has less than 4900000 free, skip this test. PASS 9 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 10: Test quota for root user ======== 05:06:14 (1713431174) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs setquota: can't set quota for root usr/group/project. setquota failed: Operation not permitted lfs setquota: can't set quota for root usr/group/project. setquota failed: Operation not permitted lfs setquota: can't set quota for root usr/group/project. setquota failed: Operation not permitted Waiting 90s for 'ug' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 2048 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d10.sanity-quota/f10.sanity-quota] [count=3] [oflag=sync] 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 0.192552 s, 16.3 MB/s Delete files... Wait for unlink objects finished... 
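The project-quota operations that recur through tests 1a, 5 and 8 are all variants of lfs project: -p tags an entry with an ID, -s makes a directory's ID inheritable, -C clears it again (as test 8's cleanup just did). For reference, with hypothetical paths:

    lfs project -p 1000 /mnt/lustre/somedir/somefile   # tag one file with project ID 1000
    lfs project -sp 1000 /mnt/lustre/somedir           # set the ID on a dir and mark it inheritable
    lfs project -d /mnt/lustre/somedir                 # print an entry's project ID and flags
    lfs project -C /mnt/lustre/somedir                 # clear, as in "lfs project -C .../d8.sanity-quota"
    lfs quota -p 1000 /mnt/lustre                      # usage report for that project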
Waiting for MDT destroys to complete PASS 10 (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 11: Chown/chgrp ignores quota ======= 05:06:31 (1713431191) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' Updated after 2s: want 'ug' got 'ug' lfs setquota: warning: inode hardlimit '1' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 2* 0 1 - lustre-MDT0000_UUID 0 - 0 - 2* - 2 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 2, total allocated block limit: 0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 11 (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 12a: Block quota rebalancing ======== 05:06:48 (1713431208) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write to ost0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d12a.sanity-quota/f12a.sanity-quota-0] [count=17] [oflag=sync] 17+0 records in 17+0 records out 17825792 bytes (18 MB) copied, 1.14719 s, 15.5 MB/s Write to ost1... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d12a.sanity-quota/f12a.sanity-quota-1] [count=17] [oflag=sync] dd: error writing '/mnt/lustre/d12a.sanity-quota/f12a.sanity-quota-1': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.292827 s, 14.3 MB/s Free space from ost0... Waiting for MDT destroys to complete Write to ost1 after space freed from ost0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d12a.sanity-quota/f12a.sanity-quota-1] [count=17] [oflag=sync] 17+0 records in 17+0 records out 17825792 bytes (18 MB) copied, 1.04521 s, 17.1 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 12a (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 12b: Inode quota rebalancing ======== 05:07:18 (1713431238) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' Updated after 2s: want 'u' got 'u' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Create 2048 files on mdt0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d12b.sanity-quota/f12b.sanity-quota] [2048] total: 2048 create in 4.24 seconds: 483.53 ops/second Create files on mdt1... 
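Tests 12a/12b below depend on steering each write or create at one specific target, so that freeing space on one forces the quota master to rebalance granted qunits toward the other. The standard placement controls (paths hypothetical) are:

    lfs setstripe -c 1 -i 0 /mnt/lustre/dir/f0   # single stripe on OST0000 ("Write to ost0...")
    lfs setstripe -c 1 -i 1 /mnt/lustre/dir/f1   # single stripe on OST0001 ("Write to ost1...")
    lfs mkdir -i 1 /mnt/lustre/dir-mdt1          # directory served by MDT0001 ("Create files on mdt1...")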
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d12b.sanity-quota-1/f12b.sanity-quota] [1] mknod(/mnt/lustre/d12b.sanity-quota-1/f12b.sanity-quota0) error: Disk quota exceeded total: 0 create in 0.01 seconds: 0.00 ops/second Free space from mdt0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [unlinkmany] [/mnt/lustre/d12b.sanity-quota/f12b.sanity-quota] [2048] - unlinked 0 (time 1713431248 ; total 0 ; last 0) total: 2048 unlinks in 9 seconds: 227.555557 unlinks/second Waiting for MDT destroys to complete Create files on mdt1 after space freed from mdt0... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d12b.sanity-quota-1/f12b.sanity-quota] [1024] total: 1024 create in 2.15 seconds: 477.03 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [unlinkmany] [/mnt/lustre/d12b.sanity-quota-1/f12b.sanity-quota] [1024] - unlinked 0 (time 1713431261 ; total 1 ; last 1) total: 1024 unlinks in 5 seconds: 204.800003 unlinks/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 12b (30s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 13: Cancel per-ID lock in the LRU list ========================================================== 05:07:50 (1713431270) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d13.sanity-quota/f13.sanity-quota] [count=1] [oflag=sync] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0683002 s, 15.4 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 13 (20s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 14: check panic in qmt_site_recalc_cb ========================================================== 05:08:11 (1713431291) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d14.sanity-quota/f14.sanity-quota-0] [count=10] [oflag=direct] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.211323 s, 49.6 MB/s Stop ost1... 
Stopping /mnt/lustre-ost1 (opts:) on oleg432-server Removing lustre-OST0000_UUID from qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0000 Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 14 (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 15: Set over 4T block quota ========= 05:08:41 (1713431321) Waiting for MDT destroys to complete PASS 15 (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 16a: lfs quota should skip the inactive MDT/OST ========================================================== 05:08:48 (1713431328) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d16a.sanity-quota/f16a.sanity-quota] [count=50] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.32557 s, 39.6 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 49152 0 512000 - 1 0 10240 - Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 512000 - 1 0 10240 - lustre-MDT0000_UUID 0 - 0 - 1 - 1024 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 1024, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 512000 - 1 0 10240 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 512000 - 1 0 10240 - lustre-MDT0000_UUID 0 - 0 - 1 - 1024 - lustre-MDT0001_UUID[inact] [0] - [0] - [0] - [0] - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 1024, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 512000 - 1 0 10240 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID[inact] [0] - [0] - [0] - [0] - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... 
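Test 16a verifies that lfs quota -v degrades gracefully when a target's import is inactive: the target is still listed, tagged with an [inact] suffix and bracketed stale values, instead of the whole report failing. Only the query side is sketched here; the deactivation itself is done by the test framework:

  # per-target breakdown; an inactive target shows up as, e.g.
  #   lustre-OST0000_UUID[inact]  [0]  -  [0]  -  ...
  lfs quota -v -u quota_usr /mnt/lustre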
Waiting for MDT destroys to complete PASS 16a (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 16b: lfs quota should skip the nonexistent MDT/OST ========================================================== 05:09:00 (1713431340) SKIP: sanity-quota test_16b needs >= 3 MDTs SKIP 16b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 17: DQACQ return recoverable error == 05:09:02 (1713431342) DQACQ return -ENOLCK Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=37 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.05215 s, 344 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete DQACQ return -EAGAIN Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=11 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.04056 s, 345 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete DQACQ return -ETIMEDOUT Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=110 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 2.37521 s, 441 kB/s Delete files... Wait for unlink objects finished... 
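Test 17 drives quota acquisition (DQACQ) through its recoverable-error paths by fault injection: fail_loc 0xa04 makes the server fail the request with the errno carried in fail_val, and the client is expected to retry until the write completes. The injected values are standard Linux errnos: 37 = ENOLCK, 11 = EAGAIN, 110 = ETIMEDOUT, and (below) 107 = ENOTCONN. One round looks roughly like this, assuming 0xa04 is the DQACQ fault-injection point this test uses and that set_param runs on the MDS:

  lctl set_param fail_val=11 fail_loc=0xa04          # next DQACQ fails with -EAGAIN
  dd if=/dev/zero of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota \
      bs=1M count=1 oflag=direct                     # noticeably slower (~3 s above), but completes
  lctl set_param fail_val=0 fail_loc=0               # clear the injection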
Waiting for MDT destroys to complete DQACQ return -ENOTCONN Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=107 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.04599 s, 344 kB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 17 (92s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 18: MDS failover while writing, no watchdog triggered (b14840) ========================================================== 05:10:35 (1713431435) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' Updated after 3s: want 'u' got 'u' User quota (limit: 200) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 204800 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write 100M (buffered) ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota] [count=100] UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 1414116 2884 1284804 1% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 1414116 1904 1285784 1% /mnt/lustre[MDT:1] lustre-OST0000_UUID 3833116 1604 3601236 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3833116 1524 3605496 1% /mnt/lustre[OST:1] filesystem_summary: 7666232 3128 7206732 1% /mnt/lustre Fail mds for 40 seconds Failing mds1 on oleg432-server Stopping /mnt/lustre-mds1 (opts:) on oleg432-server 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 2.60689 s, 40.2 MB/s 05:10:45 (1713431445) shut down Failover mds1 to oleg432-server mount facets: mds1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0000 05:10:59 (1713431459) targets are mounted 05:10:59 (1713431459) facet_failover done oleg432-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec (dd_pid=18552, time=0, timeout=600) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102400 0 204800 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 102400 - 114688 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 114688 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (limit: 200) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 204800 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write 100M (directio) ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota] [count=100] [oflag=direct] UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 1414116 2456 1285232 1% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 1414116 1904 1285784 1% /mnt/lustre[MDT:1] lustre-OST0000_UUID 3833116 1604 3593888 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3833116 1524 3605496 1% /mnt/lustre[OST:1] filesystem_summary: 7666232 3128 7199384 1% /mnt/lustre Fail mds for 40 seconds 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 2.08451 s, 50.3 MB/s Failing mds1 on oleg432-server Stopping /mnt/lustre-mds1 (opts:) on oleg432-server 05:11:16 (1713431476) shut down Failover mds1 to oleg432-server mount facets: mds1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0000 05:11:30 (1713431490) targets are mounted 05:11:30 (1713431490) facet_failover done oleg432-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec (dd_pid=20833, time=0, timeout=600) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102400 0 204800 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 102400 - 109568 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 109568 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
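Both halves of test 18 follow the same pattern: start a 100 MB write (buffered first, then O_DIRECT), fail over mds1 while the write is in flight, and require dd to finish cleanly once recovery completes, without triggering the watchdog. In test-framework terms (fail is the framework's facet-failover helper seen in the log):

  dd if=/dev/zero of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota bs=1M count=100 &
  dd_pid=$!
  fail mds1            # stop /mnt/lustre-mds1, remount it, wait for recovery
  wait $dd_pid         # must exit 0 once lustre-MDT0000 is back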
Waiting for MDT destroys to complete PASS 18 (70s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 19: Updating admin limits doesn't zero operational limits(b14790) ========================================================== 05:11:47 (1713431507) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 3s: want 'ugp' got 'ugp' Set user quota (limit: 5M) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 5120 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Update quota limits Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 5120 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota] [count=6] dd: error writing '/mnt/lustre/d19.sanity-quota/f19.sanity-quota': Disk quota exceeded 6+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.233294 s, 22.5 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5120* 0 5120 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 5120* - 5120 - - - - - Total allocated inode limit: 0, total allocated block limit: 5120 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota] [count=6] [seek=6] dd: error writing '/mnt/lustre/d19.sanity-quota/f19.sanity-quota': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0358093 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5120* 0 5120 - 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 5120* - 5120 - - - - - Total allocated inode limit: 0, total allocated block limit: 5120 Delete files... Wait for unlink objects finished... 
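Test 19 guards against b14790: re-issuing setquota for an ID must not zero the per-OST limits already granted to it. The run above sets a 5 MB hard limit, fills it, updates the limits, and confirms that a further write still fails immediately with EDQUOT at exactly 5120 KB. Roughly:

  lfs setquota -u quota_usr -B 5M /mnt/lustre        # 5 MB block hard limit
  dd if=/dev/zero of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota \
      bs=1M count=6                                  # stops at 5 MB with EDQUOT
  lfs setquota -u quota_usr -B 5M /mnt/lustre        # update the admin limits again
  dd if=/dev/zero of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota \
      bs=1M count=6 seek=6                           # still EDQUOT: granted limits survived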
Waiting for MDT destroys to complete PASS 19 (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 20: Test if setquota specifiers work properly (b15754) ========================================================== 05:12:03 (1713431523) PASS 20 (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 21: Setquota while writing & deleting (b16053) ========================================================== 05:12:10 (1713431530) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Set limit(block:10G; file:1000000) for user: quota_usr Set limit(block:10G; file:1000000) for group: quota_usr lfs setquota: warning: block hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Set limit(block:10G; file:) for project: 1000 lfs setquota: warning: block hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Set quota for 1 times Set quota for 2 times Set quota for 3 times Set quota for 4 times Set quota for 5 times Set quota for 6 times Set quota for 7 times Set quota for 8 times Set quota for 9 times Set quota for 10 times Set quota for 11 times Set quota for 12 times Set quota for 13 times Set quota for 14 times Set quota for 15 times Set quota for 16 times Set quota for 17 times Set quota for 18 times Set quota for 19 times Set quota for 20 times Set quota for 21 times (dd_pid=27390, time=0)successful (dd_pid=27391, time=3)successful Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 21 (46s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 22: enable/disable quota by 'lctl conf_param/set_param -P' ========================================================== 05:12:57 (1713431577) Set both mdt & ost quota type as ug Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Restart... 
Stopping clients: oleg432-client.virtnet /mnt/lustre (opts:) Stopping client oleg432-client.virtnet /mnt/lustre opts: Stopping clients: oleg432-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg432-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg432-server sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/nbd0 on / type ext4 (ro,relatime,stripe=32,data=ordered) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=26,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13045) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) mqueue on /dev/mqueue type mqueue (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) none on /mnt type ramfs (rw,relatime) none on /var/lib/stateless/writable type tmpfs (rw,relatime) /dev/vda on /home/green/git/lustre-release type squashfs (ro,relatime) none on /var/cache/man type tmpfs (rw,relatime) none on /var/log type tmpfs (rw,relatime) none on /var/lib/dbus type tmpfs (rw,relatime) none on /tmp type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) none on /var/tmp type tmpfs (rw,relatime) none on /var/lib/NetworkManager type tmpfs (rw,relatime) none on /var/lib/systemd/random-seed type tmpfs (rw,relatime) none on /var/spool type tmpfs (rw,relatime) none on /var/lib/nfs type tmpfs (rw,relatime) none on /var/lib/gssproxy type tmpfs (rw,relatime) none on /var/lib/logrotate type tmpfs (rw,relatime) none on /etc type tmpfs (rw,relatime) none on /var/lib/rsyslog type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) 192.168.200.253:/exports/state/oleg432-client.virtnet on /var/lib/stateless/state type nfs4 
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.32,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg432-client.virtnet/boot on /boot type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.32,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg432-client.virtnet/etc/kdump.conf on /etc/kdump.conf type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.32,local_lock=none,addr=192.168.200.253) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) 192.168.200.253:/exports/testreports/42099/testresults/sanity-quota-ldiskfs-DNE-centos7_x86_64-centos7_x86_64 on /tmp/testlogs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.32,local_lock=none,addr=192.168.200.253) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=382032k,mode=700) /dev/vda on /usr/sbin/mount.lustre type squashfs (ro,relatime) Checking servers environments Checking clients oleg432-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg432-server' oleg432-server: oleg432-server.virtnet: executing load_modules_local oleg432-server: Loading modules from /home/green/git/lustre-release/lustre oleg432-server: detected 4 online CPUs by sysfs oleg432-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Starting client oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Started clients oleg432-client.virtnet: 192.168.204.132@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff88009f872000.idle_timeout=debug osc.lustre-OST0001-osc-ffff88009f872000.idle_timeout=debug Verify if quota is enabled Set both mdt & ost quota 
type as none Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' Restart... Stopping clients: oleg432-client.virtnet /mnt/lustre (opts:) Stopping client oleg432-client.virtnet /mnt/lustre opts: Stopping clients: oleg432-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg432-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg432-server sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/nbd0 on / type ext4 (ro,relatime,stripe=32,data=ordered) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=26,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13045) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) mqueue on /dev/mqueue type mqueue (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) none on /mnt type ramfs (rw,relatime) none on /var/lib/stateless/writable type tmpfs (rw,relatime) /dev/vda on /home/green/git/lustre-release type squashfs (ro,relatime) none on /var/cache/man type tmpfs (rw,relatime) none on /var/log type tmpfs (rw,relatime) none on /var/lib/dbus type tmpfs (rw,relatime) none on /tmp type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) none on /var/tmp type tmpfs (rw,relatime) none on /var/lib/NetworkManager type tmpfs (rw,relatime) none on /var/lib/systemd/random-seed type tmpfs (rw,relatime) none on /var/spool type tmpfs (rw,relatime) none on /var/lib/nfs type tmpfs (rw,relatime) none on /var/lib/gssproxy type tmpfs (rw,relatime) none on /var/lib/logrotate type tmpfs (rw,relatime) none on /etc type tmpfs (rw,relatime) none on /var/lib/rsyslog type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) 
192.168.200.253:/exports/state/oleg432-client.virtnet on /var/lib/stateless/state type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.32,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg432-client.virtnet/boot on /boot type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.32,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg432-client.virtnet/etc/kdump.conf on /etc/kdump.conf type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.32,local_lock=none,addr=192.168.200.253) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) 192.168.200.253:/exports/testreports/42099/testresults/sanity-quota-ldiskfs-DNE-centos7_x86_64-centos7_x86_64 on /tmp/testlogs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.32,local_lock=none,addr=192.168.200.253) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=382032k,mode=700) /dev/vda on /usr/sbin/mount.lustre type squashfs (ro,relatime) Checking servers environments Checking clients oleg432-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg432-server' oleg432-server: oleg432-server.virtnet: executing load_modules_local oleg432-server: Loading modules from /home/green/git/lustre-release/lustre oleg432-server: detected 4 online CPUs by sysfs oleg432-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Starting client oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Started clients oleg432-client.virtnet: 192.168.204.132@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a6a72800.idle_timeout=debug 
osc.lustre-OST0001-osc-ffff8800a6a72800.idle_timeout=debug Verify if quota is disabled PASS 22 (128s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 23: Quota should be honored with directIO (b16125) ========================================================== 05:15:06 (1713431706) OST0_SIZE: 3605408 required: 6144 run for 4MB test file Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' User quota (limit: 4 MB) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 4096 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Step1: trigger EDQUOT with O_DIRECT Write half of file running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=2] [oflag=direct] 2+0 records in 2+0 records out 2097152 bytes (2.1 MB) copied, 0.0504491 s, 41.6 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=3] [seek=2] [oflag=direct] [conv=notrunc] dd: error writing '/mnt/lustre/d23.sanity-quota/f23.sanity-quota': Disk quota exceeded 2+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0442141 s, 23.7 MB/s Step1: done Step2: rewrite should succeed running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=1] [oflag=direct] [conv=notrunc] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0258732 s, 40.5 MB/s Step2: done Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 OST0_SIZE: 3605408 required: 61440 run for 40MB test file Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (limit: 40 MB) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 40960 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Step1: trigger EDQUOT with O_DIRECT Write half of file running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=20] [oflag=direct] 20+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 0.434034 s, 48.3 MB/s Write out of block quota ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=21] [seek=20] [oflag=direct] [conv=notrunc] dd: error writing '/mnt/lustre/d23.sanity-quota/f23.sanity-quota': Disk quota exceeded 20+0 records in 19+0 records out 19922944 bytes (20 MB) copied, 0.398685 s, 50.0 MB/s Step1: done Step2: rewrite should succeed running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d23.sanity-quota/f23.sanity-quota] [count=1] [oflag=direct] [conv=notrunc] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0249779 s, 42.0 MB/s Step2: done Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 23 (41s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 24: lfs draws an asterisk when limit is reached (b16646) ========================================================== 05:15:49 (1713431749) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Set user quota (limit: 5M) running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d24.sanity-quota/f24.sanity-quota] [count=6] 6+0 records in 6+0 records out 6291456 bytes (6.3 MB) copied, 0.138651 s, 45.4 MB/s /mnt/lustre 6144* 0 5120 - 1 0 0 - 6144* - 6144 - - - - - Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 24 (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 25: check indexes versions ========== 05:16:05 (1713431765) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.165242 s, 31.7 MB/s Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.128987 s, 40.6 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0427239 s, 0.0 kB/s Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 25 (33s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27a: lfs quota/setquota should handle wrong arguments (b19612) ========================================================== 05:16:40 (1713431800) lfs quota: name and mount point must be specified Display disk usage and limits. usage: quota [-q] [-v] [-h] [-o OBD_UUID|-i MDT_IDX|-I OST_IDX] [{-u|-g|-p} UNAME|UID|GNAME|GID|PROJID] [--pool ] quota -t <-u|-g|-p> [--pool ] quota [-q] [-v] [h] {-U|-G|-P} [--pool ] quota -a {-u|-g|-p} [-s start_qid] [-e end_qid] lfs setquota: either -u or -g must be specified setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM PASS 27a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27b: lfs quota/setquota should handle user/group/project ID (b20200) ========================================================== 05:16:43 (1713431803) lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details 
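The repeated warnings above are expected: the quota master grants space to each target in qunit-sized chunks, so a soft or hard limit smaller than the least qunit cannot be spread across targets, and lfs flags it while still applying the limit. Test 27b's real point is that quota and setquota accept raw numeric IDs that have no passwd or group entry, as the tables below show:

  # 1000 KB / 1000-inode limits are below the minimum qunit, so each of
  # these prints the warning above but still takes effect
  lfs setquota -u 60000 -b 1000 -B 1000 -i 1000 -I 1000 /mnt/lustre
  lfs quota -u 60000 /mnt/lustre                     # works without a passwd entry for 60000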
Disk quotas for usr 60000 (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp 60000 (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 PASS 27b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27c: lfs quota should support human-readable output ========================================================== 05:16:48 (1713431808) PASS 27c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27d: lfs setquota should support fraction block limit ========================================================== 05:16:52 (1713431812) PASS 27d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 30: Hard limit updates should not reset grace times ========================================================== 05:16:56 (1713431816) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.218649 s, 38.4 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8192* 4096 0 expired 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 8192 - 9264 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 9264 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [conv=notrunc] [oflag=append] [count=4] dd: error writing '/mnt/lustre/d30.sanity-quota/f30.sanity-quota': Disk quota exceeded 2+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0691302 s, 15.2 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 9216* 4096 0 expired 1 0 0 - lustre-MDT0000_UUID 0 - 0 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 9216 - 9264 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 9264 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [conv=notrunc] [oflag=append] [count=4] dd: error writing '/mnt/lustre/d30.sanity-quota/f30.sanity-quota': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.036532 s, 0.0 kB/s Delete 
files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 30 (22s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 33: Basic usage tracking for user & group & project ========================================================== 05:17:19 (1713431839) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write files... lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-0 Iteration 0/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-1 Iteration 1/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-2 Iteration 2/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-3 Iteration 3/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-4 Iteration 4/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-5 Iteration 5/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-6 Iteration 6/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-7 Iteration 7/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-8 Iteration 8/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-9 Iteration 9/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-10 Iteration 10/10 completed Wait for setattr on objects finished... Waiting for MDT destroys to complete Verify disk usage after write Verify inode usage after write Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Verify disk usage after delete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 33 (31s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 34: Usage transfer for user & group & project ========================================================== 05:17:52 (1713431872) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write file... chown the file to user 60000 Wait for setattr on objects finished... Waiting for MDT destroys to complete Verify disk usage for user 60000 chgrp the file to group 60000 Wait for setattr on objects finished... Waiting for MDT destroys to complete Verify disk usage for group 60000 chown the file to user 60001 Wait for setattr on objects finished... Waiting for MDT destroys to complete change_project project id to 1000 lfs project -p 1000 /mnt/lustre/d34.sanity-quota/f34.sanity-quota Wait for setattr on objects finished... Waiting for MDT destroys to complete Verify disk usage for user 60001/60000 and group 60000 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 34 (54s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 35: Usage is still accessible across reboot ========================================================== 05:18:48 (1713431928) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write file... lfs project -p 1000 /mnt/lustre/d35.sanity-quota/f35.sanity-quota Wait for setattr on objects finished... Waiting for MDT destroys to complete Save disk usage before restart User 60000: 2048KB 1 inodes Group 60000: 2048KB 1 inodes Project 1000: 2048KB 1 inodes Restart... 
Stopping clients: oleg432-client.virtnet /mnt/lustre (opts:) Stopping client oleg432-client.virtnet /mnt/lustre opts: Stopping clients: oleg432-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg432-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg432-server Checking servers environments Checking clients oleg432-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg432-server' oleg432-server: oleg432-server.virtnet: executing load_modules_local oleg432-server: Loading modules from /home/green/git/lustre-release/lustre oleg432-server: detected 4 online CPUs by sysfs oleg432-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Starting client oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Started clients oleg432-client.virtnet: 192.168.204.132@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a8da4800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a8da4800.idle_timeout=debug affected facets: Verify disk usage after restart Append to the same file... Verify space usage is increased Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 35 (86s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 37: Quota accounted properly for file created by 'lfs setstripe' ========================================================== 05:20:16 (1713432016) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0618419 s, 17.0 MB/s Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
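Test 37 covers accounting for files whose OST objects are pre-created by lfs setstripe rather than on first write; the lone 1 MB dd above is that write. A rough reconstruction under stated assumptions (the path follows the log's dNN naming pattern, and the ownership step is assumed rather than shown in the log):

  lfs setstripe -c 1 /mnt/lustre/d37.sanity-quota/f37.sanity-quota   # allocate the layout up front
  chown quota_usr /mnt/lustre/d37.sanity-quota/f37.sanity-quota      # assumed: charge the quota user
  dd if=/dev/zero of=/mnt/lustre/d37.sanity-quota/f37.sanity-quota bs=1M count=1
  lfs quota -u quota_usr /mnt/lustre                                 # expect the 1024 KB accounted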
Waiting for MDT destroys to complete PASS 37 (20s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 38: Quota accounting iterator doesn't skip id entries ========================================================== 05:20:38 (1713432038) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Create 10000 files... Found 10000 id entries Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 38 (292s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 39: Project ID interface works correctly ========================================================== 05:25:31 (1713432331) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -p 1024 /mnt/lustre/d39.sanity-quota/project Stopping clients: oleg432-client.virtnet /mnt/lustre (opts:) Stopping client oleg432-client.virtnet /mnt/lustre opts: Stopping clients: oleg432-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg432-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg432-server sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/nbd0 on / type ext4 (ro,relatime,stripe=32,data=ordered) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=26,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13045) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) mqueue on /dev/mqueue type mqueue (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) none on /mnt type ramfs (rw,relatime) none on /var/lib/stateless/writable type tmpfs 
(rw,relatime) /dev/vda on /home/green/git/lustre-release type squashfs (ro,relatime) none on /var/cache/man type tmpfs (rw,relatime) none on /var/log type tmpfs (rw,relatime) none on /var/lib/dbus type tmpfs (rw,relatime) none on /tmp type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) none on /var/tmp type tmpfs (rw,relatime) none on /var/lib/NetworkManager type tmpfs (rw,relatime) none on /var/lib/systemd/random-seed type tmpfs (rw,relatime) none on /var/spool type tmpfs (rw,relatime) none on /var/lib/nfs type tmpfs (rw,relatime) none on /var/lib/gssproxy type tmpfs (rw,relatime) none on /var/lib/logrotate type tmpfs (rw,relatime) none on /etc type tmpfs (rw,relatime) none on /var/lib/rsyslog type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) 192.168.200.253:/exports/state/oleg432-client.virtnet on /var/lib/stateless/state type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.32,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg432-client.virtnet/boot on /boot type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.32,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg432-client.virtnet/etc/kdump.conf on /etc/kdump.conf type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.32,local_lock=none,addr=192.168.200.253) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) 192.168.200.253:/exports/testreports/42099/testresults/sanity-quota-ldiskfs-DNE-centos7_x86_64-centos7_x86_64 on /tmp/testlogs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.32,local_lock=none,addr=192.168.200.253) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=382032k,mode=700) /dev/vda on /usr/sbin/mount.lustre type squashfs (ro,relatime) Checking servers environments Checking clients oleg432-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg432-server' oleg432-server: oleg432-server.virtnet: executing load_modules_local oleg432-server: Loading modules from /home/green/git/lustre-release/lustre oleg432-server: detected 4 online CPUs by sysfs oleg432-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o 
localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Starting client oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Started clients oleg432-client.virtnet: 192.168.204.132@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff88012d442000.idle_timeout=debug osc.lustre-OST0001-osc-ffff88012d442000.idle_timeout=debug Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 39 (69s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40a: Hard link across different project ID ========================================================== 05:26:42 (1713432402) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40a.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d40a.sanity-quota/dir2 ln: failed to create hard link '/mnt/lustre/d40a.sanity-quota/dir2/1_link' => '/mnt/lustre/d40a.sanity-quota/dir1/1': Invalid cross-device link Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 40a (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40b: Mv across different project ID ========================================================== 05:26:54 (1713432414) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40b.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d40b.sanity-quota/dir2 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 40b (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40c: Remote child Dir inherit project quota properly ========================================================== 05:27:07 (1713432427) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40c.sanity-quota/dir Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 40c (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40d: Stripe Directory inherit project quota properly ========================================================== 05:27:24 (1713432444) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1000 /mnt/lustre/d40d.sanity-quota/dir Delete files... Wait for unlink objects finished... 
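[Note: tests 40a-40d above pin down project-boundary semantics: once a directory carries a project ID with the inherit flag (`lfs project -sp`), a hard link into a directory with a different project ID fails with EXDEV, and new children, including remote and striped subdirectories, inherit the parent's ID. A by-hand sketch of the 40a case, with illustrative paths:
    mkdir /mnt/lustre/p1 /mnt/lustre/p2
    lfs project -sp 1 /mnt/lustre/p1            # projid 1, inherit flag set
    lfs project -sp 2 /mnt/lustre/p2            # projid 2
    touch /mnt/lustre/p1/f
    ln /mnt/lustre/p1/f /mnt/lustre/p2/f.link   # fails: Invalid cross-device link (EXDEV)]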
Waiting for MDT destroys to complete PASS 40d (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 41: df should return projid-specific values ========================================================== 05:27:40 (1713432460) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' striped dir -i1 -c2 -H all_char /mnt/lustre/d41.sanity-quota/dir lfs project -sp 41000 /mnt/lustre/d41.sanity-quota/dir == global statfs: /mnt/lustre == Filesystem 1024-blocks Used Available Capacity Mounted on 192.168.204.132@tcp:/lustre 7666232 4836 7209204 1% /mnt/lustre Filesystem Inodes IUsed IFree IUse% Mounted on 192.168.204.132@tcp:/lustre 523966 598 523368 1% /mnt/lustre Disk quotas for prj 41000 (pid 41000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre/d41.sanity-quota/dir 12 0 102400 - 3 0 4096 - == project statfs (prjid=41000): /mnt/lustre/d41.sanity-quota/dir == Filesystem 1024-blocks Used Available Capacity Mounted on 192.168.204.132@tcp:/lustre 102400 12 102388 1% /mnt/lustre Filesystem Inodes IUsed IFree IUse% Mounted on 192.168.204.132@tcp:/lustre 4096 3 4093 1% /mnt/lustre llite.lustre-ffff88012d442000.statfs_project=0 llite.lustre-ffff88012d442000.statfs_project=1 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 41 (21s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 48: lfs quota --delete should delete quota project ID ========================================================== 05:28:02 (1713432482) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0244495 s, 42.9 MB/s - id: 60000 osd-ldiskfs - id: 60000 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0288132 s, 36.4 MB/s - id: 60000 cat: /proc/fs/lustre/osd-ldiskfs/lustre-OST0000/quota_slave/limit_user: No such file or directory - id: 60000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0254797 s, 41.2 MB/s - id: 60000 osd-ldiskfs - id: 60000 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0271433 s, 38.6 MB/s - id: 60000 cat: /proc/fs/lustre/osd-ldiskfs/lustre-OST0000/quota_slave/limit_group: No such file or directory - id: 60000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0264115 s, 39.7 MB/s - id: 10000 osd-ldiskfs - id: 10000 pdsh@oleg432-client: 
oleg432-server: ssh exited with exit code 1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0277155 s, 37.8 MB/s - id: 10000 cat: /proc/fs/lustre/osd-ldiskfs/lustre-OST0000/quota_slave/limit_project: No such file or directory - id: 10000 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 48 (34s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 49: lfs quota -a prints the quota usage for all quota IDs ========================================================== 05:28:38 (1713432518) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 setquota for users and groups fail_loc=0xa09 lfs setquota: 1000 / 27 seconds fail_loc=0 903 0 0 102400 - 0 0 10240 - 904 0 0 102400 - 0 0 10240 - 905 0 0 102400 - 0 0 10240 - 906 0 0 102400 - 0 0 10240 - 907 0 0 102400 - 0 0 10240 - 908 0 0 102400 - 0 0 10240 - 909 0 0 102400 - 0 0 10240 - 910 0 0 102400 - 0 0 10240 - 911 0 0 102400 - 0 0 10240 - 912 0 0 102400 - 0 0 10240 - 913 0 0 102400 - 0 0 10240 - 914 0 0 102400 - 0 0 10240 - 915 0 0 102400 - 0 0 10240 - 916 0 0 102400 - 0 0 10240 - 917 0 0 102400 - 0 0 10240 - 918 0 0 102400 - 0 0 10240 - 919 0 0 102400 - 0 0 10240 - 920 0 0 102400 - 0 0 10240 - 921 0 0 102400 - 0 0 10240 - 922 0 0 102400 - 0 0 10240 - 923 0 0 102400 - 0 0 10240 - 924 0 0 102400 - 0 0 10240 - 925 0 0 102400 - 0 0 10240 - 926 0 0 102400 - 0 0 10240 - 927 0 0 102400 - 0 0 10240 - 928 0 0 102400 - 0 0 10240 - 929 0 0 102400 - 0 0 10240 - 930 0 0 102400 - 0 0 10240 - 931 0 0 102400 - 0 0 10240 - 932 0 0 102400 - 0 0 10240 - 933 0 0 102400 - 0 0 10240 - 934 0 0 102400 - 0 0 10240 - 935 0 0 102400 - 0 0 10240 - 936 0 0 102400 - 0 0 10240 - 937 0 0 102400 - 0 0 10240 - 938 0 0 102400 - 0 0 10240 - 939 0 0 102400 - 0 0 10240 - 940 0 0 102400 - 0 0 10240 - 941 0 0 102400 - 0 0 10240 - 942 0 0 102400 - 0 0 10240 - 943 0 0 102400 - 0 0 10240 - 944 0 0 102400 - 0 0 10240 - 945 0 0 102400 - 0 0 10240 - 946 0 0 102400 - 0 0 10240 - 947 0 0 102400 - 0 0 10240 - 948 0 0 102400 - 0 0 10240 - 949 0 0 102400 - 0 0 10240 - 950 0 0 102400 - 0 0 10240 - 951 0 0 102400 - 0 0 10240 - 952 0 0 102400 - 0 0 10240 - 953 0 0 102400 - 0 0 10240 - 954 0 0 102400 - 0 0 10240 - 955 0 0 102400 - 0 0 10240 - 956 0 0 102400 - 0 0 10240 - 957 0 0 102400 - 0 0 10240 - 958 0 0 102400 - 0 0 10240 - 959 0 0 102400 - 0 0 10240 - 960 0 0 102400 - 0 0 10240 - 961 0 0 102400 - 0 0 10240 - 962 0 0 102400 - 0 0 10240 - 963 0 0 102400 - 0 0 10240 - 964 0 0 102400 - 0 0 10240 - 965 0 0 102400 - 0 0 10240 - 966 0 0 102400 - 0 0 10240 - 967 0 0 102400 - 0 0 10240 - 968 0 0 102400 - 0 0 10240 - 969 0 0 102400 - 0 0 10240 - 970 0 0 102400 - 0 0 10240 - 971 0 0 102400 - 0 0 10240 - 972 0 0 102400 - 0 0 10240 - 973 0 0 102400 - 0 0 10240 - 974 0 0 102400 - 0 0 10240 - 975 0 0 102400 - 0 0 10240 - 976 0 0 102400 - 0 0 10240 - 977 0 0 102400 - 0 0 10240 - 978 0 0 102400 - 0 0 10240 - 979 0 0 102400 - 0 0 10240 - 980 0 0 102400 - 0 0 10240 - 981 0 0 102400 - 0 0 10240 - 982 0 0 102400 - 0 0 10240 - 983 0 0 102400 - 0 0 10240 - 984 0 0 102400 - 0 0 10240 - 985 0 0 102400 - 0 0 10240 - 986 0 0 102400 - 0 0 10240 - 987 0 0 102400 - 0 0 10240 - 988 0 0 
102400 - 0 0 10240 - 989 0 0 102400 - 0 0 10240 - 990 0 0 102400 - 0 0 10240 - 991 0 0 102400 - 0 0 10240 - 992 0 0 102400 - 0 0 10240 - 993 0 0 102400 - 0 0 10240 - 994 0 0 102400 - 0 0 10240 - 995 0 0 102400 - 0 0 10240 - 996 0 0 102400 - 0 0 10240 - 997 0 0 102400 - 0 0 10240 - 998 0 0 102400 - 0 0 10240 - polkitd 0 0 102400 - 0 0 10240 - green 0 0 102400 - 0 0 10240 - quota_usr 0 0 0 - 0 [0] [0] - quota_2usr 0 0 0 - 0 0 0 - get all usr quota: 1000 / 0 seconds 903 0 0 204800 - 0 0 20480 - 904 0 0 204800 - 0 0 20480 - 905 0 0 204800 - 0 0 20480 - 906 0 0 204800 - 0 0 20480 - 907 0 0 204800 - 0 0 20480 - 908 0 0 204800 - 0 0 20480 - 909 0 0 204800 - 0 0 20480 - 910 0 0 204800 - 0 0 20480 - 911 0 0 204800 - 0 0 20480 - 912 0 0 204800 - 0 0 20480 - 913 0 0 204800 - 0 0 20480 - 914 0 0 204800 - 0 0 20480 - 915 0 0 204800 - 0 0 20480 - 916 0 0 204800 - 0 0 20480 - 917 0 0 204800 - 0 0 20480 - 918 0 0 204800 - 0 0 20480 - 919 0 0 204800 - 0 0 20480 - 920 0 0 204800 - 0 0 20480 - 921 0 0 204800 - 0 0 20480 - 922 0 0 204800 - 0 0 20480 - 923 0 0 204800 - 0 0 20480 - 924 0 0 204800 - 0 0 20480 - 925 0 0 204800 - 0 0 20480 - 926 0 0 204800 - 0 0 20480 - 927 0 0 204800 - 0 0 20480 - 928 0 0 204800 - 0 0 20480 - 929 0 0 204800 - 0 0 20480 - 930 0 0 204800 - 0 0 20480 - 931 0 0 204800 - 0 0 20480 - 932 0 0 204800 - 0 0 20480 - 933 0 0 204800 - 0 0 20480 - 934 0 0 204800 - 0 0 20480 - 935 0 0 204800 - 0 0 20480 - 936 0 0 204800 - 0 0 20480 - 937 0 0 204800 - 0 0 20480 - 938 0 0 204800 - 0 0 20480 - 939 0 0 204800 - 0 0 20480 - 940 0 0 204800 - 0 0 20480 - 941 0 0 204800 - 0 0 20480 - 942 0 0 204800 - 0 0 20480 - 943 0 0 204800 - 0 0 20480 - 944 0 0 204800 - 0 0 20480 - 945 0 0 204800 - 0 0 20480 - 946 0 0 204800 - 0 0 20480 - 947 0 0 204800 - 0 0 20480 - 948 0 0 204800 - 0 0 20480 - 949 0 0 204800 - 0 0 20480 - 950 0 0 204800 - 0 0 20480 - 951 0 0 204800 - 0 0 20480 - 952 0 0 204800 - 0 0 20480 - 953 0 0 204800 - 0 0 20480 - 954 0 0 204800 - 0 0 20480 - 955 0 0 204800 - 0 0 20480 - 956 0 0 204800 - 0 0 20480 - 957 0 0 204800 - 0 0 20480 - 958 0 0 204800 - 0 0 20480 - 959 0 0 204800 - 0 0 20480 - 960 0 0 204800 - 0 0 20480 - 961 0 0 204800 - 0 0 20480 - 962 0 0 204800 - 0 0 20480 - 963 0 0 204800 - 0 0 20480 - 964 0 0 204800 - 0 0 20480 - 965 0 0 204800 - 0 0 20480 - 966 0 0 204800 - 0 0 20480 - 967 0 0 204800 - 0 0 20480 - 968 0 0 204800 - 0 0 20480 - 969 0 0 204800 - 0 0 20480 - 970 0 0 204800 - 0 0 20480 - 971 0 0 204800 - 0 0 20480 - 972 0 0 204800 - 0 0 20480 - 973 0 0 204800 - 0 0 20480 - 974 0 0 204800 - 0 0 20480 - 975 0 0 204800 - 0 0 20480 - 976 0 0 204800 - 0 0 20480 - 977 0 0 204800 - 0 0 20480 - 978 0 0 204800 - 0 0 20480 - 979 0 0 204800 - 0 0 20480 - 980 0 0 204800 - 0 0 20480 - 981 0 0 204800 - 0 0 20480 - 982 0 0 204800 - 0 0 20480 - 983 0 0 204800 - 0 0 20480 - 984 0 0 204800 - 0 0 20480 - 985 0 0 204800 - 0 0 20480 - 986 0 0 204800 - 0 0 20480 - 987 0 0 204800 - 0 0 20480 - 988 0 0 204800 - 0 0 20480 - 989 0 0 204800 - 0 0 20480 - 990 0 0 204800 - 0 0 20480 - 991 0 0 204800 - 0 0 20480 - 992 0 0 204800 - 0 0 20480 - 993 0 0 204800 - 0 0 20480 - 994 0 0 204800 - 0 0 20480 - systemd-network 0 0 204800 - 0 0 20480 - systemd-bus-proxy 0 0 204800 - 0 0 20480 - input 0 0 204800 - 0 0 20480 - polkitd 0 0 204800 - 0 0 20480 - ssh_keys 0 0 204800 - 0 0 20480 - green 0 0 204800 - 0 0 20480 - quota_usr 0 0 0 - 0 [0] [0] - quota_2usr 0 0 0 - 0 0 0 - get all grp quota: 1000 / 0 seconds Create 991 files... 
total: 991 open/close in 7.60 seconds: 130.32 ops/second 951 4 0 102400 - 1 0 10240 - 952 4 0 102400 - 1 0 10240 - 953 4 0 102400 - 1 0 10240 - 954 4 0 102400 - 1 0 10240 - 955 4 0 102400 - 1 0 10240 - 956 4 0 102400 - 1 0 10240 - 957 4 0 102400 - 1 0 10240 - 958 4 0 102400 - 1 0 10240 - 959 4 0 102400 - 1 0 10240 - 960 4 0 102400 - 1 0 10240 - 961 4 0 102400 - 1 0 10240 - 962 4 0 102400 - 1 0 10240 - 963 4 0 102400 - 1 0 10240 - 964 4 0 102400 - 1 0 10240 - 965 4 0 102400 - 1 0 10240 - 966 4 0 102400 - 1 0 10240 - 967 4 0 102400 - 1 0 10240 - 968 4 0 102400 - 1 0 10240 - 969 4 0 102400 - 1 0 10240 - 970 4 0 102400 - 1 0 10240 - 971 4 0 102400 - 1 0 10240 - 972 4 0 102400 - 1 0 10240 - 973 4 0 102400 - 1 0 10240 - 974 4 0 102400 - 1 0 10240 - 975 4 0 102400 - 1 0 10240 - 976 4 0 102400 - 1 0 10240 - 977 4 0 102400 - 1 0 10240 - 978 4 0 102400 - 1 0 10240 - 979 4 0 102400 - 1 0 10240 - 980 4 0 102400 - 1 0 10240 - 981 4 0 102400 - 1 0 10240 - 982 4 0 102400 - 1 0 10240 - 983 4 0 102400 - 1 0 10240 - 984 4 0 102400 - 1 0 10240 - 985 4 0 102400 - 1 0 10240 - 986 4 0 102400 - 1 0 10240 - 987 4 0 102400 - 1 0 10240 - 988 4 0 102400 - 1 0 10240 - 989 4 0 102400 - 1 0 10240 - 990 4 0 102400 - 1 0 10240 - 991 4 0 102400 - 1 0 10240 - 992 4 0 102400 - 1 0 10240 - 993 4 0 102400 - 1 0 10240 - 994 4 0 102400 - 1 0 10240 - 995 4 0 102400 - 1 0 10240 - 996 4 0 102400 - 1 0 10240 - 997 4 0 102400 - 1 0 10240 - 998 4 0 102400 - 1 0 10240 - polkitd 4 0 102400 - 1 0 10240 - green 4 0 102400 - 1 0 10240 - time=0, rate=991/0 951 4 0 204800 - 1 0 20480 - 952 4 0 204800 - 1 0 20480 - 953 4 0 204800 - 1 0 20480 - 954 4 0 204800 - 1 0 20480 - 955 4 0 204800 - 1 0 20480 - 956 4 0 204800 - 1 0 20480 - 957 4 0 204800 - 1 0 20480 - 958 4 0 204800 - 1 0 20480 - 959 4 0 204800 - 1 0 20480 - 960 4 0 204800 - 1 0 20480 - 961 4 0 204800 - 1 0 20480 - 962 4 0 204800 - 1 0 20480 - 963 4 0 204800 - 1 0 20480 - 964 4 0 204800 - 1 0 20480 - 965 4 0 204800 - 1 0 20480 - 966 4 0 204800 - 1 0 20480 - 967 4 0 204800 - 1 0 20480 - 968 4 0 204800 - 1 0 20480 - 969 4 0 204800 - 1 0 20480 - 970 4 0 204800 - 1 0 20480 - 971 4 0 204800 - 1 0 20480 - 972 4 0 204800 - 1 0 20480 - 973 4 0 204800 - 1 0 20480 - 974 4 0 204800 - 1 0 20480 - 975 4 0 204800 - 1 0 20480 - 976 4 0 204800 - 1 0 20480 - 977 4 0 204800 - 1 0 20480 - 978 4 0 204800 - 1 0 20480 - 979 4 0 204800 - 1 0 20480 - 980 4 0 204800 - 1 0 20480 - 981 4 0 204800 - 1 0 20480 - 982 4 0 204800 - 1 0 20480 - 983 4 0 204800 - 1 0 20480 - 984 4 0 204800 - 1 0 20480 - 985 4 0 204800 - 1 0 20480 - 986 4 0 204800 - 1 0 20480 - 987 4 0 204800 - 1 0 20480 - 988 4 0 204800 - 1 0 20480 - 989 4 0 204800 - 1 0 20480 - 990 4 0 204800 - 1 0 20480 - 991 4 0 204800 - 1 0 20480 - 992 4 0 204800 - 1 0 20480 - 993 4 0 204800 - 1 0 20480 - 994 4 0 204800 - 1 0 20480 - systemd-network 4 0 204800 - 1 0 20480 - systemd-bus-proxy 4 0 204800 - 1 0 20480 - input 4 0 204800 - 1 0 20480 - polkitd 4 0 204800 - 1 0 20480 - ssh_keys 4 0 204800 - 1 0 20480 - green 4 0 204800 - 1 0 20480 - time=0, rate=991/0 - unlinked 0 (time 1713432570 ; total 0 ; last 0) total: 991 unlinks in 1 seconds: 991.000000 unlinks/second Create 991 files... 
total: 991 open/close in 8.34 seconds: 118.82 ops/second 951 4 0 102400 - 1 0 10240 - 952 4 0 102400 - 1 0 10240 - 953 4 0 102400 - 1 0 10240 - 954 4 0 102400 - 1 0 10240 - 955 4 0 102400 - 1 0 10240 - 956 4 0 102400 - 1 0 10240 - 957 4 0 102400 - 1 0 10240 - 958 4 0 102400 - 1 0 10240 - 959 4 0 102400 - 1 0 10240 - 960 4 0 102400 - 1 0 10240 - 961 4 0 102400 - 1 0 10240 - 962 4 0 102400 - 1 0 10240 - 963 4 0 102400 - 1 0 10240 - 964 4 0 102400 - 1 0 10240 - 965 4 0 102400 - 1 0 10240 - 966 4 0 102400 - 1 0 10240 - 967 4 0 102400 - 1 0 10240 - 968 4 0 102400 - 1 0 10240 - 969 4 0 102400 - 1 0 10240 - 970 4 0 102400 - 1 0 10240 - 971 4 0 102400 - 1 0 10240 - 972 4 0 102400 - 1 0 10240 - 973 4 0 102400 - 1 0 10240 - 974 4 0 102400 - 1 0 10240 - 975 4 0 102400 - 1 0 10240 - 976 4 0 102400 - 1 0 10240 - 977 4 0 102400 - 1 0 10240 - 978 4 0 102400 - 1 0 10240 - 979 4 0 102400 - 1 0 10240 - 980 4 0 102400 - 1 0 10240 - 981 4 0 102400 - 1 0 10240 - 982 4 0 102400 - 1 0 10240 - 983 4 0 102400 - 1 0 10240 - 984 4 0 102400 - 1 0 10240 - 985 4 0 102400 - 1 0 10240 - 986 4 0 102400 - 1 0 10240 - 987 4 0 102400 - 1 0 10240 - 988 4 0 102400 - 1 0 10240 - 989 4 0 102400 - 1 0 10240 - 990 4 0 102400 - 1 0 10240 - 991 4 0 102400 - 1 0 10240 - 992 4 0 102400 - 1 0 10240 - 993 4 0 102400 - 1 0 10240 - 994 4 0 102400 - 1 0 10240 - 995 4 0 102400 - 1 0 10240 - 996 4 0 102400 - 1 0 10240 - 997 4 0 102400 - 1 0 10240 - 998 4 0 102400 - 1 0 10240 - polkitd 4 0 102400 - 1 0 10240 - green 4 0 102400 - 1 0 10240 - time=0, rate=991/0 951 4 0 204800 - 1 0 20480 - 952 4 0 204800 - 1 0 20480 - 953 4 0 204800 - 1 0 20480 - 954 4 0 204800 - 1 0 20480 - 955 4 0 204800 - 1 0 20480 - 956 4 0 204800 - 1 0 20480 - 957 4 0 204800 - 1 0 20480 - 958 4 0 204800 - 1 0 20480 - 959 4 0 204800 - 1 0 20480 - 960 4 0 204800 - 1 0 20480 - 961 4 0 204800 - 1 0 20480 - 962 4 0 204800 - 1 0 20480 - 963 4 0 204800 - 1 0 20480 - 964 4 0 204800 - 1 0 20480 - 965 4 0 204800 - 1 0 20480 - 966 4 0 204800 - 1 0 20480 - 967 4 0 204800 - 1 0 20480 - 968 4 0 204800 - 1 0 20480 - 969 4 0 204800 - 1 0 20480 - 970 4 0 204800 - 1 0 20480 - 971 4 0 204800 - 1 0 20480 - 972 4 0 204800 - 1 0 20480 - 973 4 0 204800 - 1 0 20480 - 974 4 0 204800 - 1 0 20480 - 975 4 0 204800 - 1 0 20480 - 976 4 0 204800 - 1 0 20480 - 977 4 0 204800 - 1 0 20480 - 978 4 0 204800 - 1 0 20480 - 979 4 0 204800 - 1 0 20480 - 980 4 0 204800 - 1 0 20480 - 981 4 0 204800 - 1 0 20480 - 982 4 0 204800 - 1 0 20480 - 983 4 0 204800 - 1 0 20480 - 984 4 0 204800 - 1 0 20480 - 985 4 0 204800 - 1 0 20480 - 986 4 0 204800 - 1 0 20480 - 987 4 0 204800 - 1 0 20480 - 988 4 0 204800 - 1 0 20480 - 989 4 0 204800 - 1 0 20480 - 990 4 0 204800 - 1 0 20480 - 991 4 0 204800 - 1 0 20480 - 992 4 0 204800 - 1 0 20480 - 993 4 0 204800 - 1 0 20480 - 994 4 0 204800 - 1 0 20480 - systemd-network 4 0 204800 - 1 0 20480 - systemd-bus-proxy 4 0 204800 - 1 0 20480 - input 4 0 204800 - 1 0 20480 - polkitd 4 0 204800 - 1 0 20480 - ssh_keys 4 0 204800 - 1 0 20480 - green 4 0 204800 - 1 0 20480 - time=0, rate=991/0 - unlinked 0 (time 1713432597 ; total 0 ; last 0) total: 991 unlinks in 3 seconds: 330.333344 unlinks/second fail_loc=0xa08 fail_loc=0 Stopping clients: oleg432-client.virtnet /mnt/lustre (opts:-f) Stopping client oleg432-client.virtnet /mnt/lustre opts:-f Stopping clients: oleg432-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-mds1 (opts:-f) on oleg432-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost2 
(opts:-f) on oleg432-server oleg432-server: oleg432-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg432-server' oleg432-server: oleg432-server.virtnet: executing load_modules_local oleg432-server: Loading modules from /home/green/git/lustre-release/lustre oleg432-server: detected 4 online CPUs by sysfs oleg432-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey Checking servers environments Checking clients oleg432-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory loading modules on: 'oleg432-server' oleg432-server: oleg432-server.virtnet: executing load_modules_local oleg432-server: Loading modules from /home/green/git/lustre-release/lustre oleg432-server: detected 4 online CPUs by sysfs oleg432-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 Starting client: oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Starting client oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Started clients oleg432-client.virtnet: 192.168.204.132@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800b6eae000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800b6eae000.idle_timeout=debug Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
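[Note: test 49's point is throughput of the listing interface: after `lfs setquota` has been run for roughly a thousand IDs (the fail_loc=0xa09/0xa08 settings visible above are the suite's fault-injection knobs), a single invocation reports them all. The shape of the commands, per the test's name; exact option spelling may vary by release, so treat this as a sketch:
    lfs setquota -u 903 -B 100M -I 10240 /mnt/lustre   # hard limits matching the 102400/10240 rows above
    lfs quota -a -u /mnt/lustre                        # one usage/limit row per user ID]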
Waiting for MDT destroys to complete PASS 49 (188s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 50: Test if lfs find --projid works ========================================================== 05:31:48 (1713432708) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d50.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d50.sanity-quota/dir2 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 50 (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 51: Test project accounting with mv/cp ========================================================== 05:32:04 (1713432724) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d51.sanity-quota/dir 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0102207 s, 103 MB/s Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 51 (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 52: Rename normal file across project ID ========================================================== 05:32:24 (1713432744) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 0.655979 s, 160 MB/s Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102404 0 0 - 2 0 0 - Disk quotas for prj 1001 (pid 1001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4 0 0 - 1 0 0 - pid 1001 is using default block quota setting pid 1001 is using default file quota setting rename '/mnt/lustre/d52.sanity-quota/t52_dir1' returned -1: Invalid cross-device link rename directory return 255 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4 0 0 - 1 0 0 - Disk quotas for prj 1001 (pid 1001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102404 0 0 - 2 0 0 - pid 1001 is using default block quota setting pid 1001 is using default file quota setting Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 52 (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 53: Project inherit attribute could be cleared ========================================================== 05:32:45 (1713432765) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -s /mnt/lustre/d53.sanity-quota/dir lfs project -C /mnt/lustre/d53.sanity-quota/dir Delete files... Wait for unlink objects finished... 
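[Note: tests 52 and 53 above round out the project semantics: renaming a plain file across a project boundary succeeds and migrates its usage (the 102404 kB moving from projid 1000 to 1001), renaming a directory fails with EXDEV, and the inherit flag itself can be set and cleared again. The flag operations exercised by test 53; the -d display flag is from lfs-project usage, not this log:
    lfs project -s /mnt/lustre/dir    # set the project-inherit flag
    lfs project -d /mnt/lustre/dir    # display projid and flags
    lfs project -C /mnt/lustre/dir    # clear it (add -k to keep the projid, as test 54 does next)]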
Waiting for MDT destroys to complete PASS 53 (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 54: basic lfs project interface test ========================================================== 05:32:53 (1713432773) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1000 /mnt/lustre/d54.sanity-quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d54.sanity-quota/f54.sanity-quota-0] [100] total: 100 create in 0.11 seconds: 937.27 ops/second lfs project -rCk /mnt/lustre/d54.sanity-quota lfs project -rC /mnt/lustre/d54.sanity-quota - unlinked 0 (time 1713432776 ; total 0 ; last 0) total: 100 unlinks in 0 seconds: inf unlinks/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 54 (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 55: Chgrp should be affected by group quota ========================================================== 05:33:01 (1713432781) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d55.sanity-quota/f55.sanity-quota] [bs=1024] [count=100000] 100000+0 records in 100000+0 records out 102400000 bytes (102 MB) copied, 13.946 s, 7.3 MB/s Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 51200 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60001/60000/60001, groups: [chgrp] [quota_2usr] [/mnt/lustre/d55.sanity-quota/f55.sanity-quota] chgrp: changing group of '/mnt/lustre/d55.sanity-quota/f55.sanity-quota': Disk quota exceeded 0 Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 307200 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60001/60000/60001, groups: [chgrp] [quota_2usr] [/mnt/lustre/d55.sanity-quota/f55.sanity-quota] Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 307200 - 1 0 0 - lustre-MDT0000_UUID 0 - 114688 - 1 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 55 (31s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 56: lfs quota -t should work well === 05:33:34 (1713432814) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... 
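[Note: two separate behaviors close out this stretch: test 55 shows that chgrp is itself quota-checked -- with quota_2usr's block limit at 50 MB the chgrp of a ~100 MB file fails with EDQUOT, and succeeds only after the limit is raised to 300 MB -- and test 56 covers grace-time reporting. For reference, the grace-time interface, with illustrative values in seconds:
    lfs quota -t -g /mnt/lustre                          # show group block/inode grace times
    lfs setquota -t -g -b 604800 -i 604800 /mnt/lustre   # set both to one week]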
Waiting for MDT destroys to complete PASS 56 (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 57: lfs project could tolerate errors ========================================================== 05:33:43 (1713432823) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 57 (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 58: project ID should be kept for new mirrors created by FID ========================================================== 05:34:00 (1713432840) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] test by mirror created with normal file running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=50] [conv=nocreat] [oflag=direct] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.33734 s, 39.2 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=30] [conv=nocreat] [seek=50] [oflag=direct] 30+0 records in 30+0 records out 31457280 bytes (31 MB) copied, 0.863186 s, 36.4 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror mirror: component 131073 not synced: Disk quota exceeded (122) lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror: '/mnt/lustre/d58.sanity-quota/f58.sanity-quota' llapi_mirror_resync_many: Disk quota exceeded. 
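[Note: the failure above is the expected half of test 58: a mirror resync performed by an over-quota owner must stop with EDQUOT, leaving the components unsynced, rather than bypass enforcement, and the same must hold when the resync addresses the file by FID. The operations involved, roughly, with an illustrative FLR file name:
    lfs mirror create -N2 /mnt/lustre/f                  # file with two mirror copies
    sudo -u quota_usr dd if=/dev/zero of=/mnt/lustre/f bs=1M count=30 conv=notrunc
    sudo -u quota_usr lfs mirror resync /mnt/lustre/f    # EDQUOT once the owner is over limit]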
lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) Waiting for MDT destroys to complete test by mirror created with FID running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=50] [conv=nocreat] [oflag=direct] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.59903 s, 32.8 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=30] [conv=nocreat] [seek=50] [oflag=direct] 30+0 records in 30+0 records out 31457280 bytes (31 MB) copied, 0.955782 s, 32.9 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror mirror: component 131073 not synced: Disk quota exceeded (122) lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror: '/mnt/lustre/d58.sanity-quota/f58.sanity-quota' llapi_mirror_resync_many: Disk quota exceeded. lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 58 (53s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 59: lfs project doesn't crash kernel with project disabled ========================================================== 05:34:55 (1713432895) Stopping clients: oleg432-client.virtnet /mnt/lustre (opts:) Stopping client oleg432-client.virtnet /mnt/lustre opts: Stopping clients: oleg432-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg432-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg432-server tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) [client mount table dump identical to the one shown above during test 39] Checking servers environments Checking clients oleg432-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg432-server'
oleg432-server: oleg432-server.virtnet: executing load_modules_local oleg432-server: Loading modules from /home/green/git/lustre-release/lustre oleg432-server: detected 4 online CPUs by sysfs oleg432-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Starting client oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Started clients oleg432-client.virtnet: 192.168.204.132@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a6a74800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a6a74800.idle_timeout=debug Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs: failed to set xattr for '/mnt/lustre/d59.sanity-quota/f59.sanity-quota-0': Operation not supported Stopping clients: oleg432-client.virtnet /mnt/lustre (opts:) Stopping client oleg432-client.virtnet /mnt/lustre opts: Stopping clients: oleg432-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg432-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg432-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg432-server tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) tune2fs 1.46.2.wc5 (26-Mar-2022) [client mount table dump identical to the one shown above during test 39] Checking servers environments Checking clients oleg432-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg432-server' oleg432-server: oleg432-server.virtnet: executing load_modules_local oleg432-server: Loading modules from /home/green/git/lustre-release/lustre oleg432-server: detected 4 online CPUs by sysfs oleg432-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Starting client oleg432-client.virtnet: -o user_xattr,flock oleg432-server@tcp:/lustre /mnt/lustre Started clients oleg432-client.virtnet: 192.168.204.132@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a6a75000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a6a75000.idle_timeout=debug Delete files... Wait for unlink objects finished...
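[Note: what test 59 verified: with the project feature disabled on the ldiskfs targets, `lfs project` fails cleanly with 'Operation not supported' instead of crashing the kernel, after which the feature is restored and the filesystem remounted. The log shows only the bare tune2fs banner lines (one per target); the toggling presumably looks like the following, run server-side on the unmounted devices -- the feature flags are an assumption, not shown in the log:
    tune2fs -O ^project /dev/mapper/mds1_flakey   # disable the project feature
    # ... restart, observe 'lfs project' return EOPNOTSUPP on the client, then:
    tune2fs -O project /dev/mapper/mds1_flakey    # re-enable it]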
Waiting for MDT destroys to complete PASS 59 (137s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 60: Test quota for root with setgid ========================================================== 05:37:14 (1713433034) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' lfs setquota: warning: inode hardlimit '100' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 100 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d60.sanity-quota/f60.sanity-quota] [99] total: 99 create in 0.11 seconds: 904.54 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d60.sanity-quota/foo] touch: cannot touch '/mnt/lustre/d60.sanity-quota/foo': Disk quota exceeded running as uid/gid/euid/egid 0/0/0/0, groups: [touch] [/mnt/lustre/d60.sanity-quota/foo] Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 60 (17s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_61 skipping SLOW test 61 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 62: Project inherit should be only changed by root ========================================================== 05:37:34 (1713433054) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [mkdir] [-p] [/mnt/lustre/d62.sanity-quota/] lfs project -s /mnt/lustre/d62.sanity-quota/ running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [chattr] [-P] [/mnt/lustre/d62.sanity-quota/] chattr: Operation not permitted while setting flags on /mnt/lustre/d62.sanity-quota/ Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 62 (6s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_63 skipping excluded test 63 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 64: lfs project on non dir/files should succeed ========================================================== 05:37:43 (1713433063) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... 
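[Note: tests 60 and 62 above are both privilege checks: a gid-60000 inode hard limit of 100 stops the unprivileged user's 100th create while root's touch still succeeds, and only root may toggle the project-inherit state. A sketch of the test-62 denial, with illustrative paths:
    lfs project -s /mnt/lustre/dir                # root sets the inherit flag
    sudo -u quota_usr chattr -P /mnt/lustre/dir   # non-root clear attempt: Operation not permitted]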
Waiting for MDT destroys to complete PASS 64 (17s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_65 skipping excluded test 65 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 66: nonroot user can not change project state in default ========================================================== 05:38:02 (1713433082) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 mdt.lustre-MDT0000.enable_chprojid_gid=0 mdt.lustre-MDT0001.enable_chprojid_gid=0 lfs project -sp 1000 /mnt/lustre/d66.sanity-quota/foo running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [mkdir] [/mnt/lustre/d66.sanity-quota/foo/foo] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [0] [/mnt/lustre/d66.sanity-quota/foo/foo] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/foo': Operation not permitted running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-C] [/mnt/lustre/d66.sanity-quota/foo/foo] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/foo': Operation not permitted lfs project -C /mnt/lustre/d66.sanity-quota/foo/foo mdt.lustre-MDT0000.enable_chprojid_gid=-1 mdt.lustre-MDT0001.enable_chprojid_gid=-1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [1000] [/mnt/lustre/d66.sanity-quota/foo/foo] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-rC] [/mnt/lustre/d66.sanity-quota/foo/] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [1000] [/mnt/lustre/d66.sanity-quota/foo/bar] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/bar': Operation not permitted lfs project -p 1000 /mnt/lustre/d66.sanity-quota/foo/bar mdt.lustre-MDT0000.enable_chprojid_gid=0 mdt.lustre-MDT0001.enable_chprojid_gid=0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 66 (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 67: quota pools recalculation ======= 05:38:16 (1713433096) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:20 MB) granted 0x0 before write 0 osd-ldiskfs.lustre-OST0000.quota_slave.force_reint=1 osd-ldiskfs.lustre-OST0001.quota_slave.force_reint=1 affected facets: ost1 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg432-server: *.lustre-OST0000.recovery_status status: INACTIVE affected facets: ost2 oleg432-server: oleg432-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg432-server: *.lustre-OST0001.recovery_status status: INACTIVE file /mnt/lustre/d67.sanity-quota/f67.sanity-quota-0 0 /home/green/git/lustre-release/lustre/tests/sanity-quota.sh 1 /mnt/lustre/d67.sanity-quota/f67.sanity-quota-0 2 user 3 10 4 quota_usr Write... 
Thu Apr 18 05:38:25 EDT 2024 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d67.sanity-quota/f67.sanity-quota-0] [count=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.0794885 s, 132 MB/s Thu Apr 18 05:38:25 EDT 2024 Thu Apr 18 05:38:25 EDT 2024 Thu Apr 18 05:38:26 EDT 2024 Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 global granted 11264 qpool1 granted 0 Adding targets to pool oleg432-server: pool_add: lustre-OST0001_UUID is already in pool lustre.qpool1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 17 Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' Updated after 3s: want 'lustre-OST0000_UUID lustre-OST0001_UUID ' got 'lustre-OST0000_UUID lustre-OST0001_UUID ' Granted 11 MB file /mnt/lustre/d67.sanity-quota/f67.sanity-quota-1 0 /home/green/git/lustre-release/lustre/tests/sanity-quota.sh 1 /mnt/lustre/d67.sanity-quota/f67.sanity-quota-1 2 user 3 10 4 quota_2usr Write... Thu Apr 18 05:38:36 EDT 2024 running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d67.sanity-quota/f67.sanity-quota-1] [count=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.0652178 s, 161 MB/s Thu Apr 18 05:38:37 EDT 2024 Thu Apr 18 05:38:37 EDT 2024 Thu Apr 18 05:38:37 EDT 2024 granted_mb 10 file /mnt/lustre/d67.sanity-quota/f67.sanity-quota-2 0 /home/green/git/lustre-release/lustre/tests/sanity-quota.sh 1 /mnt/lustre/d67.sanity-quota/f67.sanity-quota-2 2 user 3 10 4 quota_2usr Write... Thu Apr 18 05:38:39 EDT 2024 running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d67.sanity-quota/f67.sanity-quota-2] [count=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.0803104 s, 131 MB/s Thu Apr 18 05:38:39 EDT 2024 Thu Apr 18 05:38:40 EDT 2024 Thu Apr 18 05:38:41 EDT 2024 /mnt/lustre/d67.sanity-quota/f67.sanity-quota-2 granted_mb 20 Removing lustre-OST0000_UUID from qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Waiting for MDT destroys to complete Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... 
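The pool mechanics test 67 is exercising, as a sketch (pool commands run against the MGS; the --pool option to lfs quota is what produced the 'qpool1 granted' figures above):
  lctl pool_new lustre.qpool1
  lctl pool_add lustre.qpool1 lustre-OST0001          # qpool1 covers one of the two OSTs
  lfs quota -u quota_usr /mnt/lustre                  # global grant (11264 kbytes above)
  lfs quota -u quota_usr --pool qpool1 /mnt/lustre    # grant recalculated over the pool's
                                                      # member OSTs only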
Waiting for MDT destroys to complete PASS 67 (59s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 68: slave number in quota pool changed after each add/remove OST ========================================================== 05:39:17 (1713433157) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 nr result 4 Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Adding targets to pool oleg432-server: pool_add: lustre-OST0001_UUID is already in pool lustre.qpool1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 17 Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' Removing lustre-OST0000_UUID from qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Removing lustre-OST0001_UUID from qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 68 (32s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 69: EDQUOT at one of pools shouldn't affect DOM ========================================================== 05:39:52 (1713433192) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Creating new pool oleg432-server: Pool lustre.qpool1 created Waiting 90s for '' Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 User quota (block hardlimit:200 MB) User quota (block hardlimit:10 MB) running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [oflag=sync] 512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 2.43708 s, 215 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [seek=512] [oflag=sync] 512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 2.79511 s, 188 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0] [count=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.120736 s, 86.8 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0] [count=10] [seek=10] dd: error writing '/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0210853 s, 0.0 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [oflag=sync] 512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 2.33617 s, 224 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [seek=512] [oflag=sync] 512+0 records in 512+0 records out 524288 
bytes (524 kB) copied, 1.88871 s, 278 kB/s Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed Waiting 90s for 'foo' Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 69 (44s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 70a: check lfs setquota/quota with a pool option ========================================================== 05:40:38 (1713433238) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 hard limit 20480 limit 20 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 20480 - 0 0 0 - Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 70a (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 70b: lfs setquota pool works properly ========================================================== 05:40:59 (1713433259) Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed PASS 70b (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 71a: Check PFL with quota pools ===== 05:41:19 (1713433279) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:100 MB) Creating new pool oleg432-server: Pool lustre.qpool1 created Waiting 90s for '' Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Creating new pool oleg432-server: Pool lustre.qpool2 created Waiting 90s for '' Adding targets to pool oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 used 0 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.102171 s, 103 MB/s Write out of block quota ... 
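Test 70a's check in standalone form - a pool-scoped hard limit set and read back (20 MiB = the 20480 kbytes shown above):
  lfs setquota -u quota_usr -B 20M --pool qpool1 /mnt/lustre
  lfs quota -u quota_usr --pool qpool1 /mnt/lustre    # limit column reads 20480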
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=10] [seek=10] dd: error writing '/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0': Disk quota exceeded 8+0 records in 7+0 records out 8343552 bytes (8.3 MB) copied, 0.104223 s, 80.1 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=1] [seek=20] dd: error writing '/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0291281 s, 0.0 kB/s Waiting for MDT destroys to complete running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=10] [seek=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.0989757 s, 106 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=20] [seek=10] 20+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 0.153965 s, 136 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=1] [seek=30] dd: error writing '/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0': No data available 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00440654 s, 0.0 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0] [count=10] [seek=0] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.106782 s, 98.2 MB/s Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed lustre.qpool2 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg432-server: Pool lustre.qpool2 destroyed Delete files... Wait for unlink objects finished... 
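A layout of the kind test 71a writes through - a two-component PFL file whose components sit in different quota pools (the component boundary here is an assumption; the log does not print the layout):
  lfs setstripe -E 10M --pool qpool1 -E -1 --pool qpool2 \
      /mnt/lustre/d71a.sanity-quota/f71a.sanity-quota-0
  # writes below the 10M boundary are charged against qpool1's limit,
  # writes past it against qpool2's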
Waiting for MDT destroys to complete PASS 71a (67s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 71b: Check SEL with quota pools ===== 05:42:28 (1713433348) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:1000 MB) Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Creating new pool oleg432-server: Pool lustre.qpool2 created Adding targets to pool oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 used 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0] [count=128] 128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 0.878025 s, 153 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0] [count=5] [seek=128] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.0603939 s, 86.8 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0] [count=5] [seek=133] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.0555196 s, 94.4 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0] [count=2] [seek=138] dd: error writing '/mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0251416 s, 0.0 kB/s Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed lustre.qpool2 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg432-server: Pool lustre.qpool2 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 71b (48s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 72: lfs quota --pool prints only pool's OSTs ========================================================== 05:43:19 (1713433399) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:50 MB) Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 used 0 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.0685124 s, 76.5 MB/s Write out of block quota ... 
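Test 71b above repeats this with a self-extending layout (SEL); a sketch of such a file, with the sizes assumed:
  lfs setstripe -E 256M -z 64M --pool qpool1 -E -1 --pool qpool2 \
      /mnt/lustre/d71b.sanity-quota/f71b.sanity-quota-0
  # -z sets the SEL extension size: the first component grows in 64M steps and
  # spills to the next component when its pool cannot satisfy the extension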
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.0524465 s, 100 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0243132 s, 0.0 kB/s used 10240 Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 72 (42s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 73a: default limits at OST Pool Quotas ========================================================== 05:44:03 (1713433443) Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 LIMIT=20480 TESTFILE=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0 qdtype=-U qh=-B qid=quota_usr qprjid=1000 qres_type=data qs=-b qtype=-u Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 set to use default quota lfs setquota: '-d' deprecated, use '-D' or '--default' set default quota get default quota Disk default usr quota: Filesystem bquota blimit bgrace iquota ilimit igrace /mnt/lustre 0 0 10 0 0 10 Test not out of quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=10] [oflag=sync] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.714969 s, 14.7 MB/s Test out of quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync] dd: error writing '/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0': Disk quota exceeded 21+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 1.27807 s, 16.4 MB/s Increase default quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync] 40+0 records in 40+0 records out 41943040 bytes (42 MB) copied, 2.32556 s, 18.0 MB/s Set quota to override default quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync] dd: error writing '/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0': Disk quota exceeded 21+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 1.50105 s, 14.0 MB/s Set to use default quota again lfs setquota: '-d' deprecated, use '-D' or '--default' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync] 40+0 records in 40+0 records out 41943040 bytes (42 MB) copied, 2.27612 s, 18.4 MB/s Cleanup Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete Delete files... 
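The default-quota cycle test 73a walked through above, as bare commands (limits from the log; -U sets the filesystem-wide default for users, -D points one id back at that default):
  lfs setquota -U -B 20M /mnt/lustre               # default user block hard limit
  lfs quota -U /mnt/lustre                         # read the defaults back
  lfs setquota -u quota_usr -B 40M /mnt/lustre     # explicit limit overrides the default
  lfs setquota -u quota_usr -D /mnt/lustre         # follow the default again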
Wait for unlink objects finished... Waiting for MDT destroys to complete Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed PASS 73a (66s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 73b: default OST Pool Quotas limit for new user ========================================================== 05:45:11 (1713433511) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 set default quota for qpool1 Write from user that hasn't lqe running as uid/gid/euid/egid 500/500/500/500, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73b.sanity-quota/f73b.sanity-quota-1] [count=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.277669 s, 37.8 MB/s Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 73b (35s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 74: check quota pools per user ====== 05:45:48 (1713433548) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Creating new pool oleg432-server: Pool lustre.qpool2 created Adding targets to pool oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 pool limit for qpool1 10240 pool limit for qpool2 51200 Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed lustre.qpool2 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg432-server: Pool lustre.qpool2 destroyed Waiting 90s for 'foo' Delete files... Wait for unlink objects finished... 
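What test 74 asserted above, in command form (limits 10240 and 51200 kbytes as logged; qpool1 holds both OSTs, qpool2 only lustre-OST0001):
  lfs setquota -u quota_usr -B 10M --pool qpool1 /mnt/lustre
  lfs setquota -u quota_usr -B 50M --pool qpool2 /mnt/lustre
  # on lustre-OST0001, a member of both pools, the effective cap is the tighter
  # pool limit - qpool1's 10 MiB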
Waiting for MDT destroys to complete PASS 74 (37s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 75: nodemap squashed root respects quota enforcement ========================================================== 05:46:27 (1713433587) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 On MGS 192.168.204.132, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.204.132, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.204.132, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.204.132, default.squash_uid = nodemap.default.squash_uid=60000 waiting 10 secs for sync 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.296949 s, 35.3 MB/s Write to exceed soft limit 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.0103774 s, 987 kB/s mmap write when over soft limit Waiting for MDT destroys to complete Write... 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.242299 s, 43.3 MB/s Write out of block quota ... 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.260439 s, 40.3 MB/s dd: error writing '/mnt/lustre/d75.sanity-quota/f75.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0375611 s, 0.0 kB/s Waiting for MDT destroys to complete 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0419436 s, 25.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0404179 s, 25.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0437825 s, 23.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0432633 s, 24.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0429672 s, 24.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0416015 s, 25.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0379778 s, 27.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0420859 s, 24.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0412934 s, 25.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0417108 s, 25.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.036187 s, 29.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0363399 s, 28.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0457858 s, 22.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0362288 s, 28.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0413471 s, 25.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.041285 s, 25.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0498172 s, 21.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0411126 s, 25.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0420805 s, 24.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.041303 s, 25.4 MB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-20': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0351899 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-21': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0336579 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-22': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0355024 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-23': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0340134 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-24': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0475641 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-25': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0357107 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-26': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0338372 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-27': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0411528 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-28': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.034519 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-29': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0343266 s, 0.0 kB/s 9+0 records in 9+0 records out 9437184 bytes (9.4 MB) copied, 0.261135 s, 36.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0344721 s, 30.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.039652 s, 26.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0350091 s, 30.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0352978 s, 29.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0362538 s, 28.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0386737 s, 27.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0390746 s, 26.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0395253 s, 26.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0398498 s, 26.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0398627 s, 26.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0391444 s, 26.8 MB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-11': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0342483 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-12': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0336186 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-13': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0332704 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-14': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0349176 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-15': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0346184 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-16': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0349276 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-17': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0344075 s, 0.0 kB/s
dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-18': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0362958 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-19': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0389302 s, 0.0 kB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.036945 s, 28.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0409335 s, 25.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0418699 s, 25.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0425775 s, 24.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0387897 s, 27.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0473097 s, 22.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0425174 s, 24.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0440649 s, 23.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0529568 s, 19.8 MB/s dd: error writing '/mnt/lustre/d75.sanity-quota/file': Disk quota exceeded 10+0 records in 9+0 records out 9437184 bytes (9.4 MB) copied, 0.195144 s, 48.4 MB/s On MGS 192.168.204.132, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.204.132, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.204.132, active = nodemap.active=0 waiting 10 secs for sync Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 75 (132s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 76: project ID 4294967295 should be not allowed ========================================================== 05:48:42 (1713433722) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Invalid project ID: 4294967295 Change or list project attribute for specified file or directory. usage: project [-d|-r] list project ID and flags on file(s) or directories project [-p id] [-s] [-r] set project ID and/or inherit flag for specified file(s) or directories project -c [-d|-r [-p id] [-0]] check project ID and flags on file(s) or directories, print outliers project -C [-d|-r] [-k] clear the project inherit flag and ID on the file or directory Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 76 (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 77: lfs setquota should fail in Lustre mount with 'ro' ========================================================== 05:48:59 (1713433739) Starting client: oleg432-client.virtnet: -o ro oleg432-server@tcp:/lustre /mnt/lustre2 lfs setquota: quotactl failed: Read-only file system setquota failed: Read-only file system PASS 77 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 78A: Check fallocate increase quota usage ========================================================== 05:49:04 (1713433744) keep default fallocate mode: 0 Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [fallocate] [-l] [204800] [/mnt/lustre/d78A.sanity-quota/f78A.sanity-quota] kbytes returned:204 Delete files... Wait for unlink objects finished...
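The nodemap state test 75 ran under, as the equivalent MGS-side commands (values match the log; with root squashed to uid 60000, quota_usr's limits bind root on the client too):
  lctl nodemap_activate 1
  lctl nodemap_modify --name default --property admin --value 0
  lctl nodemap_modify --name default --property trusted --value 0
  lctl nodemap_modify --name default --property squash_uid --value 60000
  # each change needs ~10s to reach the servers, hence the 'waiting 10 secs
  # for sync' lines above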
Waiting for MDT destroys to complete PASS 78A (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 78a: Check fallocate increase projectid usage ========================================================== 05:49:22 (1713433762) keep default fallocate mode: 0 Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 5200 /mnt/lustre/d78a.sanity-quota kbytes returned:204 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 78a (20s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 79: access to non-existed dt-pool/info doesn't cause a panic ========================================================== 05:49:44 (1713433784) /tmp/f79.sanity-quota Creating new pool oleg432-server: Pool lustre.qpool1 created Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed PASS 79 (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 80: check for EDQUOT after OST failover ========================================================== 05:49:57 (1713433797) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 /mnt/lustre/d80.sanity-quota/dir1 stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: 1 /mnt/lustre/d80.sanity-quota/dir2 stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: 0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8 0 102400 - 2 0 0 - lustre-MDT0000_UUID 8 - 16384 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_loc=0xa06 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir2/f80.sanity-quota-0] [count=3] 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 0.0595627 s, 52.8 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir1/f80.sanity-quota-2] [count=7] 7+0 records in 7+0 records out 7340032 bytes (7.3 MB) copied, 0.0980727 s, 74.8 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir1/f80.sanity-quota-1] [count=1] [oflag=direct] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0185472 s, 56.5 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 11272* 0 10240 - 5 0 0 - Pool: lustre.qpool1 lustre-OST0000_UUID 3072 - 4096 - - - - - lustre-OST0001_UUID 8192* - 8192 - - - - - Total allocated inode limit: 0, total allocated block limit: 12288 Stopping /mnt/lustre-ost2 (opts:) on oleg432-server fail_loc=0 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started 
lustre-OST0001 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4104 0 10240 - 4 0 0 - Pool: lustre.qpool1 lustre-OST0000_UUID 3072 - 4096 - - - - - lustre-OST0001_UUID 1024 - 2048 - - - - - Total allocated inode limit: 0, total allocated block limit: 6144 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4104 0 10240 - 4 0 0 - Pool: lustre.qpool1 lustre-OST0000_UUID 3072 - 4096 - - - - - lustre-OST0001_UUID 1024 - 2048 - - - - - Total allocated inode limit: 0, total allocated block limit: 6144 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir1/f80.sanity-quota-0] [count=2] [oflag=direct] 2+0 records in 2+0 records out 2097152 bytes (2.1 MB) copied, 0.0310319 s, 67.6 MB/s Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 80 (48s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 81: Race qmt_start_pool_recalc with qmt_pool_free ========================================================== 05:50:47 (1713433847) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:20 MB) Creating new pool oleg432-server: Pool lustre.qpool1 created fail_loc=0x80000A07 fail_val=10 Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 Stopping /mnt/lustre-mds1 (opts:-f) on oleg432-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg432-server: oleg432-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg432-client: oleg432-server: ssh exited with exit code 1 Started lustre-MDT0000 Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 81 (35s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 82: verify more than 8 qids for single operation ========================================================== 05:51:25 (1713433885) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 82 (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 83: Setting default quota shouldn't affect grace time ========================================================== 05:51:34 (1713433894) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 ttt1 ttt2 ttt3 ttt4 ttt5 ttt1 ttt2 ttt3 ttt4 ttt5 ttt1 ttt2 ttt3 ttt4 ttt5 Delete files... Wait for unlink objects finished... 
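Tests 78A/78a above reduce to one check - blocks preallocated with fallocate must appear in quota usage (path and ids from this run):
  runas -u 60000 -g 60000 fallocate -l 204800 /mnt/lustre/d78A.sanity-quota/f78A.sanity-quota
  lfs quota -u quota_usr /mnt/lustre    # kbytes reflects the allocation (204 in this run)
  # 78a repeats this under 'lfs project -sp 5200' and reads 'lfs quota -p 5200' instead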
Waiting for MDT destroys to complete PASS 83 (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 84: Reset quota should fix the insane granted quota ========================================================== 05:51:44 (1713433904) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10485760 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 osd-ldiskfs.lustre-OST0000.quota_slave.force_reint=1 0 /mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1 lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 obdidx objid objid group 0 130 0x82 0x280000401 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=60] [conv=nocreat] [oflag=direct] 60+0 records in 60+0 records out 62914560 bytes (63 MB) copied, 1.99675 s, 31.5 MB/s Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 61444 0 10485760 - 2 0 0 - lustre-MDT0000_UUID 4 - 1048576 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 61440 - 1048576 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 1048576 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 61444 0 5242880 - 2 0 0 - Pool: lustre.qpool1 lustre-OST0000_UUID 61440 - 1048576 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 1048576 fail_val=0 fail_loc=0xa08 fail_val=0 fail_loc=0xa08 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 61444 0 0 - 2 0 0 - lustre-MDT0000_UUID 4 - 18446744073707374604 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 61440 - 18446744073707374604 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 18446744073707374604 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 61444 0 5242880 - 2 0 0 - Pool: lustre.qpool1 lustre-OST0000_UUID 61440 - 18446744073707374604 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 18446744073707374604 fail_val=0 fail_loc=0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 61444 0 0 - 2 0 0 - lustre-MDT0000_UUID 4 - 0 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 61440 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 61444 0 5242880 - 2 0 0 - Pool: lustre.qpool1 lustre-OST0000_UUID 61440 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - 
Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 61444 0 5242880 - 2 0 0 - Pool: lustre.qpool1 lustre-OST0000_UUID 61440 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 61444 0 5242880 - 2 0 0 - Pool: lustre.qpool1 lustre-OST0000_UUID 61440 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 61444 0 102400 - 2 0 0 - lustre-MDT0000_UUID 4* - 4 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 61440* - 61440 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 61440 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=200] [conv=nocreat] [oflag=direct] dd: error writing '/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1': Disk quota exceeded 100+0 records in 99+0 records out 103809024 bytes (104 MB) copied, 3.3965 s, 30.6 MB/s Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 101380 0 307200 - 2 0 0 - lustre-MDT0000_UUID 4* - 4 - 2 - 0 - lustre-MDT0001_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 101376 - 102396 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 102396 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=200] [conv=nocreat] [oflag=direct] 200+0 records in 200+0 records out 209715200 bytes (210 MB) copied, 6.4431 s, 32.5 MB/s Destroy the created pools: qpool1 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... 
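On the 18446744073707374604 limits above: that is a granted value which went negative and wrapped modulo 2^64 (it sits roughly 2.1M kbytes below 2^64) - the 'insane granted quota' test 84 deliberately injects and then resets. The verbose report is the way to spot it:
  lfs quota -v -g quota_usr /mnt/lustre    # -v lists each MDT/OST's granted limit;
                                           # a figure near 2^64 marks an underflowed grant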
Waiting for MDT destroys to complete PASS 84 (64s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 85: do not hung at write with the least_qunit ========================================================== 05:52:50 (1713433970) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg432-server: Pool lustre.qpool1 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Creating new pool oleg432-server: Pool lustre.qpool2 created Adding targets to pool oleg432-server: OST lustre-OST0000_UUID added to pool lustre.qpool2 oleg432-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d85.sanity-quota/f85.sanity-quota-0] [count=10] dd: error writing '/mnt/lustre/d85.sanity-quota/f85.sanity-quota-0': Disk quota exceeded 8+0 records in 7+0 records out 8368128 bytes (8.4 MB) copied, 0.275839 s, 30.3 MB/s Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg432-server: Pool lustre.qpool1 destroyed lustre.qpool2 oleg432-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2 oleg432-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg432-server: Pool lustre.qpool2 destroyed Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 85 (48s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 86: Pre-acquired quota should be released if quota is over limit ========================================================== 05:53:40 (1713434020) Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000] - create 2604 (time 1713434034.25 total 10.00 last 260.32) total: 5000 create in 19.12 seconds: 261.50 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10] mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded total: 0 create in 0.00 seconds: 0.00 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30] mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded total: 0 create in 0.01 seconds: 0.00 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000] - create 3174 (time 1713434094.00 total 10.00 last 317.36) total: 5000 create in 16.98 seconds: 294.43 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10] mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded total: 0 create in 0.00 seconds: 0.00 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30] mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded total: 0 create in 0.01 seconds: 0.00 
ops/second lfs project -sp 1000 /mnt/lustre/d86.sanity-quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000] - create 2576 (time 1713434157.36 total 10.00 last 257.59) total: 5000 create in 19.32 seconds: 258.85 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10] mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded total: 0 create in 0.00 seconds: 0.00 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30] mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded total: 0 create in 0.01 seconds: 0.00 ops/second Delete files... Wait for unlink objects finished... Waiting for MDT destroys to complete PASS 86 (198s) debug_raw_pointers=0 debug_raw_pointers=0 == sanity-quota test complete, duration 4586 sec ========= 05:57:01 (1713434221) === sanity-quota: start cleanup 05:57:01 (1713434221) === === sanity-quota: finish cleanup 05:57:01 (1713434221) ===
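To chase the three skipped cases from this run by hand, the test framework's usual variables apply (standard lustre/tests invocation assumed; 63 and 65 would also need the exclusion list that filtered them here to be cleared):
  SLOW=yes ONLY=61 sh sanity-quota.sh    # test 61 was skipped only because SLOW=no
  ONLY='63 65' sh sanity-quota.sh        # the tests excluded for this configuration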