-----============= acceptance-small: sanity-quota ============----- Thu Apr 18 20:16:17 EDT 2024 excepting tests: 2 4a 63 65 skipping tests SLOW=no: 61 12a 9 === sanity-quota: start setup 20:16:20 (1713485780) === oleg145-client.virtnet: executing check_config_client /mnt/lustre oleg145-client.virtnet: Checking config lustre mounted on /mnt/lustre Checking servers environments Checking clients oleg145-client.virtnet environments Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800b58fb000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800b58fb000.idle_timeout=debug oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all === sanity-quota: finish setup 20:16:27 (1713485787) === using SAVE_PROJECT_SUPPORTED=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [true] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d0_runas_test/f6903] running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [true] running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [touch] [/mnt/lustre/d0_runas_test/f6903] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 0: Test basic quota performance ===== 20:16:40 (1713485800) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/f0.sanity-quota-0] [count=10] [conv=fsync] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.405273 s, 25.9 MB/s Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/f0.sanity-quota-0] [count=10] [conv=fsync] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.365348 s, 28.7 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 0 (32s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 1a: Block hard limit (normal use and out of quota) ========================================================== 20:17:14 (1713485834) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:10 MB) Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.252293 s, 20.8 MB/s Write out of block quota ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=5] [seek=5] dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.211222 s, 19.9 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0545809 s, 0.0 kB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete -------------------------------------- Group quota (block hardlimit:10 MB) Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.244044 s, 21.5 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=5] [seek=5] dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.280721 s, 14.9 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=1] [seek=10] dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0546052 s, 0.0 kB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete -------------------------------------- Project quota (block hardlimit:10 mb) lfs project -p 1000 /mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.18594 s, 28.2 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=5] [seek=5] dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.239568 s, 17.5 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=1] [seek=10] dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0585833 s, 0.0 kB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
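The dd failures above are the expected behaviour for test 1a: each quota type (user, group, project) carries a 10 MB block hard limit, so the first 5 MB write succeeds, the second write hits EDQUOT partway through, and the final 1 MB write at seek=10 fails outright. The setquota calls themselves are not echoed in this log; a minimal sketch of the equivalent setup, assuming the quota_usr user/group and project ID 1000 used by the test, would be:

# 10 MB block hard limits for user, group and project quota (assumed values)
lfs setquota -u quota_usr -B 10M /mnt/lustre
lfs setquota -g quota_usr -B 10M /mnt/lustre
lfs setquota -p 1000 -B 10M /mnt/lustre
# check limits and current usage for the user
lfs quota -u quota_usr /mnt/lustre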
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 1a (109s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 1b: Quota pools: Block hard limit (normal use and out of quota) ========================================================== 20:19:04 (1713485944) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:20 MB) Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 used 0 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.157495 s, 33.3 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=5] [seek=5] dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0': Disk quota exceeded 5+0 records in 4+0 records out 4968448 bytes (5.0 MB) copied, 0.161921 s, 30.7 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0457307 s, 0.0 kB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete -------------------------------------- Group quota (block hardlimit:20 MB) Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.192015 s, 27.3 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.148344 s, 35.3 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=1] [seek=10] dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0465737 s, 0.0 kB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete -------------------------------------- Project quota (block hardlimit:20 mb) lfs project -p 1000 /mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.197587 s, 26.5 MB/s Write out of block quota ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.1491 s, 35.2 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=1] [seek=10] dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.046477 s, 0.0 kB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 1b (119s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 1c: Quota pools: check 3 pools with hardlimit only for global ========================================================== 20:21:04 (1713486064) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:20 MB) Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Creating new pool oleg145-server: Pool lustre.qpool2 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool2 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 used 0 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.372662 s, 28.1 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=10] [seek=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.555899 s, 18.9 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=1] [seek=20] dd: error writing '/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0494585 s, 0.0 kB/s qpool1 used 20484 qpool2 used 20484 sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed lustre.qpool2 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg145-server: Pool lustre.qpool2 destroyed Waiting 90s for 'foo' Delete files... Wait for unlink objects finished... 
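Tests 1b and 1c repeat the hard-limit check with quota pools: the pool membership operations (pool created, OSTs added, pool destroyed) are visible in the log, while the per-pool limit is applied with the --pool option of lfs setquota. A rough sketch of the pool setup for test 1b, assuming the 20 MB pool hard limit announced above:

# create the pool on the MGS and add both OSTs (matches the log output)
lctl pool_new lustre.qpool1
lctl pool_add lustre.qpool1 lustre-OST[0000-0001]
# block hard limit that applies only to data on the pool's OSTs (assumed syntax)
lfs setquota -u quota_usr -B 20M --pool qpool1 /mnt/lustre
# report usage against the pool limit
lfs quota -u quota_usr --pool qpool1 /mnt/lustre

Writes then stop with EDQUOT once the data landing on the pool's OSTs reaches the smaller of the global and pool limits, which is what the seek=10/seek=20 dd commands above demonstrate.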
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 1c (69s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 1d: Quota pools: check block hardlimit on different pools ========================================================== 20:22:14 (1713486134) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:20 MB) Creating new pool oleg145-server: Pool lustre.qpool1 created Waiting 90s for '' Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Creating new pool oleg145-server: Pool lustre.qpool2 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool2 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 used 0 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.157453 s, 33.3 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.125725 s, 41.7 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0450918 s, 0.0 kB/s qpool1 used 10244 qpool2 used 10244 sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed lustre.qpool2 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg145-server: Pool lustre.qpool2 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 1d (69s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 1e: Quota pools: global pool high block limit vs quota pool with small ========================================================== 20:23:24 (1713486204) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:53000000 MB) Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Waiting 90s for 'lustre-OST0001_UUID ' Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.251554 s, 20.8 MB/s Write out of block quota ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.202006 s, 26.0 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0533245 s, 0.0 kB/s Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-1] [count=20] 20+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 0.607953 s, 34.5 MB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Waiting 90s for 'foo' Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 1e (57s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 1f: Quota pools: correct qunit after removing/adding OST ========================================================== 20:24:23 (1713486263) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:200 MB) Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.180527 s, 29.0 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.163183 s, 32.1 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0530674 s, 0.0 kB/s Removing lustre-OST0000_UUID from qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.228454 s, 22.9 MB/s Write out of block quota ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.201723 s, 26.0 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0558874 s, 0.0 kB/s Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 1f (75s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 1g: Quota pools: Block hard limit with wide striping ========================================================== 20:25:40 (1713486340) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 osc.lustre-OST0000-osc-ffff8800b58fb000.max_dirty_mb=1 osc.lustre-OST0001-osc-ffff8800b58fb000.max_dirty_mb=1 User quota (block hardlimit:40 MB) Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 used 0 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 1.56915 s, 6.7 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=10] [seek=10] dd: error writing '/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0': Disk quota exceeded 10+0 records in 9+0 records out 9437184 bytes (9.4 MB) copied, 1.79147 s, 5.3 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=6] [seek=20] dd: error writing '/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0': Disk quota exceeded 2+0 records in 1+0 records out 1085440 bytes (1.1 MB) copied, 0.288438 s, 3.8 MB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed osc.lustre-OST0000-osc-ffff8800b58fb000.max_dirty_mb=467 osc.lustre-OST0001-osc-ffff8800b58fb000.max_dirty_mb=467 Delete files... Wait for unlink objects finished... 
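Test 1g reruns the pool hard-limit check with a wide-striped file and with every OSC's dirty cache clamped to 1 MB (the osc.*.max_dirty_mb=1 lines above), so quota is enforced with almost no client-side write caching; the original value (467 MB in this run) is restored at the end. The striping command is not echoed; a plausible equivalent, assuming the file is striped across all OSTs:

# shrink the per-OSC dirty cache on the client, as shown in the log
lctl set_param osc.*.max_dirty_mb=1
# stripe the test file over every OST (assumed; -c -1 means "all OSTs")
lfs setstripe -c -1 /mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0
# restore the original dirty-cache limit afterwards
lctl set_param osc.*.max_dirty_mb=467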
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 1g (56s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 1h: Block hard limit test using fallocate ========================================================== 20:26:38 (1713486398) fallocate on zfs doesn't consume space fallocate not supported SKIP: sanity-quota test_1h need >= 2.13.57 and ldiskfs for fallocate SKIP 1h (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 1i: Quota pools: different limit and usage relations ========================================================== 20:26:40 (1713486400) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:200 MB) Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.217027 s, 24.2 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.184574 s, 28.4 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0514247 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 10244 0 0 - 1 0 0 - Pool: lustre.qpool1 lustre-OST0000_UUID 10244* - 10244 - - - - - Total allocated inode limit: 0, total allocated block limit: 10244 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-1] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.195689 s, 26.8 MB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.250817 s, 20.9 MB/s Write out of block quota ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.213158 s, 24.6 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0533813 s, 0.0 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-1] [count=3] 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 0.166039 s, 18.9 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2] [count=3] dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2': Disk quota exceeded 2+0 records in 1+0 records out 1433600 bytes (1.4 MB) copied, 0.130235 s, 11.0 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2] [seek=3] [count=1] dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0469563 s, 0.0 kB/s Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 1i (73s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 1j: Enable project quota enforcement for root ========================================================== 20:27:55 (1713486475) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 -------------------------------------- Project quota (block hardlimit:20 mb) lfs project -p 1000 /mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0 osd-zfs.lustre-OST0000.quota_slave.root_prj_enable=1 running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=20] [oflag=direct] dd: error writing '/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0': Disk quota exceeded 20+0 records in 19+0 records out 19922944 bytes (20 MB) copied, 0.692377 s, 28.8 MB/s running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=10] [seek=20] [oflag=direct] dd: error writing '/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0260101 s, 0.0 kB/s osd-zfs.lustre-OST0000.quota_slave.root_prj_enable=0 running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=20] [seek=20] [oflag=direct] 20+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 0.65595 s, 32.0 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete osd-zfs.lustre-OST0000.quota_slave.root_prj_enable=0 Delete files... Wait for unlink objects finished... 
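Test 1j demonstrates that root normally bypasses quota enforcement, but project quota can still be applied to root through the OSD's root_prj_enable flag: with it set to 1, the root-owned direct-I/O write to the project-1000 file hits EDQUOT at the 20 MB project limit, and with it back at 0 the same write succeeds. The sequence, with the project hard limit itself being an assumption (it is announced but not echoed):

# tag the file with project ID 1000 and give that project a 20 MB block hard limit (limit value assumed)
lfs project -p 1000 /mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0
lfs setquota -p 1000 -B 20M /mnt/lustre
# on the OSS: enforce project quota for root, then relax it again (as shown in the log)
lctl set_param osd-zfs.lustre-OST0000.quota_slave.root_prj_enable=1
lctl set_param osd-zfs.lustre-OST0000.quota_slave.root_prj_enable=0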
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 1j (37s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_2 skipping excluded test 2 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 3a: Block soft limit (start timer, timer goes off, stop timer) ========================================================== 20:28:34 (1713486514) User quota (soft limit:4 MB grace:60 seconds) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.204457 s, 20.5 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=4096] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.177989 s, 57.5 kB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5189* 4096 0 58s 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 6148 - - - - - lustre-OST0001_UUID 66 - 130 - - - - - Total allocated inode limit: 0, total allocated block limit: 6278 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5189 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=5120] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.170789 s, 60.0 kB/s Grace time is 57s Sleep through grace ... 
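Test 3a exercises the block soft limit: the user may exceed the 4 MB soft limit (the writes at seek=4096 and seek=5120 succeed), but doing so starts the 60-second grace timer shown counting down above, and the run now sleeps past it so that the next write is rejected. The soft limit and grace period are configured before this excerpt; a minimal sketch, assuming the values announced for this run:

# 4 MB block soft limit, no hard limit, for the user (assumed)
lfs setquota -u quota_usr -b 4M -B 0 /mnt/lustre
# 60 s block grace and the default 1-week inode grace (assumed)
lfs setquota -t -u --block-grace 60 --inode-grace 1w /mnt/lustre
# the remaining grace appears in the "grace" column
lfs quota -u quota_usr /mnt/lustre

Unlinking the file at the end drops usage back under the soft limit, which stops the timer and lets writes succeed again.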
...sleep 62 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215* 4096 0 expired 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6148 - 7172 - - - - - lustre-OST0001_UUID 66 - 130 - - - - - Total allocated inode limit: 0, total allocated block limit: 7302 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6148 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=4096] [seek=6144] dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0': Disk quota exceeded 2+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.021429 s, 47.8 kB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00519052 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7239* 4096 0 expired 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 7172* - 7172 - - - - - lustre-OST0001_UUID 66 - 130 - - - - - Total allocated inode limit: 0, total allocated block limit: 7302 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7239 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 7172 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Unlink file to stop timer sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 4096 0 - 1 0 0 - lustre-MDT0000_UUID 1* - 1 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 66 - 130 - - - - - Total allocated inode limit: 0, total allocated block limit: 130 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 0 0 - 1 0 0 - lustre-MDT0000_UUID 1 - 0 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 
0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.180386 s, 23.3 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Group quota (soft limit:4 MB grace:60 seconds) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.207241 s, 20.2 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=4096] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.15234 s, 67.2 kB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5129 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5129 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5129* 4096 0 59s 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5129 - 6148 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 6148 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m; Inode grace time: 1w Block grace time: 1m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=5120] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00431706 s, 2.4 MB/s Grace time is 58s Sleep through grace ... 
...sleep 63 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6213 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215* 4096 0 expired 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6213* - 6213 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 6213 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m; Inode grace time: 1w Block grace time: 1m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=4096] [seek=6144] dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00770073 s, 0.0 kB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00597112 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6213 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215* 4096 0 expired 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6213* - 6213 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 6213 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m; Inode grace time: 1w Block grace time: 1m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Unlink file to stop timer sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 0 0 - 1 0 0 - lustre-MDT0000_UUID 1 - 0 - 1 - 0 - lustre-OST0000_UUID 66 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 4096 0 - 1 0 0 - lustre-MDT0000_UUID 1* - 1 - 1 - 0 - lustre-OST0000_UUID 66 - 1090 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total 
allocated block limit: 1090 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m; Inode grace time: 1w Block grace time: 1m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.161573 s, 26.0 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Project quota (soft limit:4 MB grace:60 sec) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -p 1000 /mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.19004 s, 22.1 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=4096] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.152387 s, 67.2 kB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5129 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 0 - - - - - lustre-OST0001_UUID 6 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5129 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 0 - - - - - lustre-OST0001_UUID 6 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5124* 4096 0 58s 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 6148 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 6148 Block grace time: 1m; Inode grace time: 1w Block grace time: 1m; Inode grace time: 1w Block grace time: 1m; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=5120] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.143055 s, 71.6 kB/s Grace time is 58s Sleep through grace ... 
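The same soft-limit cycle is then repeated for project quota: the test file carries project ID 1000 (lfs project -p 1000 above), the project is given a 4 MB soft limit, and the grace countdown for 'prj 1000' is what expires in the report that follows. A sketch of the assumed project-quota setup:

# put the test file into project 1000 and give that project a 4 MB block soft limit (assumed)
lfs project -p 1000 /mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2
lfs setquota -p 1000 -b 4M -B 0 /mnt/lustre
# 60 s block grace for project quotas (assumed)
lfs setquota -t -p --block-grace 60 /mnt/lustre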
...sleep 63 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6148 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6148 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6149* 4096 0 expired 1 0 0 - lustre-MDT0000_UUID 2 - 0 - 1 - 0 - lustre-OST0000_UUID 6148 - 7172 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 7172 Block grace time: 1m; Inode grace time: 1w Block grace time: 1m; Inode grace time: 1w Block grace time: 1m; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=4096] [seek=6144] dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2': Disk quota exceeded 2+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.0218389 s, 46.9 kB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00564234 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7239 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 7172 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7239 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 7172 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7173* 4096 0 expired 1 0 0 - lustre-MDT0000_UUID 2 - 0 - 1 - 0 - lustre-OST0000_UUID 7172* - 7172 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 7172 Block grace time: 1m; Inode grace time: 1w Block grace time: 1m; Inode grace time: 1w Block grace time: 1m; Inode grace time: 1w Unlink file to stop timer sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 0 0 - 1 0 0 - lustre-MDT0000_UUID 1 - 0 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 0 0 - 1 0 0 - lustre-MDT0000_UUID 1 - 0 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 
0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 4096 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m; Inode grace time: 1w Block grace time: 1m; Inode grace time: 1w Block grace time: 1m; Inode grace time: 1w lfs project -p 1000 /mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2 Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.189162 s, 22.2 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 3a (358s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 3b: Quota pools: Block soft limit (start timer, expires, stop timer) ========================================================== 20:34:34 (1713486874) limit 4 glbl_limit 8 grace 60 glbl_grace 120 User quota in qpool1(soft limit:4 MB grace:60 seconds) Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.0973678 s, 43.1 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=4096] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.14502 s, 70.6 kB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5129 8192 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5129 - 6148 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 6148 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5129 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5129 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 
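Test 3b combines quota pools with soft limits: the global user soft limit is 8 MB with a 2-minute grace, while qpool1 carries a tighter 4 MB soft limit ("limit 4 glbl_limit 8 grace 60 glbl_grace 120" above), so the pool's 60-second timer expires first even though the global limit is never reached. A rough sketch of the assumed configuration:

# global soft limit and grace (values taken from the test banner, commands assumed)
lfs setquota -u quota_usr -b 8M -B 0 /mnt/lustre
lfs setquota -t -u --block-grace 120 /mnt/lustre
# tighter soft limit inside the pool (assumed --pool syntax); the 60 s pool grace is configured per pool in the same way
lfs setquota -u quota_usr -b 4M --pool qpool1 /mnt/lustre
# per-pool usage and remaining grace
lfs quota -u quota_usr --pool qpool1 /mnt/lustre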
Block grace time: 2m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=5120] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.00347226 s, 2.9 MB/s Quota info for qpool1: Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5129* 4096 0 59s 0 0 0 - Grace time is 59s Sleep through grace ... ...sleep 64 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 8192 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6213* - 6213 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 6213 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6213 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 2m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=4096] [seek=6144] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00579376 s, 0.0 kB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00486395 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 8192 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6213* - 6213 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 6213 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6213 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 2m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Unlink file to stop timer sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to 
complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 8192 0 - 1 0 0 - lustre-MDT0000_UUID 1* - 1 - 1 - 0 - lustre-OST0000_UUID 66 - 1090 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 1090 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 0 0 - 1 0 0 - lustre-MDT0000_UUID 1 - 0 - 1 - 0 - lustre-OST0000_UUID 66 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 2m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.12433 s, 33.7 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Group quota in qpool1(soft limit:4 MB grace:60 seconds) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.152788 s, 27.5 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=4096] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.150435 s, 68.1 kB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5124 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5124 8192 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 6148 - - - - - lustre-OST0001_UUID 0 - 41 - - - - - Total allocated inode limit: 0, total allocated block limit: 6189 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=5120] 10+0 records in 
10+0 records out 10240 bytes (10 kB) copied, 0.134994 s, 75.9 kB/s Quota info for qpool1: Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6148* 4096 0 58s 0 0 0 - Grace time is 58s Sleep through grace ... ...sleep 63 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6148 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 8192 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6148 - 7172 - - - - - lustre-OST0001_UUID 66* - 66 - - - - - Total allocated inode limit: 0, total allocated block limit: 7238 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=4096] [seek=6144] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1': Disk quota exceeded 2+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.0231934 s, 44.2 kB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00475545 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7239 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 7172 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7239 8192 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 7172* - 7172 - - - - - lustre-OST0001_UUID 66* - 66 - - - - - Total allocated inode limit: 0, total allocated block limit: 7238 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Unlink file to stop timer sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 0 0 - 1 0 0 - lustre-MDT0000_UUID 1 - 0 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp 
quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 8192 0 - 1 0 0 - lustre-MDT0000_UUID 1* - 1 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 66* - 66 - - - - - Total allocated inode limit: 0, total allocated block limit: 66 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.13157 s, 31.9 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Project quota in qpool1(soft:4 MB grace:60 sec) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -p 1000 /mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.131872 s, 31.8 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=4096] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.14368 s, 71.3 kB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5124 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5124 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5124 8192 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 6148 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 6148 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=5120] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.138553 s, 73.9 kB/s Quota info for qpool1: Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6148* 4096 0 58s 0 0 0 - Grace time is 58s Sleep through grace ... 
...sleep 63 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6213 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6213 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6149 8192 0 - 1 0 0 - lustre-MDT0000_UUID 2 - 0 - 1 - 0 - lustre-OST0000_UUID 6148 - 7172 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 7172 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=4096] [seek=6144] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2': Disk quota exceeded 2+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.0176393 s, 58.1 kB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00504803 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7239 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 7237 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7239 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 7237 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7173 8192 0 - 1 0 0 - lustre-MDT0000_UUID 2 - 0 - 1 - 0 - lustre-OST0000_UUID 7172* - 7172 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 7172 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Unlink file to stop timer sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 0 0 - 1 0 0 - lustre-MDT0000_UUID 1 - 0 - 1 - 0 - lustre-OST0000_UUID 66 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 0 0 - 1 0 0 - lustre-MDT0000_UUID 1 - 0 - 1 - 0 - lustre-OST0000_UUID 66 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total 
allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 8192 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w lfs project -p 1000 /mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2 Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.130642 s, 32.1 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Waiting 90s for 'foo' PASS 3b (371s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 3c: Quota pools: check block soft limit on different pools ========================================================== 20:40:46 (1713487246) limit 4 limit2 8 glbl_limit 12 grace1 70 grace2 60 glbl_grace 80 User quota in qpool2(soft:8 MB grace:60 seconds) Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Creating new pool oleg145-server: Pool lustre.qpool2 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool2 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.265477 s, 31.6 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=8192] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.136068 s, 75.3 kB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 9285 12288 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 9220 - 10244 - - - - - lustre-OST0001_UUID 66 - 130 - - - - - Total allocated inode limit: 0, total allocated block limit: 10374 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 9285 0 0 
- 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 9220 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m20s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=9216] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.129681 s, 79.0 kB/s Quota info for qpool2: Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 10309* 8192 0 58s 0 0 0 - Grace time is 58s Sleep through grace ... ...sleep 63 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 10311 12288 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 10244 - 11268 - - - - - lustre-OST0001_UUID 66 - 130 - - - - - Total allocated inode limit: 0, total allocated block limit: 11398 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 10311 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 10244 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m20s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=4096] [seek=10240] dd: error writing '/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0': Disk quota exceeded 2+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.0190682 s, 53.7 kB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=14336] dd: error writing '/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00735424 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 11335 12288 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 11268* - 11268 - - - - - lustre-OST0001_UUID 66 - 130 - - - - - Total allocated inode limit: 0, total allocated block limit: 11398 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 11335 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 11268 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk 
quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m20s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Unlink file to stop timer sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 12288 0 - 1 0 0 - lustre-MDT0000_UUID 1* - 1 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 66 - 130 - - - - - Total allocated inode limit: 0, total allocated block limit: 130 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 0 0 - 1 0 0 - lustre-MDT0000_UUID 1 - 0 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m20s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.311935 s, 26.9 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed lustre.qpool2 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg145-server: Pool lustre.qpool2 destroyed Waiting 90s for 'foo' PASS 3c (140s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_4a skipping excluded test 4a debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 4b: Grace time strings handling ===== 20:43:08 (1713487388) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Valid grace strings test Block grace time: 1w3d; Inode grace time: 16m40s Block grace time: 5s; Inode grace time: 1w2d3h4m5s Invalid grace strings test lfs: bad inode-grace: 5c setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM lfs: bad inode-grace: 18446744073709551615 setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM lfs: bad inode-grace: -1 setquota failed: Unknown error -4 Set filesystem quotas. 
usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM PASS 4b (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 5: Chown & chgrp successfully even out of block/file quota ========================================================== 20:43:17 (1713487397) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Set quota limit (0 10M 0 10) for quota_usr.quota_usr lfs setquota: warning: inode hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Create more than 10 files and more than 10 MB ... total: 11 create in 0.02 seconds: 451.04 ops/second lfs project -p 1000 /mnt/lustre/d5.sanity-quota/f5.sanity-quota-0_1 11+0 records in 11+0 records out 11534336 bytes (12 MB) copied, 0.239557 s, 48.1 MB/s Chown files to quota_usr.quota_usr ... - unlinked 0 (time 1713487412 ; total 0 ; last 0) total: 11 unlinks in 0 seconds: inf unlinks/second Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 5 (33s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 6: Test dropping acquire request on master ========================================================== 20:43:52 (1713487432) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.051473 s, 20.4 MB/s running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_2usr] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.02285 s, 45.9 MB/s at_max=20 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] dd: error writing '/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr': Disk quota exceeded 2+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0714977 s, 14.7 MB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete fail_val=601 fail_loc=0x513 osd-zfs.lustre-OST0000.quota_slave.timeout=10 osd-zfs.lustre-OST0001.quota_slave.timeout=10 running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_2usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 0.202234 s, 15.6 MB/s Sleep for 41 seconds ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] at_max=600 fail_val=0 fail_loc=0 dd: error writing '/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr': Disk quota exceeded 3+0 records in 2+0 records out 3129344 bytes (3.1 MB) copied, 57.995 s, 54.0 kB/s Delete files... 
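For reference, the pool-scoped soft-limit behaviour exercised in tests 3b/3c above reduces to a short lctl/lfs sequence. A minimal sketch, assuming the same filesystem name (lustre), pool name (qpool1) and group (quota_usr) as this run; the --pool reporting flag on lfs quota is assumed to be available in this release:

    # on the MGS: create an OST pool and put both OSTs in it
    lctl pool_new lustre.qpool1
    lctl pool_add lustre.qpool1 lustre-OST[0000-0001]
    # pool-scoped block soft limit of 4 MB for the group (hard limit left unlimited)
    lfs setquota -g quota_usr -b 4M -B 0 --pool qpool1 /mnt/lustre
    # shorten the pool's block grace time to ~60 seconds so the soft limit turns hard quickly
    lfs setquota -t -g -b 60 --pool qpool1 /mnt/lustre
    # pool-scoped usage report (the "Quota info for qpool1" tables above)
    lfs quota -g quota_usr --pool qpool1 /mnt/lustre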
Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 6 (105s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7a: Quota reintegration (global index) ========================================================== 20:45:39 (1713487539) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' Stop ost1... Stopping /mnt/lustre-ost1 (opts:) on oleg145-server Enable quota & set quota limit for quota_usr Waiting 90s for 'ugp' Start ost1... Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota] [count=6] [oflag=sync] dd: error writing '/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.256177 s, 16.4 MB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Stop ost1... Stopping /mnt/lustre-ost1 (opts:) on oleg145-server Start ost1... Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1470 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1465 secs for *.lustre-OST0000.recovery_status recovery done. 
status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1460 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1455 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1450 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1445 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1440 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1435 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1430 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1425 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1420 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1415 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1410 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: RECOVERING oleg145-server: Waiting 1405 secs for *.lustre-OST0000.recovery_status recovery done. status: RECOVERING oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota] [count=6] [oflag=sync] 6+0 records in 6+0 records out 6291456 bytes (6.3 MB) copied, 0.404853 s, 15.5 MB/s Delete files... Wait for unlink objects finished... 
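The 'Waiting 90s for none/ugp' lines in test 7a track quota enforcement being switched off and back on while ost1 is down. A minimal sketch of the knobs involved, assuming the conf_param form documented in the Lustre manual (filesystem name lustre as in this run):

    # on the MGS: disable, then re-enable user/group/project block quota enforcement on OSTs
    lctl conf_param lustre.quota.ost=none
    lctl conf_param lustre.quota.ost=ugp
    # on the servers: confirm the quota slaves picked the new setting up
    lctl get_param osd-*.*.quota_slave.enabled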
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 7a (148s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7b: Quota reintegration (slave index) ========================================================== 20:48:09 (1713487689) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7b.sanity-quota/f7b.sanity-quota] [count=1] [oflag=sync] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.086313 s, 12.1 MB/s fail_val=0 fail_loc=0xa02 Waiting 90s for 'ugp' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7b.sanity-quota/f7b.sanity-quota] [count=1] [seek=1] [oflag=sync] [conv=notrunc] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0667941 s, 15.7 MB/s fail_val=0 fail_loc=0 Restart ost to trigger reintegration... Stopping /mnt/lustre-ost1 (opts:) on oleg145-server Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... 
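The per-target tables printed throughout this log are the verbose form of lfs quota; a minimal sketch, using the same user and mount point as this run:

    # aggregate usage and limits only
    lfs quota -u quota_usr /mnt/lustre
    # verbose form: per-MDT/OST granted limits plus the totals line; inactive targets are tagged [inact]
    lfs quota -v -u quota_usr /mnt/lustre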
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 7b (56s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7c: Quota reintegration (restart mds during reintegration) ========================================================== 20:49:07 (1713487747) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' Updated after 3s: want 'none' got 'none' fail_val=0 fail_loc=0xa03 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' osd-zfs.lustre-OST0000.quota_slave.force_reint=1 osd-zfs.lustre-OST0001.quota_slave.force_reint=1 Stop mds... Stopping /mnt/lustre-mds1 (opts:) on oleg145-server fail_val=0 fail_loc=0 Start mds... Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-MDT0000 affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE Waiting 200s for 'glb[1],slv[1],reint[0]' Waiting 190s for 'glb[1],slv[1],reint[0]' Waiting 180s for 'glb[1],slv[1],reint[0]' Waiting 170s for 'glb[1],slv[1],reint[0]' Waiting 160s for 'glb[1],slv[1],reint[0]' Waiting 130s for 'glb[1],slv[1],reint[0]' Waiting 120s for 'glb[1],slv[1],reint[0]' Waiting 110s for 'glb[1],slv[1],reint[0]' Waiting 100s for 'glb[1],slv[1],reint[0]' Waiting 90s for 'glb[1],slv[1],reint[0]' Updated after 111s: want 'glb[1],slv[1],reint[0]' got 'glb[1],slv[1],reint[0]' affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7c.sanity-quota/f7c.sanity-quota] [count=6] [oflag=sync] dd: error writing '/mnt/lustre/d7c.sanity-quota/f7c.sanity-quota': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.325233 s, 12.9 MB/s Delete files... Wait for unlink objects finished... 
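Test 7c drives reintegration by hand through the quota_slave parameters shown above. A minimal sketch of the same knobs, run on the OSS; quota_slave.info is assumed to be the parameter behind the 'glb[1],slv[1],reint[0]' wait:

    # force the OST quota slaves to re-fetch their indexes from the quota master
    lctl set_param osd-zfs.lustre-OST0000.quota_slave.force_reint=1
    lctl set_param osd-zfs.lustre-OST0001.quota_slave.force_reint=1
    # poll until the slave reports consistent global/slave indexes and no reintegration in flight
    lctl get_param osd-zfs.lustre-OST*.quota_slave.info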
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 7c (156s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7d: Quota reintegration (Transfer index in multiple bulks) ========================================================== 20:51:44 (1713487904) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' fail_val=0 fail_loc=0x608 Waiting 90s for 'u' affected facets: ost1 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg145-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg145-server: oleg145-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg145-server: *.lustre-OST0001.recovery_status status: INACTIVE fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota] [count=21] [oflag=sync] dd: error writing '/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota': Disk quota exceeded 19+0 records in 18+0 records out 18878464 bytes (19 MB) copied, 1.33488 s, 14.1 MB/s running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota-1] [count=21] [oflag=sync] dd: error writing '/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota-1': Disk quota exceeded 19+0 records in 18+0 records out 18878464 bytes (19 MB) copied, 1.65887 s, 11.4 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 7d (38s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7e: Quota reintegration (inode limits) ========================================================== 20:52:24 (1713487944) SKIP: sanity-quota test_7e needs >= 2 MDTs SKIP 7e (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 8: Run dbench with quota enabled ==== 20:52:27 (1713487947) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Set enough high limit for user: quota_usr Set enough high limit for group: quota_usr lfs project -sp 1000 /mnt/lustre/d8.sanity-quota Set enough high limit for project: 1000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [bash] [rundbench] [-D] [/mnt/lustre/d8.sanity-quota] [3] [-t] [120] looking for dbench program /usr/bin/dbench found dbench client file /usr/share/dbench/client.txt '/usr/share/dbench/client.txt' -> 'client.txt' running 'dbench 3 -t 120' on /mnt/lustre/d8.sanity-quota at Thu Apr 18 20:52:38 EDT 2024 waiting for dbench pid 18538 dbench version 4.00 - Copyright Andrew Tridgell 1999-2004 Running for 120 seconds with load 'client.txt' and minimum warmup 24 secs failed to create barrier semaphore 2 of 3 processes prepared for launch 0 sec 3 of 3 processes prepared for launch 0 sec releasing clients 3 308 33.93 MB/sec warmup 1 sec latency 17.095 ms 3 664 31.73 MB/sec warmup 2 sec latency 47.753 ms 3 952 21.80 MB/sec warmup 3 sec latency 105.064 ms 3 1244 17.31 MB/sec warmup 4 sec latency 27.225 ms 3 1706 15.74 MB/sec warmup 5 sec latency 42.978 ms 3 2179 13.38 MB/sec warmup 6 sec latency 39.530 ms 3 2581 12.41 MB/sec warmup 7 sec latency 54.119 ms 3 3297 13.26 MB/sec warmup 8 sec 
latency 47.401 ms 3 3813 13.25 MB/sec warmup 9 sec latency 19.526 ms 3 4028 12.02 MB/sec warmup 10 sec latency 18.685 ms 3 4261 10.99 MB/sec warmup 11 sec latency 87.492 ms 3 4552 10.26 MB/sec warmup 12 sec latency 60.848 ms 3 5054 10.42 MB/sec warmup 13 sec latency 45.605 ms 3 5512 9.76 MB/sec warmup 14 sec latency 29.789 ms 3 5997 9.44 MB/sec warmup 15 sec latency 37.458 ms 3 6570 9.87 MB/sec warmup 16 sec latency 37.880 ms 3 7242 10.21 MB/sec warmup 17 sec latency 33.272 ms 3 7563 9.86 MB/sec warmup 18 sec latency 12.897 ms 3 7866 9.41 MB/sec warmup 19 sec latency 75.567 ms 3 8217 9.18 MB/sec warmup 20 sec latency 35.102 ms 3 8559 9.18 MB/sec warmup 21 sec latency 21.119 ms 3 8923 8.81 MB/sec warmup 22 sec latency 63.424 ms 3 9442 8.62 MB/sec warmup 23 sec latency 30.937 ms 3 10568 15.79 MB/sec execute 1 sec latency 45.416 ms 3 10948 11.43 MB/sec execute 2 sec latency 15.951 ms 3 11190 7.94 MB/sec execute 3 sec latency 22.647 ms 3 11493 6.39 MB/sec execute 4 sec latency 88.587 ms 3 11892 6.03 MB/sec execute 5 sec latency 29.269 ms 3 12377 6.62 MB/sec execute 6 sec latency 36.543 ms 3 12880 5.90 MB/sec execute 7 sec latency 35.598 ms 3 13360 6.73 MB/sec execute 8 sec latency 34.669 ms 3 14006 7.47 MB/sec execute 9 sec latency 36.216 ms 3 14484 8.02 MB/sec execute 10 sec latency 21.314 ms 3 14835 7.40 MB/sec execute 11 sec latency 11.800 ms 3 15071 6.93 MB/sec execute 12 sec latency 83.359 ms 3 15459 6.74 MB/sec execute 13 sec latency 34.687 ms 3 15933 6.93 MB/sec execute 14 sec latency 40.734 ms 3 16416 6.57 MB/sec execute 15 sec latency 37.251 ms 3 16882 6.94 MB/sec execute 16 sec latency 34.172 ms 3 17515 7.30 MB/sec execute 17 sec latency 34.088 ms 3 17989 7.64 MB/sec execute 18 sec latency 15.357 ms 3 18267 7.29 MB/sec execute 19 sec latency 18.454 ms 3 18593 7.01 MB/sec execute 20 sec latency 80.972 ms 3 18972 6.90 MB/sec execute 21 sec latency 30.503 ms 3 19449 7.02 MB/sec execute 22 sec latency 42.837 ms 3 19930 6.78 MB/sec execute 23 sec latency 25.349 ms 3 20348 6.77 MB/sec execute 24 sec latency 39.732 ms 3 21035 7.23 MB/sec execute 25 sec latency 39.396 ms 3 21529 7.46 MB/sec execute 26 sec latency 23.986 ms 3 21853 7.26 MB/sec execute 27 sec latency 13.262 ms 3 22170 7.07 MB/sec execute 28 sec latency 84.141 ms 3 22521 6.97 MB/sec execute 29 sec latency 40.977 ms 3 22997 7.06 MB/sec execute 30 sec latency 45.640 ms 3 23462 6.88 MB/sec execute 31 sec latency 26.824 ms 3 23870 6.85 MB/sec execute 32 sec latency 36.040 ms 3 24519 7.19 MB/sec execute 33 sec latency 49.041 ms 3 25049 7.39 MB/sec execute 34 sec latency 31.388 ms 3 25369 7.24 MB/sec execute 35 sec latency 14.431 ms 3 25640 7.07 MB/sec execute 36 sec latency 99.548 ms 3 25970 6.98 MB/sec execute 37 sec latency 33.916 ms 3 26438 7.08 MB/sec execute 38 sec latency 30.318 ms 3 26833 6.92 MB/sec execute 39 sec latency 37.038 ms 3 27297 6.88 MB/sec execute 40 sec latency 30.266 ms 3 27897 7.09 MB/sec execute 41 sec latency 36.259 ms 3 28493 7.29 MB/sec execute 42 sec latency 31.419 ms 3 28847 7.23 MB/sec execute 43 sec latency 11.017 ms 3 29161 7.09 MB/sec execute 44 sec latency 60.639 ms 3 29478 7.01 MB/sec execute 45 sec latency 34.401 ms 3 29950 7.09 MB/sec execute 46 sec latency 36.751 ms 3 30342 6.96 MB/sec execute 47 sec latency 38.452 ms 3 30805 6.91 MB/sec execute 48 sec latency 38.065 ms 3 31334 7.08 MB/sec execute 49 sec latency 35.007 ms 3 31956 7.21 MB/sec execute 50 sec latency 33.491 ms 3 32340 7.21 MB/sec execute 51 sec latency 11.187 ms 3 32653 7.09 MB/sec execute 52 sec latency 80.535 ms 3 32961 7.03 
MB/sec execute 53 sec latency 47.848 ms 3 33402 7.05 MB/sec execute 54 sec latency 37.199 ms 3 33872 6.99 MB/sec execute 55 sec latency 35.884 ms 3 34336 6.93 MB/sec execute 56 sec latency 35.809 ms 3 34878 7.09 MB/sec execute 57 sec latency 36.654 ms 3 35512 7.21 MB/sec execute 58 sec latency 25.629 ms 3 35894 7.21 MB/sec execute 59 sec latency 14.122 ms 3 36207 7.10 MB/sec execute 60 sec latency 86.516 ms 3 36502 7.05 MB/sec execute 61 sec latency 51.659 ms 3 36934 7.06 MB/sec execute 62 sec latency 40.198 ms 3 37388 7.01 MB/sec execute 63 sec latency 32.652 ms 3 37844 6.96 MB/sec execute 64 sec latency 40.382 ms 3 38360 7.06 MB/sec execute 65 sec latency 47.169 ms 3 38973 7.19 MB/sec execute 66 sec latency 33.401 ms 3 39373 7.20 MB/sec execute 67 sec latency 16.807 ms 3 39684 7.11 MB/sec execute 68 sec latency 90.241 ms 3 39991 7.03 MB/sec execute 69 sec latency 41.547 ms 3 40455 7.07 MB/sec execute 70 sec latency 40.872 ms 3 40887 7.03 MB/sec execute 71 sec latency 47.517 ms 3 41383 6.98 MB/sec execute 72 sec latency 36.623 ms 3 41901 7.07 MB/sec execute 73 sec latency 36.141 ms 3 42541 7.19 MB/sec execute 74 sec latency 38.021 ms 3 42948 7.20 MB/sec execute 75 sec latency 16.976 ms 3 43263 7.12 MB/sec execute 76 sec latency 78.137 ms 3 43572 7.07 MB/sec execute 77 sec latency 48.654 ms 3 44042 7.08 MB/sec execute 78 sec latency 39.102 ms 3 44515 7.05 MB/sec execute 79 sec latency 33.078 ms 3 44984 7.01 MB/sec execute 80 sec latency 29.624 ms 3 45633 7.13 MB/sec execute 81 sec latency 35.334 ms 3 46191 7.20 MB/sec execute 82 sec latency 30.741 ms 3 46558 7.20 MB/sec execute 83 sec latency 11.952 ms 3 46897 7.13 MB/sec execute 84 sec latency 64.442 ms 3 47218 7.09 MB/sec execute 85 sec latency 38.717 ms 3 47714 7.13 MB/sec execute 86 sec latency 30.182 ms 3 48161 7.06 MB/sec execute 87 sec latency 26.866 ms 3 48647 7.05 MB/sec execute 88 sec latency 30.375 ms 3 49292 7.17 MB/sec execute 89 sec latency 32.968 ms 3 49837 7.25 MB/sec execute 90 sec latency 33.115 ms 3 50058 7.19 MB/sec execute 91 sec latency 19.574 ms 3 50304 7.12 MB/sec execute 92 sec latency 98.190 ms 3 50608 7.07 MB/sec execute 93 sec latency 43.135 ms 3 51071 7.10 MB/sec execute 94 sec latency 37.661 ms 3 51505 7.06 MB/sec execute 95 sec latency 35.616 ms 3 52009 7.03 MB/sec execute 96 sec latency 31.723 ms 3 52531 7.09 MB/sec execute 97 sec latency 41.099 ms 3 53200 7.19 MB/sec execute 98 sec latency 36.297 ms 3 53585 7.19 MB/sec execute 99 sec latency 21.894 ms 3 53829 7.13 MB/sec execute 100 sec latency 73.215 ms 3 54116 7.07 MB/sec execute 101 sec latency 93.611 ms 3 54572 7.10 MB/sec execute 102 sec latency 35.410 ms 3 55009 7.07 MB/sec execute 103 sec latency 45.544 ms 3 55498 7.04 MB/sec execute 104 sec latency 42.444 ms 3 56059 7.10 MB/sec execute 105 sec latency 30.719 ms 3 56727 7.18 MB/sec execute 106 sec latency 32.220 ms 3 57028 7.18 MB/sec execute 107 sec latency 17.820 ms 3 57206 7.12 MB/sec execute 108 sec latency 20.886 ms 3 57426 7.07 MB/sec execute 109 sec latency 115.265 ms 3 57763 7.04 MB/sec execute 110 sec latency 35.564 ms 3 58232 7.05 MB/sec execute 111 sec latency 34.192 ms 3 58686 7.02 MB/sec execute 112 sec latency 31.366 ms 3 59153 7.00 MB/sec execute 113 sec latency 33.892 ms 3 59778 7.06 MB/sec execute 114 sec latency 33.845 ms 3 60362 7.13 MB/sec execute 115 sec latency 31.504 ms 3 60746 7.13 MB/sec execute 116 sec latency 21.281 ms 3 61039 7.08 MB/sec execute 117 sec latency 64.681 ms 3 61405 7.05 MB/sec execute 118 sec latency 33.648 ms 3 61849 7.06 MB/sec execute 119 sec latency 
39.706 ms 3 cleanup 120 sec 0 cleanup 120 sec Operation Count AvgLat MaxLat ---------------------------------------- NTCreateX 27294 6.197 37.176 Close 20007 1.160 14.820 Rename 1149 8.264 21.723 Unlink 5533 3.643 22.308 Qpathinfo 24672 1.561 14.758 Qfileinfo 4267 0.360 2.646 Qfsinfo 4529 3.934 12.151 Sfileinfo 2202 4.319 12.322 Find 9532 0.664 16.060 WriteX 13419 1.524 22.636 ReadX 42501 0.058 2.287 LockX 86 1.032 1.685 UnlockX 86 1.152 3.561 Flush 1917 20.976 115.258 Throughput 7.05511 MB/sec 3 clients 3 procs max_latency=115.265 ms stopping dbench on /mnt/lustre/d8.sanity-quota at Thu Apr 18 20:55:02 EDT 2024 with return code 0 clean dbench files on /mnt/lustre/d8.sanity-quota /mnt/lustre/d8.sanity-quota /mnt/lustre/d8.sanity-quota removed directory: 'clients/client1/~dmtmp/PWRPNT' removed directory: 'clients/client1/~dmtmp/PARADOX' removed directory: 'clients/client1/~dmtmp/COREL' removed directory: 'clients/client1/~dmtmp/PM' removed directory: 'clients/client1/~dmtmp/WORDPRO' removed directory: 'clients/client1/~dmtmp/ACCESS' removed directory: 'clients/client1/~dmtmp/SEED' removed directory: 'clients/client1/~dmtmp/WORD' removed directory: 'clients/client1/~dmtmp/EXCEL' removed directory: 'clients/client1/~dmtmp' removed directory: 'clients/client1' removed directory: 'clients/client2/~dmtmp/WORD' removed directory: 'clients/client2/~dmtmp/WORDPRO' removed directory: 'clients/client2/~dmtmp/PM' removed directory: 'clients/client2/~dmtmp/COREL' removed directory: 'clients/client2/~dmtmp/ACCESS' removed directory: 'clients/client2/~dmtmp/SEED' removed directory: 'clients/client2/~dmtmp/PARADOX' removed directory: 'clients/client2/~dmtmp/EXCEL' removed directory: 'clients/client2/~dmtmp/PWRPNT' removed directory: 'clients/client2/~dmtmp' removed directory: 'clients/client2' removed directory: 'clients/client0/~dmtmp/PWRPNT' removed directory: 'clients/client0/~dmtmp/WORDPRO' removed directory: 'clients/client0/~dmtmp/EXCEL' removed directory: 'clients/client0/~dmtmp/PM' removed directory: 'clients/client0/~dmtmp/WORD' removed directory: 'clients/client0/~dmtmp/ACCESS' removed directory: 'clients/client0/~dmtmp/PARADOX' removed directory: 'clients/client0/~dmtmp/SEED' removed directory: 'clients/client0/~dmtmp/COREL' removed directory: 'clients/client0/~dmtmp' removed directory: 'clients/client0' removed directory: 'clients' removed 'client.txt' /mnt/lustre/d8.sanity-quota dbench successfully finished lfs project -C /mnt/lustre/d8.sanity-quota Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 8 (176s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_9 skipping SLOW test 9 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 10: Test quota for root user ======== 20:55:25 (1713488125) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs setquota: can't set quota for root usr/group/project. setquota failed: Operation not permitted lfs setquota: can't set quota for root usr/group/project. setquota failed: Operation not permitted lfs setquota: can't set quota for root usr/group/project. 
setquota failed: Operation not permitted Waiting 90s for 'ug' Updated after 2s: want 'ug' got 'ug' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 2048 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d10.sanity-quota/f10.sanity-quota] [count=3] [oflag=sync] 3+0 records in 3+0 records out 3145728 bytes (3.1 MB) copied, 0.271664 s, 11.6 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 10 (32s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 11: Chown/chgrp ignores quota ======= 20:55:58 (1713488158) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' Updated after 2s: want 'ug' got 'ug' lfs setquota: warning: inode hardlimit '1' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 1 - lustre-MDT0000_UUID 0 - 0 - 0 - 1 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 1, total allocated block limit: 0 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 11 (30s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_12a skipping SLOW test 12a debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 12b: Inode quota rebalancing ======== 20:56:29 (1713488189) SKIP: sanity-quota test_12b needs >= 2 MDTs SKIP 12b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 13: Cancel per-ID lock in the LRU list ========================================================== 20:56:32 (1713488192) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' Updated after 2s: want 'u' got 'u' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d13.sanity-quota/f13.sanity-quota] [count=1] [oflag=sync] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.100174 s, 10.5 MB/s Delete files... Wait for unlink objects finished... 
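The 'Write after cancel lru locks' steps earlier, and test 13 above, depend on dropping the client's cached per-ID quota locks. A minimal sketch of the usual client-side way to flush the LDLM LRU (the namespace glob is illustrative):

    # cancel all unused locks cached on this client, forcing quota re-acquisition on the next I/O
    lctl set_param ldlm.namespaces.*.lru_size=clear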
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 13 (36s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 14: check panic in qmt_site_recalc_cb ========================================================== 20:57:10 (1713488230) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Creating new pool oleg145-server: Pool lustre.qpool1 created Waiting 90s for '' Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d14.sanity-quota/f14.sanity-quota-0] [count=10] [oflag=direct] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.283151 s, 37.0 MB/s Stop ost1... Stopping /mnt/lustre-ost1 (opts:) on oleg145-server Removing lustre-OST0000_UUID from qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-OST0000 Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 14 (37s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 15: Set over 4T block quota ========= 20:57:49 (1713488269) sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 15 (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 16a: lfs quota should skip the inactive MDT/OST ========================================================== 20:58:01 (1713488281) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d16a.sanity-quota/f16a.sanity-quota] [count=50] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.22138 s, 42.9 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5124 0 512000 - 0 0 10240 - Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5124 0 512000 - 0 0 10240 - lustre-MDT0000_UUID 0 - 0 - 0 - 4096 - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 5124 - 65536 - - - - - Total allocated inode limit: 4096, total allocated block limit: 65536 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5124 0 512000 - 0 0 10240 - lustre-MDT0000_UUID 0 - 0 - 0 - 4096 - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 5124 - 65536 - - - - - Total allocated inode limit: 4096, total allocated block limit: 65536 Delete files... Wait for unlink objects finished... 
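The [inact] entries in test 16a correspond to an OST whose client-side OSC has been made inactive. A minimal sketch of one way to produce and observe that state; treat the exact parameter glob as an assumption for this client:

    # mark the OSC for OST0000 inactive on this client
    lctl set_param osc.lustre-OST0000-osc-*.active=0
    # verbose quota output now reports that target as [inact] instead of erroring out
    lfs quota -v -u quota_usr /mnt/lustre
    # restore the target
    lctl set_param osc.lustre-OST0000-osc-*.active=1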
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 16a (26s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 16b: lfs quota should skip the nonexistent MDT/OST ========================================================== 20:58:28 (1713488308) SKIP: sanity-quota test_16b needs >= 3 MDTs SKIP 16b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 17: DQACQ return recoverable error == 20:58:31 (1713488311) DQACQ return -ENOLCK sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=37 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.0866 s, 340 kB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete DQACQ return -EAGAIN sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=11 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.06501 s, 342 kB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete DQACQ return -ETIMEDOUT sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=110 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.07547 s, 341 kB/s Delete files... Wait for unlink objects finished... 
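The fail_val=/fail_loc= lines in test 17 above are Lustre's fault-injection knobs being toggled around each dd; a sketch of the pattern, with the values taken from the -ENOLCK pass of this run (errno 37), assuming a build with OBD_FAIL support such as the test VMs use:
# on the server holding the quota master, inject errno 37 at fail site 0xa04
lctl set_param fail_val=37 fail_loc=0xa04
# ... run the dd that should hit the injected DQACQ failure ...
lctl set_param fail_val=0 fail_loc=0    # clear the injection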
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete DQACQ return -ENOTCONN sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=107 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.13732 s, 334 kB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 17 (174s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 18: MDS failover while writing, no watchdog triggered (b14840) ========================================================== 21:01:28 (1713488488) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' Updated after 3s: want 'u' got 'u' User quota (limit: 200) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 204800 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write 100M (buffered) ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota] [count=100] UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 2210688 4352 2204288 1% /mnt/lustre[MDT:0] lustre-OST0000_UUID 3771392 3072 3750912 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3771392 3072 3766272 1% /mnt/lustre[OST:1] filesystem_summary: 7542784 6144 7517184 1% /mnt/lustre Fail mds for 40 seconds Failing mds1 on oleg145-server Stopping /mnt/lustre-mds1 (opts:) on oleg145-server 21:01:43 (1713488503) shut down Failover mds1 to oleg145-server mount facets: mds1 Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-MDT0000 21:01:55 (1713488515) targets are mounted 21:01:55 (1713488515) facet_failover done oleg145-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 24.2239 s, 4.3 MB/s (dd_pid=6674, time=5, timeout=600) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102406 0 204800 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 102405 - 114688 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 114688 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (limit: 200) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 204800 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write 100M (directio) ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota] [count=100] [oflag=direct] UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 2210560 3840 2204672 1% /mnt/lustre[MDT:0] lustre-OST0000_UUID 3771392 3072 3757056 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3771392 3072 3766272 1% /mnt/lustre[OST:1] filesystem_summary: 7542784 6144 7523328 1% /mnt/lustre Fail mds for 40 seconds Failing mds1 on oleg145-server Stopping /mnt/lustre-mds1 (opts:) on oleg145-server 21:02:37 (1713488557) shut down Failover mds1 to oleg145-server mount facets: mds1 Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-MDT0000 21:02:50 (1713488570) targets are mounted 21:02:50 (1713488570) facet_failover done oleg145-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 23.6673 s, 4.4 MB/s (dd_pid=9136, time=5, timeout=600) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102406 0 204800 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 102405 - 107525 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 107525 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 18 (124s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 19: Updating admin limits doesn't zero operational limits(b14790) ========================================================== 21:03:33 (1713488613) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Set user quota (limit: 5M) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 2 0 5120 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 2 0 5120 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 1 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Files for user (quota_usr), count=1: File: '/mnt/lustre/d19.sanity-quota/f19.sanity-quota' Size: 0 Blocks: 1 IO Block: 4194304 regular empty file Device: 2c54f966h/743766374d Inode: 144115205272509109 Links: 1 Access: (0644/-rw-r--r--) Uid: (60000/quota_usr) Gid: (60000/quota_usr) Access: 2024-04-18 21:03:43.000000000 -0400 Modify: 2024-04-18 21:03:43.000000000 -0400 Change: 2024-04-18 21:03:43.000000000 -0400 Birth: - Block quota isn't 0 (u:quota_usr:2). 
Update quota limits Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 2 0 5120 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 1 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 2 0 5120 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 1 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Files for user (quota_usr), count=1: File: '/mnt/lustre/d19.sanity-quota/f19.sanity-quota' Size: 0 Blocks: 1 IO Block: 4194304 regular empty file Device: 2c54f966h/743766374d Inode: 144115205272509109 Links: 1 Access: (0644/-rw-r--r--) Uid: (60000/quota_usr) Gid: (60000/quota_usr) Access: 2024-04-18 21:03:43.000000000 -0400 Modify: 2024-04-18 21:03:43.000000000 -0400 Change: 2024-04-18 21:03:43.000000000 -0400 Birth: - Block quota isn't 0 (u:quota_usr:2). running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota] [count=6] dd: error writing '/mnt/lustre/d19.sanity-quota/f19.sanity-quota': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.14187 s, 29.6 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4101 0 5120 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 4100 - 5118 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 5118 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota] [count=6] [seek=6] dd: error writing '/mnt/lustre/d19.sanity-quota/f19.sanity-quota': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0395615 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4101 0 5120 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 4100 - 5118 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 5118 Delete files... Wait for unlink objects finished... 
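Test 19 above drives a 5 MB user block hard limit and then re-applies the same admin limit to make sure the operational limit granted to the OST is not zeroed; a minimal sketch of that setup (flag values follow the log, the exact wrapper the script uses is not shown here):
lfs setquota -u quota_usr -b 0 -B 5M -i 0 -I 0 /mnt/lustre
lfs quota -u quota_usr /mnt/lustre            # shows the granted per-target limits
lfs setquota -u quota_usr -B 5M /mnt/lustre   # updating the admin limit again must not zero them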
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 19 (33s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 20: Test if setquota specifiers work properly (b15754) ========================================================== 21:04:07 (1713488647) PASS 20 (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 21: Setquota while writing & deleting (b16053) ========================================================== 21:04:15 (1713488655) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Set limit(block:10G; file:1000000) for user: quota_usr Set limit(block:10G; file:1000000) for group: quota_usr lfs setquota: warning: block hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Set limit(block:10G; file:) for project: 1000 lfs setquota: warning: block hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Set quota for 1 times Set quota for 2 times Set quota for 3 times Set quota for 4 times Set quota for 5 times Set quota for 6 times Set quota for 7 times Set quota for 8 times Set quota for 9 times Set quota for 10 times Set quota for 11 times Set quota for 12 times Set quota for 13 times Set quota for 14 times Set quota for 15 times Set quota for 16 times Set quota for 17 times Set quota for 18 times Set quota for 19 times Set quota for 20 times Set quota for 21 times (dd_pid=16417, time=0)successful (dd_pid=16418, time=2)successful Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 21 (61s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 22: enable/disable quota by 'lctl conf_param/set_param -P' ========================================================== 21:05:17 (1713488717) Set both mdt & ost quota type as ug Waiting 90s for 'ugp' Restart... 
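Test 21 finishes just above, and test 22 now toggles quota enforcement on the servers and restarts the filesystem; a hedged sketch of the persistent form behind the 'ug' and 'none' settings seen in this run (fsname lustre from this setup, executed on the MGS node):
lctl conf_param lustre.quota.mdt=ug     # enforce user+group quota on metadata
lctl conf_param lustre.quota.ost=ug     # and on object data
# later, disable enforcement again:
lctl conf_param lustre.quota.mdt=none
lctl conf_param lustre.quota.ost=none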
Stopping clients: oleg145-client.virtnet /mnt/lustre (opts:) Stopping client oleg145-client.virtnet /mnt/lustre opts: Stopping clients: oleg145-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg145-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg145-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg145-server sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/nbd0 on / type ext4 (ro,relatime,stripe=32,data=ordered) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11825) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) mqueue on /dev/mqueue type mqueue (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) none on /mnt type ramfs (rw,relatime) none on /var/lib/stateless/writable type tmpfs (rw,relatime) /dev/vda on /home/green/git/lustre-release type squashfs (ro,relatime) none on /var/cache/man type tmpfs (rw,relatime) none on /var/log type tmpfs (rw,relatime) none on /var/lib/dbus type tmpfs (rw,relatime) none on /tmp type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) none on /var/tmp type tmpfs (rw,relatime) none on /var/lib/NetworkManager type tmpfs (rw,relatime) none on /var/lib/systemd/random-seed type tmpfs (rw,relatime) none on /var/spool type tmpfs (rw,relatime) none on /var/lib/nfs type tmpfs (rw,relatime) none on /var/lib/gssproxy type tmpfs (rw,relatime) none on /var/lib/logrotate type tmpfs (rw,relatime) none on /etc type tmpfs (rw,relatime) none on /var/lib/rsyslog type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) 192.168.200.253:/exports/state/oleg145-client.virtnet on /var/lib/stateless/state type nfs4 
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.201.45,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg145-client.virtnet/boot on /boot type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.201.45,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg145-client.virtnet/etc/kdump.conf on /etc/kdump.conf type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.201.45,local_lock=none,addr=192.168.200.253) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) 192.168.200.253:/exports/testreports/42115/testresults/sanity-quota-zfs-centos7_x86_64-centos7_x86_64 on /tmp/testlogs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.201.45,local_lock=none,addr=192.168.200.253) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=382032k,mode=700) /dev/vda on /usr/sbin/mount.lustre type squashfs (ro,relatime) Checking servers environments Checking clients oleg145-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg145-server' oleg145-server: oleg145-server.virtnet: executing load_modules_local oleg145-server: Loading modules from /home/green/git/lustre-release/lustre oleg145-server: detected 4 online CPUs by sysfs oleg145-server: Force libcfs to create 2 CPU partitions oleg145-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory Setup mgs, mdt, osts Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre Starting client oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre Started clients oleg145-client.virtnet: 192.168.201.145@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff88012b101000.idle_timeout=debug osc.lustre-OST0001-osc-ffff88012b101000.idle_timeout=debug Verify if quota is enabled Set both mdt & ost quota type as none Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' Waiting 90s for 'none' Restart... 
Stopping clients: oleg145-client.virtnet /mnt/lustre (opts:) Stopping client oleg145-client.virtnet /mnt/lustre opts: Stopping clients: oleg145-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg145-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg145-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg145-server sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/nbd0 on / type ext4 (ro,relatime,stripe=32,data=ordered) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11825) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) mqueue on /dev/mqueue type mqueue (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) none on /mnt type ramfs (rw,relatime) none on /var/lib/stateless/writable type tmpfs (rw,relatime) /dev/vda on /home/green/git/lustre-release type squashfs (ro,relatime) none on /var/cache/man type tmpfs (rw,relatime) none on /var/log type tmpfs (rw,relatime) none on /var/lib/dbus type tmpfs (rw,relatime) none on /tmp type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) none on /var/tmp type tmpfs (rw,relatime) none on /var/lib/NetworkManager type tmpfs (rw,relatime) none on /var/lib/systemd/random-seed type tmpfs (rw,relatime) none on /var/spool type tmpfs (rw,relatime) none on /var/lib/nfs type tmpfs (rw,relatime) none on /var/lib/gssproxy type tmpfs (rw,relatime) none on /var/lib/logrotate type tmpfs (rw,relatime) none on /etc type tmpfs (rw,relatime) none on /var/lib/rsyslog type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) 192.168.200.253:/exports/state/oleg145-client.virtnet on /var/lib/stateless/state type nfs4 
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.201.45,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg145-client.virtnet/boot on /boot type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.201.45,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg145-client.virtnet/etc/kdump.conf on /etc/kdump.conf type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.201.45,local_lock=none,addr=192.168.200.253) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) 192.168.200.253:/exports/testreports/42115/testresults/sanity-quota-zfs-centos7_x86_64-centos7_x86_64 on /tmp/testlogs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.201.45,local_lock=none,addr=192.168.200.253) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=382032k,mode=700) /dev/vda on /usr/sbin/mount.lustre type squashfs (ro,relatime) Checking servers environments Checking clients oleg145-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg145-server' oleg145-server: oleg145-server.virtnet: executing load_modules_local oleg145-server: Loading modules from /home/green/git/lustre-release/lustre oleg145-server: detected 4 online CPUs by sysfs oleg145-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre Starting client oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre Started clients oleg145-client.virtnet: 192.168.201.145@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a3d16000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a3d16000.idle_timeout=debug Verify if quota is disabled PASS 22 (92s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 23: Quota should be honored with directIO (b16125) ========================================================== 21:06:50 (1713488810) SKIP: sanity-quota test_23 Overwrite in place is not guaranteed to be space neutral on ZFS SKIP 
23 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 24: lfs draws an asterix when limit is reached (b16646) ========================================================== 21:06:52 (1713488812) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Set user quota (limit: 5M) running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d24.sanity-quota/f24.sanity-quota] [count=6] 6+0 records in 6+0 records out 6291456 bytes (6.3 MB) copied, 0.173883 s, 36.2 MB/s /mnt/lustre 6149* 0 5120 - 1 0 0 - 2* - 2 - 1 - 0 - 6148* - 6148 - - - - - Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 24 (34s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 25: check indexes versions ========== 21:07:28 (1713488848) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.192055 s, 27.3 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=5] [seek=5] dd: error writing '/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.17219 s, 24.4 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0895941 s, 0.0 kB/s Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 25 (53s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27a: lfs quota/setquota should handle wrong arguments (b19612) ========================================================== 21:08:23 (1713488903) lfs quota: name and mount point must be specified Display disk usage and limits. usage: quota [-q] [-v] [-h] [-o OBD_UUID|-i MDT_IDX|-I OST_IDX] [{-u|-g|-p} UNAME|UID|GNAME|GID|PROJID] [--pool ] quota -t <-u|-g|-p> [--pool ] quota [-q] [-v] [h] {-U|-G|-P} [--pool ] quota -a {-u|-g|-p} [-s start_qid] [-e end_qid] lfs setquota: either -u or -g must be specified setquota failed: Unknown error -4 Set filesystem quotas. 
usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM PASS 27a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27b: lfs quota/setquota should handle user/group/project ID (b20200) ========================================================== 21:08:28 (1713488908) lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Disk quotas for usr 60000 (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp 60000 (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 PASS 27b (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27c: lfs quota should support human-readable output ========================================================== 21:08:34 (1713488914) PASS 27c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27d: lfs setquota should support fraction block limit ========================================================== 21:08:40 (1713488920) PASS 27d (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 30: Hard limit updates should 
not reset grace times ========================================================== 21:08:45 (1713488925) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.486257 s, 17.3 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8197* 4096 0 expired 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 8196 - 9220 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 9220 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [conv=notrunc] [oflag=append] [count=4] dd: error writing '/mnt/lustre/d30.sanity-quota/f30.sanity-quota': Disk quota exceeded 2+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.111673 s, 9.4 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 9221* 4096 0 expired 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 9220* - 9220 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 9220 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [conv=notrunc] [oflag=append] [count=4] dd: error writing '/mnt/lustre/d30.sanity-quota/f30.sanity-quota': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0463509 s, 0.0 kB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 30 (40s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 33: Basic usage tracking for user & group & project ========================================================== 21:09:27 (1713488967) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write files... lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-0 Iteration 0/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-1 Iteration 1/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-2 Iteration 2/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-3 Iteration 3/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-4 Iteration 4/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-5 Iteration 5/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-6 Iteration 6/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-7 Iteration 7/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-8 Iteration 8/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-9 Iteration 9/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-10 Iteration 10/10 completed Wait for setattr on objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Verify disk usage after write Verify inode usage after write Delete files... 
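The 'expired' grace column in test 30 above presupposes a short block grace period set ahead of time; a sketch of how grace periods are set and read back for user quotas, assuming the mount point from this run (plain seconds shown; the manual also documents w/d/h/m/s suffixes):
lfs setquota -t -u -b 60 -i 60 /mnt/lustre   # -t sets grace times instead of limits
lfs quota -t -u /mnt/lustre                  # report the current user grace times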
Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Verify disk usage after delete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 33 (69s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 34: Usage transfer for user & group & project ========================================================== 21:10:38 (1713489038) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write file... chown the file to user 60000 Wait for setattr on objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Verify disk usage for user 60000 chgrp the file to group 60000 Wait for setattr on objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Verify disk usage for group 60000 chown the file to user 60001 Wait for setattr on objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete change_project project id to 1000 lfs project -p 1000 /mnt/lustre/d34.sanity-quota/f34.sanity-quota Wait for setattr on objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Verify disk usage for user 60001/60000 and group 60000 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 34 (108s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 35: Usage is still accessible across reboot ========================================================== 21:12:28 (1713489148) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write file... lfs project -p 1000 /mnt/lustre/d35.sanity-quota/f35.sanity-quota Wait for setattr on objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Save disk usage before restart User 60000: 2052KB 1 inodes Group 60000: 2052KB 1 inodes Project 1000: 2052KB 1 inodes Restart... 
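Test 34 above tracks how usage follows a file as its owner, group and project change; a sketch of the transfer steps, with the ids and path taken from the log (the verification helpers of the script are omitted):
chown 60000 /mnt/lustre/d34.sanity-quota/f34.sanity-quota
chgrp 60000 /mnt/lustre/d34.sanity-quota/f34.sanity-quota
lfs project -p 1000 /mnt/lustre/d34.sanity-quota/f34.sanity-quota
lfs quota -u 60000 /mnt/lustre    # the blocks/inodes are now charged to uid 60000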
Stopping clients: oleg145-client.virtnet /mnt/lustre (opts:) Stopping client oleg145-client.virtnet /mnt/lustre opts: Stopping clients: oleg145-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg145-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg145-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg145-server Checking servers environments Checking clients oleg145-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory loading modules on: 'oleg145-server' oleg145-server: oleg145-server.virtnet: executing load_modules_local oleg145-server: Loading modules from /home/green/git/lustre-release/lustre oleg145-server: detected 4 online CPUs by sysfs oleg145-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre Starting client oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre Started clients oleg145-client.virtnet: 192.168.201.145@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a3e1e000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a3e1e000.idle_timeout=debug affected facets: Verify disk usage after restart Append to the same file... Verify space usage is increased Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 35 (99s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 37: Quota accounted properly for file created by 'lfs setstripe' ========================================================== 21:14:09 (1713489249) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.146206 s, 7.2 MB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
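Test 37 above writes into a file laid out with lfs setstripe and then checks that the blocks are accounted to the expected ID; a minimal sketch under that reading (stripe count and ownership are assumptions, only the file name follows the log):
lfs setstripe -c 1 /mnt/lustre/d37.sanity-quota/f37.sanity-quota
chown quota_usr /mnt/lustre/d37.sanity-quota/f37.sanity-quota
dd if=/dev/zero of=/mnt/lustre/d37.sanity-quota/f37.sanity-quota bs=1M count=1
lfs quota -u quota_usr /mnt/lustre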
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 37 (48s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 38: Quota accounting iterator doesn't skip id entries ========================================================== 21:15:00 (1713489300) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Create 10000 files... Found 10000 id entries Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 38 (488s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 39: Project ID interface works correctly ========================================================== 21:23:10 (1713489790) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -p 1024 /mnt/lustre/d39.sanity-quota/project Stopping clients: oleg145-client.virtnet /mnt/lustre (opts:) Stopping client oleg145-client.virtnet /mnt/lustre opts: Stopping clients: oleg145-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg145-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg145-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg145-server sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/nbd0 on / type ext4 (ro,relatime,stripe=32,data=ordered) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11825) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) mqueue on /dev/mqueue type mqueue (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) none on /mnt type ramfs 
(rw,relatime) none on /var/lib/stateless/writable type tmpfs (rw,relatime) /dev/vda on /home/green/git/lustre-release type squashfs (ro,relatime) none on /var/cache/man type tmpfs (rw,relatime) none on /var/log type tmpfs (rw,relatime) none on /var/lib/dbus type tmpfs (rw,relatime) none on /tmp type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) none on /var/tmp type tmpfs (rw,relatime) none on /var/lib/NetworkManager type tmpfs (rw,relatime) none on /var/lib/systemd/random-seed type tmpfs (rw,relatime) none on /var/spool type tmpfs (rw,relatime) none on /var/lib/nfs type tmpfs (rw,relatime) none on /var/lib/gssproxy type tmpfs (rw,relatime) none on /var/lib/logrotate type tmpfs (rw,relatime) none on /etc type tmpfs (rw,relatime) none on /var/lib/rsyslog type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) 192.168.200.253:/exports/state/oleg145-client.virtnet on /var/lib/stateless/state type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.201.45,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg145-client.virtnet/boot on /boot type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.201.45,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg145-client.virtnet/etc/kdump.conf on /etc/kdump.conf type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.201.45,local_lock=none,addr=192.168.200.253) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) 192.168.200.253:/exports/testreports/42115/testresults/sanity-quota-zfs-centos7_x86_64-centos7_x86_64 on /tmp/testlogs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.201.45,local_lock=none,addr=192.168.200.253) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=382032k,mode=700) /dev/vda on /usr/sbin/mount.lustre type squashfs (ro,relatime) Checking servers environments Checking clients oleg145-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory loading modules on: 'oleg145-server' oleg145-server: oleg145-server.virtnet: executing load_modules_local oleg145-server: Loading modules from /home/green/git/lustre-release/lustre oleg145-server: detected 4 online CPUs by sysfs oleg145-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg145-server: oleg145-server.virtnet: executing 
set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre Starting client oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre Started clients oleg145-client.virtnet: 192.168.201.145@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800b02ee800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800b02ee800.idle_timeout=debug Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 39 (67s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40a: Hard link across different project ID ========================================================== 21:24:19 (1713489859) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40a.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d40a.sanity-quota/dir2 ln: failed to create hard link '/mnt/lustre/d40a.sanity-quota/dir2/1_link' => '/mnt/lustre/d40a.sanity-quota/dir1/1': Invalid cross-device link Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 40a (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40b: Mv across different project ID ========================================================== 21:24:49 (1713489889) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40b.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d40b.sanity-quota/dir2 Delete files... Wait for unlink objects finished... 
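The 'Invalid cross-device link' error in test 40a above is the intended behavior: with inherited project IDs, hard links may not cross a project boundary. A sketch of the setup using the directories from this run:
lfs project -sp 1 /mnt/lustre/d40a.sanity-quota/dir1   # assign projid 1, inheritable
lfs project -sp 2 /mnt/lustre/d40a.sanity-quota/dir2
touch /mnt/lustre/d40a.sanity-quota/dir1/1
ln /mnt/lustre/d40a.sanity-quota/dir1/1 /mnt/lustre/d40a.sanity-quota/dir2/1_link
# -> ln fails with EXDEV because source and target belong to different projects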
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 40b (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40c: Remote child Dir inherit project quota properly ========================================================== 21:25:19 (1713489919) SKIP: sanity-quota test_40c needs >= 2 MDTs SKIP 40c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40d: Stripe Directory inherit project quota properly ========================================================== 21:25:23 (1713489923) SKIP: sanity-quota test_40d needs >= 2 MDTs SKIP 40d (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 41: df should return projid-specific values ========================================================== 21:25:26 (1713489926) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Waiting 90s for 'ugp' lfs project -sp 41000 /mnt/lustre/d41.sanity-quota/dir == global statfs: /mnt/lustre == Filesystem 1024-blocks Used Available Capacity Mounted on 192.168.201.145@tcp:/lustre 7542784 8192 7530496 1% /mnt/lustre Filesystem Inodes IUsed IFree IUse% Mounted on 192.168.201.145@tcp:/lustre 235836 380 235456 1% /mnt/lustre Disk quotas for prj 41000 (pid 41000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre/d41.sanity-quota/dir 12 0 102400 - 1 0 4096 - == project statfs (prjid=41000): /mnt/lustre/d41.sanity-quota/dir == Filesystem 1024-blocks Used Available Capacity Mounted on 192.168.201.145@tcp:/lustre 102400 12 102388 1% /mnt/lustre Filesystem Inodes IUsed IFree IUse% Mounted on 192.168.201.145@tcp:/lustre 4096 1 4095 1% /mnt/lustre llite.lustre-ffff8800b02ee800.statfs_project=0 llite.lustre-ffff8800b02ee800.statfs_project=1 Delete files... Wait for unlink objects finished... 
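Test 41 above verifies that, with statfs_project enabled on the client, df inside a project directory reports the project's own limits rather than the filesystem totals; a hedged sketch (projid, limits and paths follow the log, the llite instance name depends on the mount):
lfs project -sp 41000 /mnt/lustre/d41.sanity-quota/dir
lfs setquota -p 41000 -B 100M -I 4096 /mnt/lustre
lctl set_param llite.*.statfs_project=1
df -k /mnt/lustre/d41.sanity-quota/dir    # 1K-blocks now reflect the 102400 KB project limit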
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 41 (36s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 48: lfs quota --delete should delete quota project ID ========================================================== 21:26:05 (1713489965) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.047504 s, 22.1 MB/s - id: 60000 osd-zfs - id: 60000 pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0384521 s, 27.3 MB/s - id: 60000 cat: /proc/fs/lustre/osd-zfs/lustre-OST0000/quota_slave/limit_user: No such file or directory - id: 60000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0437504 s, 24.0 MB/s - id: 60000 osd-zfs - id: 60000 pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.051772 s, 20.3 MB/s - id: 60000 cat: /proc/fs/lustre/osd-zfs/lustre-OST0000/quota_slave/limit_group: No such file or directory - id: 60000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0435525 s, 24.1 MB/s - id: 10000 osd-zfs - id: 10000 pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0418734 s, 25.0 MB/s - id: 10000 cat: /proc/fs/lustre/osd-zfs/lustre-OST0000/quota_slave/limit_project: No such file or directory - id: 10000 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
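Test 48 above clears quota entries outright; per the usage text printed in test 27a ("setquota {-u|-g|-p} --delete FILESYSTEM"), the deletion goes through setquota rather than quota, so a sketch for the uid used in this run would look like:
lfs setquota -u 60000 --delete /mnt/lustre
lfs quota -u 60000 /mnt/lustre    # the entry should read back with zeroed limits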
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 48 (63s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 49: lfs quota -a prints the quota usage for all quota IDs ========================================================== 21:27:10 (1713490030) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 setquota for users and groups fail_loc=0xa09 lfs setquota: 1000 / 42 seconds fail_loc=0 903 0 0 102400 - 0 0 10240 - 904 0 0 102400 - 0 0 10240 - 905 0 0 102400 - 0 0 10240 - 906 0 0 102400 - 0 0 10240 - 907 0 0 102400 - 0 0 10240 - 908 0 0 102400 - 0 0 10240 - 909 0 0 102400 - 0 0 10240 - 910 0 0 102400 - 0 0 10240 - 911 0 0 102400 - 0 0 10240 - 912 0 0 102400 - 0 0 10240 - 913 0 0 102400 - 0 0 10240 - 914 0 0 102400 - 0 0 10240 - 915 0 0 102400 - 0 0 10240 - 916 0 0 102400 - 0 0 10240 - 917 0 0 102400 - 0 0 10240 - 918 0 0 102400 - 0 0 10240 - 919 0 0 102400 - 0 0 10240 - 920 0 0 102400 - 0 0 10240 - 921 0 0 102400 - 0 0 10240 - 922 0 0 102400 - 0 0 10240 - 923 0 0 102400 - 0 0 10240 - 924 0 0 102400 - 0 0 10240 - 925 0 0 102400 - 0 0 10240 - 926 0 0 102400 - 0 0 10240 - 927 0 0 102400 - 0 0 10240 - 928 0 0 102400 - 0 0 10240 - 929 0 0 102400 - 0 0 10240 - 930 0 0 102400 - 0 0 10240 - 931 0 0 102400 - 0 0 10240 - 932 0 0 102400 - 0 0 10240 - 933 0 0 102400 - 0 0 10240 - 934 0 0 102400 - 0 0 10240 - 935 0 0 102400 - 0 0 10240 - 936 0 0 102400 - 0 0 10240 - 937 0 0 102400 - 0 0 10240 - 938 0 0 102400 - 0 0 10240 - 939 0 0 102400 - 0 0 10240 - 940 0 0 102400 - 0 0 10240 - 941 0 0 102400 - 0 0 10240 - 942 0 0 102400 - 0 0 10240 - 943 0 0 102400 - 0 0 10240 - 944 0 0 102400 - 0 0 10240 - 945 0 0 102400 - 0 0 10240 - 946 0 0 102400 - 0 0 10240 - 947 0 0 102400 - 0 0 10240 - 948 0 0 102400 - 0 0 10240 - 949 0 0 102400 - 0 0 10240 - 950 0 0 102400 - 0 0 10240 - 951 0 0 102400 - 0 0 10240 - 952 0 0 102400 - 0 0 10240 - 953 0 0 102400 - 0 0 10240 - 954 0 0 102400 - 0 0 10240 - 955 0 0 102400 - 0 0 10240 - 956 0 0 102400 - 0 0 10240 - 957 0 0 102400 - 0 0 10240 - 958 0 0 102400 - 0 0 10240 - 959 0 0 102400 - 0 0 10240 - 960 0 0 102400 - 0 0 10240 - 961 0 0 102400 - 0 0 10240 - 962 0 0 102400 - 0 0 10240 - 963 0 0 102400 - 0 0 10240 - 964 0 0 102400 - 0 0 10240 - 965 0 0 102400 - 0 0 10240 - 966 0 0 102400 - 0 0 10240 - 967 0 0 102400 - 0 0 10240 - 968 0 0 102400 - 0 0 10240 - 969 0 0 102400 - 0 0 10240 - 970 0 0 102400 - 0 0 10240 - 971 0 0 102400 - 0 0 10240 - 972 0 0 102400 - 0 0 10240 - 973 0 0 102400 - 0 0 10240 - 974 0 0 102400 - 0 0 10240 - 975 0 0 102400 - 0 0 10240 - 976 0 0 102400 - 0 0 10240 - 977 0 0 102400 - 0 0 10240 - 978 0 0 102400 - 0 0 10240 - 979 0 0 102400 - 0 0 10240 - 980 0 0 102400 - 0 0 10240 - 981 0 0 102400 - 0 0 10240 - 982 0 0 102400 - 0 0 10240 - 983 0 0 102400 - 0 0 10240 - 984 0 0 102400 - 0 0 10240 - 985 0 0 102400 - 0 0 10240 - 986 0 0 102400 - 0 0 10240 - 987 0 0 102400 - 0 0 10240 - 988 0 0 102400 - 0 0 10240 - 989 0 0 102400 - 0 0 10240 - 990 0 0 102400 - 0 0 10240 - 991 0 0 102400 - 0 0 10240 - 992 0 0 102400 - 0 0 10240 - 993 0 0 102400 - 0 0 10240 - 994 0 0 102400 - 0 0 10240 - 995 0 0 102400 - 0 0 10240 - 996 0 0 102400 - 0 0 10240 - 997 0 0 102400 - 0 0 10240 - 998 0 0 102400 - 0 0 10240 - polkitd 0 0 102400 - 0 0 10240 - green 0 0 102400 - 0 0 10240 - quota_usr 0 0 0 - 0 [0] [0] - quota_2usr 0 0 0 - 0 0 0 - get all usr quota: 1000 / 0 seconds 903 0 0 204800 - 0 0 20480 - 904 0 0 204800 - 0 0 20480 - 905 0 0 204800 - 0 0 20480 - 906 0 0 204800 - 0 0 
20480 - 907 0 0 204800 - 0 0 20480 - 908 0 0 204800 - 0 0 20480 - 909 0 0 204800 - 0 0 20480 - 910 0 0 204800 - 0 0 20480 - 911 0 0 204800 - 0 0 20480 - 912 0 0 204800 - 0 0 20480 - 913 0 0 204800 - 0 0 20480 - 914 0 0 204800 - 0 0 20480 - 915 0 0 204800 - 0 0 20480 - 916 0 0 204800 - 0 0 20480 - 917 0 0 204800 - 0 0 20480 - 918 0 0 204800 - 0 0 20480 - 919 0 0 204800 - 0 0 20480 - 920 0 0 204800 - 0 0 20480 - 921 0 0 204800 - 0 0 20480 - 922 0 0 204800 - 0 0 20480 - 923 0 0 204800 - 0 0 20480 - 924 0 0 204800 - 0 0 20480 - 925 0 0 204800 - 0 0 20480 - 926 0 0 204800 - 0 0 20480 - 927 0 0 204800 - 0 0 20480 - 928 0 0 204800 - 0 0 20480 - 929 0 0 204800 - 0 0 20480 - 930 0 0 204800 - 0 0 20480 - 931 0 0 204800 - 0 0 20480 - 932 0 0 204800 - 0 0 20480 - 933 0 0 204800 - 0 0 20480 - 934 0 0 204800 - 0 0 20480 - 935 0 0 204800 - 0 0 20480 - 936 0 0 204800 - 0 0 20480 - 937 0 0 204800 - 0 0 20480 - 938 0 0 204800 - 0 0 20480 - 939 0 0 204800 - 0 0 20480 - 940 0 0 204800 - 0 0 20480 - 941 0 0 204800 - 0 0 20480 - 942 0 0 204800 - 0 0 20480 - 943 0 0 204800 - 0 0 20480 - 944 0 0 204800 - 0 0 20480 - 945 0 0 204800 - 0 0 20480 - 946 0 0 204800 - 0 0 20480 - 947 0 0 204800 - 0 0 20480 - 948 0 0 204800 - 0 0 20480 - 949 0 0 204800 - 0 0 20480 - 950 0 0 204800 - 0 0 20480 - 951 0 0 204800 - 0 0 20480 - 952 0 0 204800 - 0 0 20480 - 953 0 0 204800 - 0 0 20480 - 954 0 0 204800 - 0 0 20480 - 955 0 0 204800 - 0 0 20480 - 956 0 0 204800 - 0 0 20480 - 957 0 0 204800 - 0 0 20480 - 958 0 0 204800 - 0 0 20480 - 959 0 0 204800 - 0 0 20480 - 960 0 0 204800 - 0 0 20480 - 961 0 0 204800 - 0 0 20480 - 962 0 0 204800 - 0 0 20480 - 963 0 0 204800 - 0 0 20480 - 964 0 0 204800 - 0 0 20480 - 965 0 0 204800 - 0 0 20480 - 966 0 0 204800 - 0 0 20480 - 967 0 0 204800 - 0 0 20480 - 968 0 0 204800 - 0 0 20480 - 969 0 0 204800 - 0 0 20480 - 970 0 0 204800 - 0 0 20480 - 971 0 0 204800 - 0 0 20480 - 972 0 0 204800 - 0 0 20480 - 973 0 0 204800 - 0 0 20480 - 974 0 0 204800 - 0 0 20480 - 975 0 0 204800 - 0 0 20480 - 976 0 0 204800 - 0 0 20480 - 977 0 0 204800 - 0 0 20480 - 978 0 0 204800 - 0 0 20480 - 979 0 0 204800 - 0 0 20480 - 980 0 0 204800 - 0 0 20480 - 981 0 0 204800 - 0 0 20480 - 982 0 0 204800 - 0 0 20480 - 983 0 0 204800 - 0 0 20480 - 984 0 0 204800 - 0 0 20480 - 985 0 0 204800 - 0 0 20480 - 986 0 0 204800 - 0 0 20480 - 987 0 0 204800 - 0 0 20480 - 988 0 0 204800 - 0 0 20480 - 989 0 0 204800 - 0 0 20480 - 990 0 0 204800 - 0 0 20480 - 991 0 0 204800 - 0 0 20480 - 992 0 0 204800 - 0 0 20480 - 993 0 0 204800 - 0 0 20480 - 994 0 0 204800 - 0 0 20480 - systemd-network 0 0 204800 - 0 0 20480 - systemd-bus-proxy 0 0 204800 - 0 0 20480 - input 0 0 204800 - 0 0 20480 - polkitd 0 0 204800 - 0 0 20480 - ssh_keys 0 0 204800 - 0 0 20480 - green 0 0 204800 - 0 0 20480 - quota_usr 0 0 0 - 0 [0] [0] - quota_2usr 0 0 0 - 0 0 0 - get all grp quota: 1000 / 0 seconds Create 991 files... 
- open/close 636 (time 1713490098.37 total 10.01 last 63.55) total: 991 open/close in 15.47 seconds: 64.05 ops/second 951 6 0 102400 - 1 0 10240 - 952 6 0 102400 - 1 0 10240 - 953 6 0 102400 - 1 0 10240 - 954 6 0 102400 - 1 0 10240 - 955 6 0 102400 - 1 0 10240 - 956 6 0 102400 - 1 0 10240 - 957 6 0 102400 - 1 0 10240 - 958 6 0 102400 - 1 0 10240 - 959 6 0 102400 - 1 0 10240 - 960 6 0 102400 - 1 0 10240 - 961 6 0 102400 - 1 0 10240 - 962 6 0 102400 - 1 0 10240 - 963 6 0 102400 - 1 0 10240 - 964 6 0 102400 - 1 0 10240 - 965 6 0 102400 - 1 0 10240 - 966 6 0 102400 - 1 0 10240 - 967 6 0 102400 - 1 0 10240 - 968 6 0 102400 - 1 0 10240 - 969 6 0 102400 - 1 0 10240 - 970 6 0 102400 - 1 0 10240 - 971 6 0 102400 - 1 0 10240 - 972 6 0 102400 - 1 0 10240 - 973 6 0 102400 - 1 0 10240 - 974 6 0 102400 - 1 0 10240 - 975 6 0 102400 - 1 0 10240 - 976 6 0 102400 - 1 0 10240 - 977 6 0 102400 - 1 0 10240 - 978 6 0 102400 - 1 0 10240 - 979 6 0 102400 - 1 0 10240 - 980 6 0 102400 - 1 0 10240 - 981 6 0 102400 - 1 0 10240 - 982 6 0 102400 - 1 0 10240 - 983 6 0 102400 - 1 0 10240 - 984 6 0 102400 - 1 0 10240 - 985 6 0 102400 - 1 0 10240 - 986 6 0 102400 - 1 0 10240 - 987 6 0 102400 - 1 0 10240 - 988 6 0 102400 - 1 0 10240 - 989 6 0 102400 - 1 0 10240 - 990 6 0 102400 - 1 0 10240 - 991 6 0 102400 - 1 0 10240 - 992 6 0 102400 - 1 0 10240 - 993 6 0 102400 - 1 0 10240 - 994 6 0 102400 - 1 0 10240 - 995 6 0 102400 - 1 0 10240 - 996 6 0 102400 - 1 0 10240 - 997 6 0 102400 - 1 0 10240 - 998 6 0 102400 - 1 0 10240 - polkitd 6 0 102400 - 1 0 10240 - green 6 0 102400 - 1 0 10240 - time=0, rate=991/0 951 6 0 204800 - 1 0 20480 - 952 6 0 204800 - 1 0 20480 - 953 6 0 204800 - 1 0 20480 - 954 6 0 204800 - 1 0 20480 - 955 6 0 204800 - 1 0 20480 - 956 6 0 204800 - 1 0 20480 - 957 6 0 204800 - 1 0 20480 - 958 6 0 204800 - 1 0 20480 - 959 6 0 204800 - 1 0 20480 - 960 6 0 204800 - 1 0 20480 - 961 6 0 204800 - 1 0 20480 - 962 6 0 204800 - 1 0 20480 - 963 6 0 204800 - 1 0 20480 - 964 6 0 204800 - 1 0 20480 - 965 6 0 204800 - 1 0 20480 - 966 6 0 204800 - 1 0 20480 - 967 6 0 204800 - 1 0 20480 - 968 6 0 204800 - 1 0 20480 - 969 6 0 204800 - 1 0 20480 - 970 6 0 204800 - 1 0 20480 - 971 6 0 204800 - 1 0 20480 - 972 6 0 204800 - 1 0 20480 - 973 6 0 204800 - 1 0 20480 - 974 6 0 204800 - 1 0 20480 - 975 6 0 204800 - 1 0 20480 - 976 6 0 204800 - 1 0 20480 - 977 6 0 204800 - 1 0 20480 - 978 6 0 204800 - 1 0 20480 - 979 6 0 204800 - 1 0 20480 - 980 6 0 204800 - 1 0 20480 - 981 6 0 204800 - 1 0 20480 - 982 6 0 204800 - 1 0 20480 - 983 6 0 204800 - 1 0 20480 - 984 6 0 204800 - 1 0 20480 - 985 6 0 204800 - 1 0 20480 - 986 6 0 204800 - 1 0 20480 - 987 6 0 204800 - 1 0 20480 - 988 6 0 204800 - 1 0 20480 - 989 6 0 204800 - 1 0 20480 - 990 6 0 204800 - 1 0 20480 - 991 6 0 204800 - 1 0 20480 - 992 6 0 204800 - 1 0 20480 - 993 6 0 204800 - 1 0 20480 - 994 6 0 204800 - 1 0 20480 - systemd-network 6 0 204800 - 1 0 20480 - systemd-bus-proxy 6 0 204800 - 1 0 20480 - input 6 0 204800 - 1 0 20480 - polkitd 6 0 204800 - 1 0 20480 - ssh_keys 6 0 204800 - 1 0 20480 - green 6 0 204800 - 1 0 20480 - time=0, rate=991/0 - unlinked 0 (time 1713490114 ; total 0 ; last 0) total: 991 unlinks in 4 seconds: 247.750000 unlinks/second Create 991 files... 
- open/close 710 (time 1713490137.30 total 10.01 last 70.93) total: 991 open/close in 14.11 seconds: 70.22 ops/second 951 6 0 102400 - 1 0 10240 - 952 6 0 102400 - 1 0 10240 - 953 6 0 102400 - 1 0 10240 - 954 6 0 102400 - 1 0 10240 - 955 6 0 102400 - 1 0 10240 - 956 6 0 102400 - 1 0 10240 - 957 6 0 102400 - 1 0 10240 - 958 6 0 102400 - 1 0 10240 - 959 6 0 102400 - 1 0 10240 - 960 6 0 102400 - 1 0 10240 - 961 6 0 102400 - 1 0 10240 - 962 6 0 102400 - 1 0 10240 - 963 6 0 102400 - 1 0 10240 - 964 6 0 102400 - 1 0 10240 - 965 6 0 102400 - 1 0 10240 - 966 6 0 102400 - 1 0 10240 - 967 6 0 102400 - 1 0 10240 - 968 6 0 102400 - 1 0 10240 - 969 6 0 102400 - 1 0 10240 - 970 6 0 102400 - 1 0 10240 - 971 6 0 102400 - 1 0 10240 - 972 6 0 102400 - 1 0 10240 - 973 6 0 102400 - 1 0 10240 - 974 6 0 102400 - 1 0 10240 - 975 6 0 102400 - 1 0 10240 - 976 6 0 102400 - 1 0 10240 - 977 6 0 102400 - 1 0 10240 - 978 6 0 102400 - 1 0 10240 - 979 6 0 102400 - 1 0 10240 - 980 6 0 102400 - 1 0 10240 - 981 6 0 102400 - 1 0 10240 - 982 6 0 102400 - 1 0 10240 - 983 6 0 102400 - 1 0 10240 - 984 6 0 102400 - 1 0 10240 - 985 6 0 102400 - 1 0 10240 - 986 6 0 102400 - 1 0 10240 - 987 6 0 102400 - 1 0 10240 - 988 6 0 102400 - 1 0 10240 - 989 6 0 102400 - 1 0 10240 - 990 6 0 102400 - 1 0 10240 - 991 6 0 102400 - 1 0 10240 - 992 6 0 102400 - 1 0 10240 - 993 6 0 102400 - 1 0 10240 - 994 6 0 102400 - 1 0 10240 - 995 6 0 102400 - 1 0 10240 - 996 6 0 102400 - 1 0 10240 - 997 6 0 102400 - 1 0 10240 - 998 6 0 102400 - 1 0 10240 - polkitd 6 0 102400 - 1 0 10240 - green 6 0 102400 - 1 0 10240 - time=0, rate=991/0 951 6 0 204800 - 1 0 20480 - 952 6 0 204800 - 1 0 20480 - 953 6 0 204800 - 1 0 20480 - 954 6 0 204800 - 1 0 20480 - 955 6 0 204800 - 1 0 20480 - 956 6 0 204800 - 1 0 20480 - 957 6 0 204800 - 1 0 20480 - 958 6 0 204800 - 1 0 20480 - 959 6 0 204800 - 1 0 20480 - 960 6 0 204800 - 1 0 20480 - 961 6 0 204800 - 1 0 20480 - 962 6 0 204800 - 1 0 20480 - 963 6 0 204800 - 1 0 20480 - 964 6 0 204800 - 1 0 20480 - 965 6 0 204800 - 1 0 20480 - 966 6 0 204800 - 1 0 20480 - 967 6 0 204800 - 1 0 20480 - 968 6 0 204800 - 1 0 20480 - 969 6 0 204800 - 1 0 20480 - 970 6 0 204800 - 1 0 20480 - 971 6 0 204800 - 1 0 20480 - 972 6 0 204800 - 1 0 20480 - 973 6 0 204800 - 1 0 20480 - 974 6 0 204800 - 1 0 20480 - 975 6 0 204800 - 1 0 20480 - 976 6 0 204800 - 1 0 20480 - 977 6 0 204800 - 1 0 20480 - 978 6 0 204800 - 1 0 20480 - 979 6 0 204800 - 1 0 20480 - 980 6 0 204800 - 1 0 20480 - 981 6 0 204800 - 1 0 20480 - 982 6 0 204800 - 1 0 20480 - 983 6 0 204800 - 1 0 20480 - 984 6 0 204800 - 1 0 20480 - 985 6 0 204800 - 1 0 20480 - 986 6 0 204800 - 1 0 20480 - 987 6 0 204800 - 1 0 20480 - 988 6 0 204800 - 1 0 20480 - 989 6 0 204800 - 1 0 20480 - 990 6 0 204800 - 1 0 20480 - 991 6 0 204800 - 1 0 20480 - 992 6 0 204800 - 1 0 20480 - 993 6 0 204800 - 1 0 20480 - 994 6 0 204800 - 1 0 20480 - systemd-network 6 0 204800 - 1 0 20480 - systemd-bus-proxy 6 0 204800 - 1 0 20480 - input 6 0 204800 - 1 0 20480 - polkitd 6 0 204800 - 1 0 20480 - ssh_keys 6 0 204800 - 1 0 20480 - green 6 0 204800 - 1 0 20480 - time=0, rate=991/0 - unlinked 0 (time 1713490152 ; total 0 ; last 0) total: 991 unlinks in 4 seconds: 247.750000 unlinks/second fail_loc=0xa08 fail_loc=0 Stopping clients: oleg145-client.virtnet /mnt/lustre (opts:-f) Stopping client oleg145-client.virtnet /mnt/lustre opts:-f Stopping clients: oleg145-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-mds1 (opts:-f) on oleg145-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg145-server Stopping 
/mnt/lustre-ost2 (opts:-f) on oleg145-server oleg145-server: oleg145-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg145-server' oleg145-server: oleg145-server.virtnet: executing load_modules_local oleg145-server: Loading modules from /home/green/git/lustre-release/lustre oleg145-server: detected 4 online CPUs by sysfs oleg145-server: Force libcfs to create 2 CPU partitions oleg145-server: libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory Formatting mgs, mds, osts Format mds1: lustre-mdt1/mdt1 Format ost1: lustre-ost1/ost1 Format ost2: lustre-ost2/ost2 Checking servers environments Checking clients oleg145-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg145-server' oleg145-server: oleg145-server.virtnet: executing load_modules_local oleg145-server: Loading modules from /home/green/git/lustre-release/lustre oleg145-server: detected 4 online CPUs by sysfs oleg145-server: Force libcfs to create 2 CPU partitions oleg145-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Setup mgs, mdt, osts Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Commit the device label on lustre-mdt1/mdt1 Started lustre-MDT0000 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Commit the device label on lustre-ost1/ost1 Started lustre-OST0000 Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Commit the device label on lustre-ost2/ost2 Started lustre-OST0001 Starting client: oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre Starting client oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre Started clients oleg145-client.virtnet: 192.168.201.145@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff88012b212800.idle_timeout=debug osc.lustre-OST0001-osc-ffff88012b212800.idle_timeout=debug Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
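Test 49 above first pushes limits for roughly a thousand user and group IDs (timed under fail_loc=0xa09), then reads them all back in a single pass. A sketch of the two halves, with the ID range and limits as illustrative assumptions:

# Set a block/inode limit for a range of numeric UIDs...
for uid in $(seq 903 998); do
    lfs setquota -u "$uid" -B 100M -I 10240 /mnt/lustre
done
# ...then, as the test name suggests, dump usage and limits for every known
# user ID at once, one line per ID as in the listing above (-g for groups).
lfs quota -a -u /mnt/lustre
lfs quota -a -g /mnt/lustre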
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 49 (260s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 50: Test if lfs find --projid works ========================================================== 21:31:32 (1713490292) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d50.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d50.sanity-quota/dir2 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 50 (27s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 51: Test project accounting with mv/cp ========================================================== 21:32:01 (1713490321) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d51.sanity-quota/dir 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0142449 s, 73.6 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 51 (34s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 52: Rename normal file across project ID ========================================================== 21:32:37 (1713490357) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 0.74072 s, 142 MB/s Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102417 0 0 - 2 0 0 - Disk quotas for prj 1001 (pid 1001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 12 0 0 - 1 0 0 - pid 1001 is using default block quota setting pid 1001 is using default file quota setting rename '/mnt/lustre/d52.sanity-quota/t52_dir1' returned -1: Invalid cross-device link rename directory return 255 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 12 0 0 - 1 0 0 - Disk quotas for prj 1001 (pid 1001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102417 0 0 - 2 0 0 - pid 1001 is using default block quota setting pid 1001 is using default file quota setting Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 52 (35s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 53: Project inherit attribute could be cleared ========================================================== 21:33:14 (1713490394) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -s /mnt/lustre/d53.sanity-quota/dir lfs project -C /mnt/lustre/d53.sanity-quota/dir Delete files... Wait for unlink objects finished... 
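Test 52 above demonstrates the asymmetry between files and directories when renaming across project IDs: the file's usage simply moves from project 1000 to 1001, while the directory rename is refused with EXDEV. A sketch of both cases, with directory names assumed:

mkdir /mnt/lustre/p1000 /mnt/lustre/p1001
lfs project -sp 1000 /mnt/lustre/p1000
lfs project -sp 1001 /mnt/lustre/p1001

# A regular file may be renamed across the boundary; its blocks and inode
# are re-accounted to the target project.
dd if=/dev/zero of=/mnt/lustre/p1000/file bs=1M count=100
mv /mnt/lustre/p1000/file /mnt/lustre/p1001/

# A directory rename across projects returns EXDEV ("Invalid cross-device
# link"), so mv falls back to a copy-and-delete.
mkdir /mnt/lustre/p1000/subdir
mv /mnt/lustre/p1000/subdir /mnt/lustre/p1001/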
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 53 (17s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 54: basic lfs project interface test ========================================================== 21:33:32 (1713490412) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1000 /mnt/lustre/d54.sanity-quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d54.sanity-quota/f54.sanity-quota-0] [100] total: 100 create in 0.11 seconds: 930.11 ops/second lfs project -rCk /mnt/lustre/d54.sanity-quota lfs project -rC /mnt/lustre/d54.sanity-quota - unlinked 0 (time 1713490420 ; total 0 ; last 0) total: 100 unlinks in 1 seconds: 100.000000 unlinks/second Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 54 (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 55: Chgrp should be affected by group quota ========================================================== 21:33:52 (1713490432) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d55.sanity-quota/f55.sanity-quota] [bs=1024] [count=100000] 100000+0 records in 100000+0 records out 102400000 bytes (102 MB) copied, 13.4425 s, 7.6 MB/s Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 51200 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60001/60000/60001, groups: [chgrp] [quota_2usr] [/mnt/lustre/d55.sanity-quota/f55.sanity-quota] chgrp: changing group of '/mnt/lustre/d55.sanity-quota/f55.sanity-quota': Disk quota exceeded 0 Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 307200 - 0 0 0 - lustre-MDT0000_UUID 0 - 16384 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60001/60000/60001, groups: [chgrp] [quota_2usr] [/mnt/lustre/d55.sanity-quota/f55.sanity-quota] Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 307200 - 0 0 0 - lustre-MDT0000_UUID 0 - 114688 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 55 (47s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 56: lfs quota -t should work well === 21:34:41 (1713490481) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... 
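Test 55 above shows chgrp itself being subject to the target group's block quota: handing a ~100 MB file to a group capped at 50 MB fails with EDQUOT, and succeeds once the cap is raised to 300 MB. A sketch of that check, with names assumed:

lfs setquota -g quota_2usr -B 50M /mnt/lustre
dd if=/dev/zero of=/mnt/lustre/dir/bigfile bs=1024 count=100000
chgrp quota_2usr /mnt/lustre/dir/bigfile    # refused: Disk quota exceeded

lfs setquota -g quota_2usr -B 300M /mnt/lustre
chgrp quota_2usr /mnt/lustre/dir/bigfile    # now fits within the group limit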
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 56 (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 57: lfs project could tolerate errors ========================================================== 21:35:02 (1713490502) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 57 (27s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 58: project ID should be kept for new mirrors created by FID ========================================================== 21:35:31 (1713490531) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] test by mirror created with normal file running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=50] [conv=nocreat] [oflag=direct] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.57653 s, 20.3 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=30] [conv=nocreat] [seek=50] [oflag=direct] 30+0 records in 30+0 records out 31457280 bytes (31 MB) copied, 1.56343 s, 20.1 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror mirror: component 131073 not synced: Disk quota exceeded (122) lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror: '/mnt/lustre/d58.sanity-quota/f58.sanity-quota' llapi_mirror_resync_many: Disk quota exceeded. 
lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete test by mirror created with FID running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=50] [conv=nocreat] [oflag=direct] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.28169 s, 23.0 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=30] [conv=nocreat] [seek=50] [oflag=direct] 30+0 records in 30+0 records out 31457280 bytes (31 MB) copied, 1.58786 s, 19.8 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror mirror: component 131073 not synced: Disk quota exceeded (122) lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror: '/mnt/lustre/d58.sanity-quota/f58.sanity-quota' llapi_mirror_resync_many: Disk quota exceeded. lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 58 (74s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 59: lfs project dosen't crash kernel with project disabled ========================================================== 21:36:47 (1713490607) SKIP: sanity-quota test_59 ldiskfs only test SKIP 59 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 60: Test quota for root with setgid ========================================================== 21:36:51 (1713490611) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' lfs setquota: warning: inode hardlimit '100' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 100 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d60.sanity-quota/f60.sanity-quota] [99] total: 99 create in 0.25 seconds: 397.44 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d60.sanity-quota/foo] touch: cannot touch '/mnt/lustre/d60.sanity-quota/foo': Disk quota exceeded running as uid/gid/euid/egid 0/0/0/0, groups: [touch] [/mnt/lustre/d60.sanity-quota/foo] Delete files... Wait for unlink objects finished... 
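Test 60 above puts a 100-inode hard limit on the group (small enough to trigger the qunit-size warning), creates 99 files as the group member, and confirms the next create fails for that user but not for root. A sketch, with names assumed:

lfs setquota -g quota_usr -I 100 /mnt/lustre
lfs quota -g quota_usr /mnt/lustre

# 99 files fit; the next one from the unprivileged group member is refused...
sudo -u quota_usr touch /mnt/lustre/dir/foo    # Disk quota exceeded
# ...but root may still create in the same (setgid) directory.
touch /mnt/lustre/dir/foo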
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 60 (28s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_61 skipping SLOW test 61 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 62: Project inherit should be only changed by root ========================================================== 21:37:22 (1713490642) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [mkdir] [-p] [/mnt/lustre/d62.sanity-quota/] lfs project -s /mnt/lustre/d62.sanity-quota/ running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [chattr] [-P] [/mnt/lustre/d62.sanity-quota/] chattr: Operation not permitted while setting flags on /mnt/lustre/d62.sanity-quota/ Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 62 (15s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_63 skipping excluded test 63 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 64: lfs project on non dir/files should succeed ========================================================== 21:37:40 (1713490660) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 64 (28s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_65 skipping excluded test 65 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 66: nonroot user can not change project state in default ========================================================== 21:38:11 (1713490691) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 mdt.lustre-MDT0000.enable_chprojid_gid=0 lfs project -sp 1000 /mnt/lustre/d66.sanity-quota/foo running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [mkdir] [/mnt/lustre/d66.sanity-quota/foo/foo] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [0] [/mnt/lustre/d66.sanity-quota/foo/foo] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/foo': Operation not permitted running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-C] [/mnt/lustre/d66.sanity-quota/foo/foo] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/foo': Operation not permitted lfs project -C /mnt/lustre/d66.sanity-quota/foo/foo mdt.lustre-MDT0000.enable_chprojid_gid=-1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [1000] [/mnt/lustre/d66.sanity-quota/foo/foo] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-rC] [/mnt/lustre/d66.sanity-quota/foo/] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [1000] [/mnt/lustre/d66.sanity-quota/foo/bar] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/bar': Operation not permitted lfs project -p 1000 /mnt/lustre/d66.sanity-quota/foo/bar mdt.lustre-MDT0000.enable_chprojid_gid=0 Delete files... Wait for unlink objects finished... 
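Test 66 above gates non-root project changes on the MDT's enable_chprojid_gid parameter: with the default 0 only root may alter a file's project ID or inherit flag, while -1 opens it to the file owner (a positive value appears to restrict the privilege to members of that gid, though that case is not exercised here). A sketch, with the MDT wildcarded and paths assumed:

lctl set_param mdt.*.enable_chprojid_gid=0
sudo -u quota_usr lfs project -p 0 /mnt/lustre/foo/subdir     # Operation not permitted

lctl set_param mdt.*.enable_chprojid_gid=-1
sudo -u quota_usr lfs project -p 1000 /mnt/lustre/foo/subdir  # now allowed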
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 66 (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 67: quota pools recalculation ======= 21:38:41 (1713490721) SKIP: sanity-quota test_67 ZFS grants some block space together with inode SKIP 67 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 68: slave number in quota pool changed after each add/remove OST ========================================================== 21:38:45 (1713490725) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 nr result 3 Creating new pool oleg145-server: Pool lustre.qpool1 created Waiting 90s for '' Adding targets to pool oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Adding targets to pool oleg145-server: pool_add: lustre-OST0001_UUID is already in pool lustre.qpool1 pdsh@oleg145-client: oleg145-server: ssh exited with exit code 17 Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' Updated after 3s: want 'lustre-OST0000_UUID lustre-OST0001_UUID ' got 'lustre-OST0000_UUID lustre-OST0001_UUID ' Removing lustre-OST0000_UUID from qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Removing lustre-OST0001_UUID from qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 68 (41s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 69: EDQUOT at one of pools shouldn't affect DOM ========================================================== 21:39:28 (1713490768) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Creating new pool oleg145-server: Pool lustre.qpool1 created Waiting 90s for '' Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 User quota (block hardlimit:200 MB) User quota (block hardlimit:10 MB) running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [oflag=sync] 512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 5.87897 s, 89.2 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [seek=512] [oflag=sync] 512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 7.22135 s, 72.6 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0] [count=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.530014 s, 19.8 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0] [count=10] [seek=10] dd: error writing '/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0720722 s, 0.0 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, 
groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [oflag=sync] 512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 7.00821 s, 74.8 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [seek=512] [oflag=sync] 512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 8.38321 s, 62.5 kB/s Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 69 (72s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 70a: check lfs setquota/quota with a pool option ========================================================== 21:40:42 (1713490842) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 hard limit 20480 limit 20 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 20480 - 0 0 0 - Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 70a (29s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 70b: lfs setquota pool works properly ========================================================== 21:41:14 (1713490874) Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed PASS 70b (17s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 71a: Check PFL with quota pools ===== 21:41:33 (1713490893) SKIP: sanity-quota test_71a ZFS grants some block space together with inode SKIP 71a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 71b: Check SEL with quota pools ===== 21:41:37 (1713490897) SKIP: sanity-quota test_71b ZFS grants some block space together with inode SKIP 71b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 72: lfs quota --pool prints only pool's OSTs ========================================================== 21:41:39 (1713490899) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:50 MB) Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 used 0 Write... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.298899 s, 17.5 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.185988 s, 28.2 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0695801 s, 0.0 kB/s used 10240 Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 72 (50s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 73a: default limits at OST Pool Quotas ========================================================== 21:42:30 (1713490950) Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' LIMIT=20480 TESTFILE=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0 qdtype=-U qh=-B qid=quota_usr qprjid=1000 qres_type=data qs=-b qtype=-u sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 set to use default quota lfs setquota: '-d' deprecated, use '-D' or '--default' set default quota get default quota Disk default usr quota: Filesystem bquota blimit bgrace iquota ilimit igrace /mnt/lustre 0 0 10 0 0 10 Test not out of quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=10] [oflag=sync] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.970049 s, 10.8 MB/s Test out of quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync] dd: error writing '/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0': Disk quota exceeded 19+0 records in 18+0 records out 18878464 bytes (19 MB) copied, 5.00534 s, 3.8 MB/s Increase default quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync] 40+0 records in 40+0 records out 41943040 bytes (42 MB) copied, 3.9486 s, 10.6 MB/s Set quota to override default quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync] dd: error writing '/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0': Disk quota exceeded 19+0 records in 18+0 records out 18878464 bytes (19 MB) copied, 2.11056 s, 8.9 MB/s Set to use default quota again lfs setquota: '-d' deprecated, use '-D' or '--default' running as uid/gid/euid/egid 
60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync] 40+0 records in 40+0 records out 41943040 bytes (42 MB) copied, 3.78897 s, 11.1 MB/s Cleanup sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Waiting 90s for 'foo' PASS 73a (95s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 73b: default OST Pool Quotas limit for new user ========================================================== 21:44:08 (1713491048) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' set default quota for qpool1 Write from user that hasn't lqe running as uid/gid/euid/egid 500/500/500/500, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73b.sanity-quota/f73b.sanity-quota-1] [count=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.346491 s, 30.3 MB/s Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Waiting 90s for 'foo' Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 73b (46s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 74: check quota pools per user ====== 21:44:56 (1713491096) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg145-server: Pool lustre.qpool1 created Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Creating new pool oleg145-server: Pool lustre.qpool2 created Adding targets to pool oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 pool limit for qpool1 10240 pool limit for qpool2 51200 Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed lustre.qpool2 oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg145-server: Pool lustre.qpool2 destroyed Delete files... Wait for unlink objects finished... 
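Tests 70a through 74 above exercise OST pool quotas: an OST pool is created, the user is given a block limit scoped to that pool, and lfs quota reports per-pool usage alongside the global limit. A sketch of the basic flow (run where lctl can reach the MGS); the pool, OST and user names are assumptions:

lctl pool_new lustre.qpool1
lctl pool_add lustre.qpool1 lustre-OST0000

# A 20 MB block limit that applies only to objects on the pool's OSTs.
lfs setquota -u quota_usr -B 20M --pool qpool1 /mnt/lustre
# --pool restricts the report to that pool's OSTs, as in test 72's output.
lfs quota -u quota_usr --pool qpool1 /mnt/lustre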
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 74 (40s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 75: nodemap squashed root respects quota enforcement ========================================================== 21:45:38 (1713491138) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 On MGS 192.168.201.145, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.201.145, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.201.145, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.201.145, default.squash_uid = nodemap.default.squash_uid=60000 waiting 10 secs for sync 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.404152 s, 25.9 MB/s Write to exceed soft limit 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.221974 s, 46.1 kB/s mmap write when over soft limit sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Write... 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.406741 s, 25.8 MB/s Write out of block quota ... 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.483513 s, 21.7 MB/s dd: error writing '/mnt/lustre/d75.sanity-quota/f75.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0748566 s, 0.0 kB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0946447 s, 11.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0971482 s, 10.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0968904 s, 10.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0981421 s, 10.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0960315 s, 10.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.095665 s, 11.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0596633 s, 17.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0958713 s, 10.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0964396 s, 10.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0983611 s, 10.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0937159 s, 11.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0950933 s, 11.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0940607 s, 11.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0936693 s, 11.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0997101 s, 10.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.10556 s, 9.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.102935 s, 10.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.10113 s, 10.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0977792 s, 10.7 MB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-19': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0713413 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-20': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0758187 s, 0.0 kB/s dd: error writing 
'/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-21': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0754396 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-22': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0736873 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-23': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0729775 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-24': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0793642 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-25': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0715945 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-26': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0662113 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-27': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0733717 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-28': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0726731 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-29': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0750081 s, 0.0 kB/s 9+0 records in 9+0 records out 9437184 bytes (9.4 MB) copied, 0.42925 s, 22.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0987388 s, 10.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0908433 s, 11.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0913729 s, 11.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0894163 s, 11.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.09409 s, 11.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0927806 s, 11.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0939824 s, 11.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0931438 s, 11.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.094149 s, 11.1 MB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-9': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0718605 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-10': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0702166 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-11': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0706467 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-12': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0694837 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-13': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0703318 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-14': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0700413 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-15': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0711363 s, 0.0 kB/s dd: error writing 
'/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-16': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0768164 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-17': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0764151 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-18': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0749839 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-19': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0733721 s, 0.0 kB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.100762 s, 10.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0963277 s, 10.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.097207 s, 10.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.103386 s, 10.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.101166 s, 10.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0976039 s, 10.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0993652 s, 10.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.103118 s, 10.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0966905 s, 10.8 MB/s dd: error writing '/mnt/lustre/d75.sanity-quota/file': Disk quota exceeded 10+0 records in 9+0 records out 9437184 bytes (9.4 MB) copied, 0.552359 s, 17.1 MB/s On MGS 192.168.201.145, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.201.145, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.201.145, active = nodemap.active=0 waiting 10 secs for sync Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 75 (173s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 76: project ID 4294967295 should be not allowed ========================================================== 21:48:33 (1713491313) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Invalid project ID: 4294967295 Change or list project attribute for specified file or directory. usage: project [-d|-r] list project ID and flags on file(s) or directories project [-p id] [-s] [-r] set project ID and/or inherit flag for specified file(s) or directories project -c [-d|-r [-p id] [-0]] check project ID and flags on file(s) or directories, print outliers project -C [-d|-r] [-k] clear the project inherit flag and ID on the file or directory Delete files... Wait for unlink objects finished... 
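Test 76 above confirms that the all-ones 32-bit value is reserved as the invalid project ID and is rejected by lfs project with the usage text shown; a two-line sketch with an assumed path:

lfs project -p 4294967295 /mnt/lustre/dir/file   # rejected: Invalid project ID: 4294967295
lfs project -p 1000 /mnt/lustre/dir/file         # an ordinary ID is accepted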
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 76 (26s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 77: lfs setquota should fail in Lustre mount with 'ro' ========================================================== 21:49:02 (1713491342) Starting client: oleg145-client.virtnet: -o ro oleg145-server@tcp:/lustre /mnt/lustre2 lfs setquota: quotactl failed: Read-only file system setquota failed: Read-only file system PASS 77 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 78A: Check fallocate increase quota usage ========================================================== 21:49:06 (1713491346) fallocate on zfs doesn't consume space fallocate not supported SKIP: sanity-quota test_78A need >= 2.13.57 and ldiskfs for fallocate SKIP 78A (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 78a: Check fallocate increase projectid usage ========================================================== 21:49:10 (1713491350) fallocate on zfs doesn't consume space fallocate not supported SKIP: sanity-quota test_78a need >= 2.13.57 and ldiskfs for fallocate SKIP 78a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 79: access to non-existed dt-pool/info doesn't cause a panic ========================================================== 21:49:13 (1713491353) /tmp/f79.sanity-quota Creating new pool oleg145-server: Pool lustre.qpool1 created Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed PASS 79 (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 80: check for EDQUOT after OST failover ========================================================== 21:49:25 (1713491365) SKIP: sanity-quota test_80 ZFS grants some block space together with inode SKIP 80 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 81: Race qmt_start_pool_recalc with qmt_pool_free ========================================================== 21:49:28 (1713491368) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:20 MB) Creating new pool oleg145-server: Pool lustre.qpool1 created Waiting 90s for '' fail_loc=0x80000A07 fail_val=10 Adding targets to pool oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 Stopping /mnt/lustre-mds1 (opts:-f) on oleg145-server Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg145-server: oleg145-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg145-client: oleg145-server: ssh exited with exit code 1 Started lustre-MDT0000 Destroy the created pools: qpool1 lustre.qpool1 oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg145-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... 
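Test 77 above mounts a second client read-only and verifies that quota changes through it are refused with EROFS rather than silently dropped; a sketch with the server NID and paths as assumptions:

mount -t lustre -o ro oleg145-server@tcp:/lustre /mnt/lustre2
lfs setquota -u quota_usr -B 10M /mnt/lustre2   # quotactl failed: Read-only file system
umount /mnt/lustre2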
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 81 (47s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 82: verify more than 8 qids for single operation ========================================================== 21:50:17 (1713491417)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 82 (17s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 83: Setting default quota shouldn't affect grace time ========================================================== 21:50:37 (1713491437)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
ttt1
ttt2
ttt3
ttt4
ttt5
ttt1
ttt2
ttt3
ttt4
ttt5
ttt1
ttt2
ttt3
ttt4
ttt5
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 83 (17s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 84: Reset quota should fix the insane granted quota ========================================================== 21:50:56 (1713491456)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg145-server: Pool lustre.qpool1 created
Waiting 90s for ''
Adding targets to pool
oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0 10485760      -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
osd-zfs.lustre-OST0000.quota_slave.force_reint=1
0 /mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx	 objid	 objid	 group
	     0	    66	  0x42	 0x240000400
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=60] [conv=nocreat] [oflag=direct]
60+0 records in
60+0 records out
62914560 bytes (63 MB) copied, 3.12644 s, 20.1 MB/s
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0 10485760      -       2       0       0       -
lustre-MDT0000_UUID  13       -       0       -       2       -       0       -
lustre-OST0000_UUID 61445     - 1048576       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1048576
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0 5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61445     - 1048576       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1048576
fail_val=0
fail_loc=0xa08
fail_val=0
fail_loc=0xa08
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0       0       -       2       0       0       -
lustre-MDT0000_UUID  13       -       0       -       2       -       0       -
lustre-OST0000_UUID 61445     - 18446744073707374604 -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 18446744073707374604
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0 5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61445     - 18446744073707374604 -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 18446744073707374604
fail_val=0
fail_loc=0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0       0       -       2       0       0       -
lustre-MDT0000_UUID  13       -       0       -       2       -       0       -
lustre-OST0000_UUID 61445     -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0 5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61445     -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0 5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61445     -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0 5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61445     -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0  102400       -       2       0       0       -
lustre-MDT0000_UUID  13*      -      13       -       2       -       0       -
lustre-OST0000_UUID 61445*    -   61445       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 61445
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=200] [conv=nocreat] [oflag=direct]
dd: error writing '/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1': Disk quota exceeded
100+0 records in
99+0 records out
103809024 bytes (104 MB) copied, 3.06301 s, 33.9 MB/s
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre  101395       0  307200       -       2       0       0       -
lustre-MDT0000_UUID  13*      -      13       -       2       -       0       -
lustre-OST0000_UUID 101382    -  102387       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 102387
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=200] [conv=nocreat] [oflag=direct]
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 8.90885 s, 23.5 MB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg145-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
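In test 84 the fail_loc=0xa08 injection leaves the OST with an "insane" granted block limit of 18446744073707374604 KB, a value just below 2^64 that indicates a wrapped unsigned 64-bit counter; the subsequent quota reset brings the per-target granted limits back to 0 before real limits are applied again. A small sketch of how the slave state can be inspected and reintegrated, using the same tunable the log itself shows (the osd-zfs parameter prefix matches the ZFS OSD used in this run; an ldiskfs run would use osd-ldiskfs instead):

    # client: verbose per-target report (this is the "Disk quotas for grp ..." output above;
    # a value marked with '*' means usage has reached or exceeded the limit on that target)
    lfs quota -g quota_usr -v /mnt/lustre
    # OSS: inspect the quota slave state for the OST
    lctl get_param osd-zfs.lustre-OST0000.quota_slave.info
    # OSS: force the slave to re-integrate with the quota master, as the test does
    lctl set_param osd-zfs.lustre-OST0000.quota_slave.force_reint=1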
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 84 (73s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 85: do not hung at write with the least_qunit ========================================================== 21:52:11 (1713491531)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg145-server: Pool lustre.qpool1 created
Adding targets to pool
oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg145-server: Pool lustre.qpool2 created
Adding targets to pool
oleg145-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg145-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
Updated after 2s: want 'lustre-OST0000_UUID lustre-OST0001_UUID ' got 'lustre-OST0000_UUID lustre-OST0001_UUID '
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d85.sanity-quota/f85.sanity-quota-0] [count=10]
dd: error writing '/mnt/lustre/d85.sanity-quota/f85.sanity-quota-0': Disk quota exceeded
3+0 records in
2+0 records out
2269184 bytes (2.3 MB) copied, 0.189003 s, 12.0 MB/s
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg145-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
lustre.qpool2
oleg145-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg145-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg145-server: Pool lustre.qpool2 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
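The dd in test 85 stops with EDQUOT after roughly 2.2 MB of a 10 MB write: "3+0 records in / 2+0 records out" means dd read three 1 MiB blocks but only completed two before quota_usr ran out of block quota, and the test's point is that the write fails cleanly instead of hanging when the remaining grant is down at the least qunit. A minimal reproduction sketch, assuming a 2 MB hard limit for illustration (the limit the test actually sets is not shown in the log):

    # give quota_usr a ~2 MB block hard limit, then write 10 MB as that user
    lfs setquota -u quota_usr -B 2M /mnt/lustre
    su quota_usr -c 'dd if=/dev/zero of=/mnt/lustre/d85.sanity-quota/f85.sanity-quota-0 bs=1M count=10'
    # expected: "dd: error writing ...: Disk quota exceeded" after roughly 2 MB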
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 85 (54s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 86: Pre-acquired quota should be released if quota is over limit ========================================================== 21:53:07 (1713491587)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000]
 - create 2480 (time 1713491605.76 total 10.00 last 247.95)
 - create 4896 (time 1713491615.76 total 20.00 last 241.60)
total: 5000 create in 20.46 seconds: 244.37 ops/second
sleep 5 for ZFS zfs
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded
total: 0 create in 0.01 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000]
 - create 2429 (time 1713491678.88 total 10.00 last 242.88)
 - create 4849 (time 1713491688.89 total 20.01 last 241.83)
total: 5000 create in 20.61 seconds: 242.57 ops/second
sleep 5 for ZFS zfs
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded
total: 0 create in 0.01 seconds: 0.00 ops/second
lfs project -sp 1000 /mnt/lustre/d86.sanity-quota
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000]
 - create 2425 (time 1713491749.81 total 10.00 last 242.48)
 - create 4918 (time 1713491759.81 total 20.00 last 249.27)
total: 5000 create in 20.34 seconds: 245.87 ops/second
sleep 5 for ZFS zfs
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded
total: 0 create in 0.01 seconds: 0.00 ops/second
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 86 (231s)
debug_raw_pointers=0
debug_raw_pointers=0
== sanity-quota test complete, duration 6043 sec ========= 21:57:00 (1713491820)
=== sanity-quota: start cleanup 21:57:00 (1713491820) ===
=== sanity-quota: finish cleanup 21:57:00 (1713491820) ===
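Test 86 drives inode (file-count) quota for user, group, and project IDs in turn: each createmany run of 5000 files succeeds, after which further creates fail immediately with EDQUOT until the MDT releases the quota it pre-acquired. A hedged sketch of the same pattern with an explicit inode hard limit; the 5000-file limit is an assumption for illustration (the test's actual settings are not printed in the log), and createmany is the helper from the Lustre test suite:

    # assumed: cap quota_usr at 5000 files
    lfs setquota -u quota_usr -I 5000 /mnt/lustre
    # create up to the limit as quota_usr, then expect EDQUOT on the next batch
    su quota_usr -c 'createmany -m /mnt/lustre/d86.sanity-quota/test_dir/tfile- 5000'
    su quota_usr -c 'createmany -m /mnt/lustre/d86.sanity-quota/test_dir/tfile2- 10'   # should fail: Disk quota exceeded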