-----============= acceptance-small: sanity-quota ============----- Fri Apr 19 08:49:40 EDT 2024
excepting tests: 2 4a 63 65
skipping tests SLOW=no: 61 12a 9
=== sanity-quota: start setup 08:49:43 (1713530983) ===
oleg451-client.virtnet: executing check_config_client /mnt/lustre
oleg451-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg451-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b6076800.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b6076800.idle_timeout=debug
oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
=== sanity-quota: finish setup 08:49:51 (1713530991) ===
using SAVE_PROJECT_SUPPORTED=0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [true]
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d0_runas_test/f6751]
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [true]
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [touch] [/mnt/lustre/d0_runas_test/f6751]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 0: Test basic quota performance ===== 08:50:02 (1713531002)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/f0.sanity-quota-0] [count=10] [conv=fsync]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.340262 s, 30.8 MB/s
Waiting 90s for 'ugp'
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d0.sanity-quota/f0.sanity-quota-0] [count=10] [conv=fsync]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.291697 s, 35.9 MB/s
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 0 (34s)
debug_raw_pointers=0
debug_raw_pointers=0
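Test 0 above is the plain set-limit/write/verify/reset cycle. Reduced to its lfs/dd core it looks roughly like this sketch (the 10 MB figure, user name, and path are taken from the log; runas is the test suite's own setuid helper seen in the "running as" lines):

    lfs setquota -u quota_usr -b 0 -B 10M -i 0 -I 0 /mnt/lustre   # 10 MB block hard limit
    runas -u 60000 -g 60000 dd if=/dev/zero bs=1M count=10 conv=fsync \
        of=/mnt/lustre/d0.sanity-quota/f0.sanity-quota-0          # fits exactly under the limit
    lfs quota -u quota_usr /mnt/lustre                            # usage should show ~10240 kbytes
    lfs setquota -u quota_usr -b 0 -B 0 -i 0 -I 0 /mnt/lustre     # 0 = back to unlimited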
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1a: Block hard limit (normal use and out of quota) ========================================================== 08:50:38 (1713531038)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:10 MB)
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.163397 s, 32.1 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=5] [seek=5]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0': Disk quota exceeded
5+0 records in
4+0 records out
4722688 bytes (4.7 MB) copied, 0.194891 s, 24.2 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0787459 s, 0.0 kB/s
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
--------------------------------------
Group quota (block hardlimit:10 MB)
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.121464 s, 43.2 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=5] [seek=5]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1': Disk quota exceeded
5+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.206026 s, 20.4 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.066627 s, 0.0 kB/s
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
--------------------------------------
Project quota (block hardlimit:10 mb)
lfs project -p 1000 /mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.217934 s, 24.1 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=5] [seek=5]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2': Disk quota exceeded
5+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.27405 s, 15.3 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1a.sanity-quota/f1a.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0783828 s, 0.0 kB/s
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 1a (108s)
debug_raw_pointers=0
debug_raw_pointers=0
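Test 1a runs the same hard-limit check for each of the three quota id types. The setup reduces to roughly this sketch (the 10 MB limits and project id 1000 come from the log; each 5 MB write under the limit succeeds, the next 5 MB stops partway with EDQUOT, and a write past the limit returns zero bytes):

    lfs setquota -u quota_usr -B 10M /mnt/lustre    # user block hard limit
    lfs setquota -g quota_usr -B 10M /mnt/lustre    # group block hard limit
    lfs setquota -p 1000 -B 10M /mnt/lustre         # project block hard limit
    lfs project -p 1000 $testfile                   # tag the file with project 1000, as the log shows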
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1b: Quota pools: Block hard limit (normal use and out of quota) ========================================================== 08:52:28 (1713531148)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg451-server: Pool lustre.qpool1 created
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.140946 s, 37.2 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=5] [seek=5]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0': Disk quota exceeded
5+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.147002 s, 28.5 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0673838 s, 0.0 kB/s
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
--------------------------------------
Group quota (block hardlimit:20 MB)
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.138585 s, 37.8 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=5] [seek=5]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1': Disk quota exceeded
5+0 records in
4+0 records out
4919296 bytes (4.9 MB) copied, 0.133434 s, 36.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0597025 s, 0.0 kB/s
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
--------------------------------------
Project quota (block hardlimit:20 mb)
lfs project -p 1000 /mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.176337 s, 29.7 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.124874 s, 42.0 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1b.sanity-quota/f1b.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0665635 s, 0.0 kB/s
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg451-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 1b (119s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1c: Quota pools: check 3 pools with hardlimit only for global ========================================================== 08:54:29 (1713531269)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg451-server: Pool lustre.qpool1 created
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
Creating new pool
oleg451-server: Pool lustre.qpool2 created
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.511579 s, 20.5 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=10] [seek=10]
dd: error writing '/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0': Disk quota exceeded
10+0 records in
9+0 records out
9437184 bytes (9.4 MB) copied, 0.464322 s, 20.3 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0] [count=1] [seek=20]
dd: error writing '/mnt/lustre/d1c.sanity-quota/f1c.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0598518 s, 0.0 kB/s
qpool1 used 19460
qpool2 used 19460
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg451-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
lustre.qpool2
oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg451-server: Pool lustre.qpool2 destroyed
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 1c (74s)
debug_raw_pointers=0
debug_raw_pointers=0
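Tests 1b and 1c enforce the limit through quota pools instead of (or alongside) the global limit. The pool plumbing visible in the log corresponds to roughly this sketch (the lctl pool_* commands run against the MGS; lfs setquota --pool requires quota-pool support, Lustre 2.14+):

    lctl pool_new lustre.qpool1                                  # create the pool
    lctl pool_add lustre.qpool1 OST[0000-0001]                   # put both OSTs in it
    lfs setquota -u quota_usr -B 20M --pool qpool1 /mnt/lustre   # limit counted inside the pool only
    lfs quota -u quota_usr --pool qpool1 /mnt/lustre             # per-pool usage report
    lctl pool_remove lustre.qpool1 OST[0000-0001]                # teardown, as at the end of each test
    lctl pool_destroy lustre.qpool1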
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1d: Quota pools: check block hardlimit on different pools ========================================================== 08:55:45 (1713531345)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:20 MB)
Creating new pool
oleg451-server: Pool lustre.qpool1 created
Waiting 90s for ''
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg451-server: Pool lustre.qpool2 created
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.218904 s, 24.0 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=5] [seek=5]
dd: error writing '/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0': Disk quota exceeded
5+0 records in
4+0 records out
5214208 bytes (5.2 MB) copied, 0.25825 s, 20.2 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1d.sanity-quota/f1d.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0762904 s, 0.0 kB/s
qpool1 used 10244
qpool2 used 10244
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg451-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg451-server: Pool lustre.qpool2 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 1d (75s)
debug_raw_pointers=0
debug_raw_pointers=0
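Test 1d puts the same OSTs into two pools and gives them different limits; the write is cut off by the tightest applicable limit. A sketch of the idea (the per-pool limit values below are assumptions; the log only prints the resulting usage, 10244 KB in both pools, against a 20 MB global limit):

    lctl pool_new lustre.qpool1 && lctl pool_add lustre.qpool1 OST[0000-0001]
    lctl pool_new lustre.qpool2 && lctl pool_add lustre.qpool2 OST[0000-0001]
    lfs setquota -u quota_usr -B 10M --pool qpool1 /mnt/lustre   # assumed value
    lfs setquota -u quota_usr -B 20M --pool qpool2 /mnt/lustre   # assumed value
    # the write stops once qpool1's limit is reached, even though qpool2
    # and the global limit would still allow more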
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1e: Quota pools: global pool high block limit vs quota pool with small ========================================================== 08:57:02 (1713531422)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:53000000 MB)
Creating new pool
oleg451-server: Pool lustre.qpool1 created
Adding targets to pool
oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.294518 s, 17.8 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.224635 s, 23.3 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0669411 s, 0.0 kB/s
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1e.sanity-quota/f1e.sanity-quota-1] [count=20]
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.503897 s, 41.6 MB/s
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg451-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 1e (59s)
debug_raw_pointers=0
debug_raw_pointers=0
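Test 1e pairs an effectively unlimited global limit (53000000 MB) with a small limit on a pool that holds only OST0001: the first file hits EDQUOT at about 10 MB while the second writes its full 20 MB. How the test steers the two files onto different OSTs is not shown in the log; a plausible sketch uses explicit stripe placement:

    lctl pool_new lustre.qpool1
    lctl pool_add lustre.qpool1 OST[0001]                        # pool covers OST0001 only
    lfs setquota -u quota_usr -B 10M --pool qpool1 /mnt/lustre   # assumed pool limit
    lfs setstripe -c 1 -i 1 $dir/f0   # object on OST0001: the pool limit applies
    lfs setstripe -c 1 -i 0 $dir/f1   # object on OST0000: only the huge global limit applies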
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1f: Quota pools: correct qunit after removing/adding OST ========================================================== 08:58:03 (1713531483)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:200 MB)
Creating new pool
oleg451-server: Pool lustre.qpool1 created
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.247984 s, 21.1 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.214573 s, 24.4 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0751304 s, 0.0 kB/s
Removing lustre-OST0000_UUID from qpool1
oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.300747 s, 17.4 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.233372 s, 22.5 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0681763 s, 0.0 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg451-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 1f (73s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1g: Quota pools: Block hard limit with wide striping ========================================================== 08:59:18 (1713531558)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
osc.lustre-OST0000-osc-ffff8800b6076800.max_dirty_mb=1
osc.lustre-OST0001-osc-ffff8800b6076800.max_dirty_mb=1
User quota (block hardlimit:40 MB)
Creating new pool
oleg451-server: Pool lustre.qpool1 created
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
used 0
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=10]
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.41046 s, 7.4 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=10] [seek=10]
dd: error writing '/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0': Disk quota exceeded
9+0 records in
8+0 records out
8491008 bytes (8.5 MB) copied, 1.27296 s, 6.7 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0] [count=6] [seek=20]
dd: error writing '/mnt/lustre/d1g.sanity-quota/f1g.sanity-quota-0': Disk quota exceeded
2+0 records in
1+0 records out
1085440 bytes (1.1 MB) copied, 0.273693 s, 4.0 MB/s
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Destroy the created pools: qpool1
lustre.qpool1
oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg451-server: Pool lustre.qpool1 destroyed
osc.lustre-OST0000-osc-ffff8800b6076800.max_dirty_mb=467
osc.lustre-OST0001-osc-ffff8800b6076800.max_dirty_mb=467
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 1g (67s)
debug_raw_pointers=0
debug_raw_pointers=0
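Test 1g shrinks the client's per-OSC dirty cache before writing a widely striped file against a 40 MB pool hard limit, so quota is checked with almost no writeback slack and each OST only ever holds its own slice (qunit) of the granted limit. The knobs, as the log records them (the setstripe line is an assumption about how the wide striping is arranged):

    lctl set_param osc.lustre-OST*.max_dirty_mb=1     # cap dirty pages per OSC at 1 MB
    lfs setstripe -c -1 $dir/f0                       # assumed: stripe across all OSTs in the pool
    # EDQUOT arrives once any one stripe exhausts its per-OST share
    lctl set_param osc.lustre-OST*.max_dirty_mb=467   # restore the saved default afterwards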
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1h: Block hard limit test using fallocate ========================================================== 09:00:27 (1713531627)
fallocate on zfs doesn't consume space
fallocate not supported
SKIP: sanity-quota test_1h need >= 2.13.57 and ldiskfs for fallocate
SKIP 1h (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1i: Quota pools: different limit and usage relations ========================================================== 09:00:31 (1713531631)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:200 MB)
Creating new pool
oleg451-server: Pool lustre.qpool1 created
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.288861 s, 18.2 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.235869 s, 22.2 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0641029 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   10244       0       0       -       1       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID  10244*      -   10244       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 10244
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-1] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.191878 s, 27.3 MB/s
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.163984 s, 32.0 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.130092 s, 40.3 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0334377 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-1] [count=3]
3+0 records in
3+0 records out
3145728 bytes (3.1 MB) copied, 0.101192 s, 31.1 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2] [count=3]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2': Disk quota exceeded
2+0 records in
1+0 records out
1507328 bytes (1.5 MB) copied, 0.0826604 s, 18.2 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2] [seek=3] [count=1]
dd: error writing '/mnt/lustre/d1i.sanity-quota//f1i.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0313639 s, 0.0 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg451-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 1i (70s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 1j: Enable project quota enforcement for root ========================================================== 09:01:43 (1713531703)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
--------------------------------------
Project quota (block hardlimit:20 mb)
lfs project -p 1000 /mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0
osd-zfs.lustre-OST0000.quota_slave.root_prj_enable=1
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=20] [oflag=direct]
dd: error writing '/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.649096 s, 30.7 MB/s
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=10] [seek=20] [oflag=direct]
dd: error writing '/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0225016 s, 0.0 kB/s
osd-zfs.lustre-OST0000.quota_slave.root_prj_enable=0
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1j.sanity-quota/f1j.sanity-quota-0] [count=20] [seek=20] [oflag=direct]
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.632253 s, 33.2 MB/s
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
osd-zfs.lustre-OST0000.quota_slave.root_prj_enable=0
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 1j (38s)
debug_raw_pointers=0
debug_raw_pointers=0
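Test 1j is about the root_prj_enable toggle: project quota normally does not bind root, and the OSD parameter makes it binding. The log's sequence, condensed (the 20 MB setquota value is assumed from the "20 mb" banner; the rest mirrors the log):

    lfs project -p 1000 $file
    lfs setquota -p 1000 -B 20M /mnt/lustre                               # assumed limit
    lctl set_param osd-zfs.lustre-OST0000.quota_slave.root_prj_enable=1   # on the OSS
    dd if=/dev/zero of=$file bs=1M count=20 oflag=direct          # as root: stops at 20 MB, EDQUOT
    lctl set_param osd-zfs.lustre-OST0000.quota_slave.root_prj_enable=0
    dd if=/dev/zero of=$file bs=1M count=20 seek=20 oflag=direct  # as root: succeeds again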
SKIP: sanity-quota test_2 skipping excluded test 2
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 3a: Block soft limit (start timer, timer goes off, stop timer) ========================================================== 09:02:23 (1713531743)
User quota (soft limit:4 MB grace:60 seconds)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.145639 s, 28.8 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.119703 s, 85.5 kB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    5189*   4096       0      1m       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  5124      -    6148      -       -       -       -       -
lustre-OST0001_UUID    66      -     130      -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6278
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    5189       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  5124      -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.119 s, 86.1 kB/s
Grace time is 1m
Sleep through grace ...
...sleep 65 seconds
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215*   4096       0 expired       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6148      -    7172      -       -       -       -       -
lustre-OST0001_UUID    66      -     130      -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 7302
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215       0       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6148      -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=4096] [seek=6144]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0': Disk quota exceeded
2+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.0390263 s, 26.2 kB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00439281 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    7239*   4096       0 expired       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  7172*     -    7172      -       -       -       -       -
lustre-OST0001_UUID    66      -     130      -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 7302
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    7239       0       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  7172      -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      66    4096       0       -       1       0       0       -
lustre-MDT0000_UUID   1*      -       1       -       1       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -     130      -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 130
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      66       0       0       -       1       0       0       -
lustre-MDT0000_UUID   1       -       0       -       1       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.229711 s, 18.3 MB/s
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Group quota (soft limit:4 MB grace:60 seconds)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.271157 s, 15.5 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.272771 s, 37.5 kB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    5129       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  5129      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    5129*   4096       0      1m       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  5129      -    6148      -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6148
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00652833 s, 1.6 MB/s
Grace time is 1m
Sleep through grace ...
...sleep 65 seconds
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215       0       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6213      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215*   4096       0 expired       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6213*     -    6213      -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6213
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=4096] [seek=6144]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0103371 s, 0.0 kB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00470533 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215       0       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6213      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215*   4096       0 expired       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6213*     -    6213      -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6213
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      66       0       0       -       1       0       0       -
lustre-MDT0000_UUID   1       -       0       -       1       -       0       -
lustre-OST0000_UUID    66      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      66    4096       0       -       1       0       0       -
lustre-MDT0000_UUID   1*      -       1       -       1       -       0       -
lustre-OST0000_UUID    66      -    1090      -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1090
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-1] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.156274 s, 26.8 MB/s
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Project quota (soft limit:4 MB grace:60 sec)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
lfs project -p 1000 /mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.291727 s, 14.4 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.236746 s, 43.3 kB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    5129       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  5124      -       0       -       -       -       -       -
lustre-OST0001_UUID     6      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    5129       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  5124      -       0       -       -       -       -       -
lustre-OST0001_UUID     6      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    5124*   4096       0     59s       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  5124      -    6148      -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6148
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1m; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.273381 s, 37.5 kB/s
Grace time is 59s
Sleep through grace ...
...sleep 64 seconds
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215       0       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6148      -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215       0       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6148      -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6149*   4096       0 expired       1       0       0       -
lustre-MDT0000_UUID   2       -       0       -       1       -       0       -
lustre-OST0000_UUID  6148      -    7172      -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 7172
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1m; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=4096] [seek=6144]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2': Disk quota exceeded
2+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.0336129 s, 30.5 kB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00785218 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    7239       0       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  7172      -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    7239       0       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  7172      -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    7173*   4096       0 expired       1       0       0       -
lustre-MDT0000_UUID   2       -       0       -       1       -       0       -
lustre-OST0000_UUID  7172*     -    7172      -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 7172
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1m; Inode grace time: 1w
Unlink file to stop timer
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      66       0       0       -       1       0       0       -
lustre-MDT0000_UUID   1       -       0       -       1       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      66       0       0       -       1       0       0       -
lustre-MDT0000_UUID   1       -       0       -       1       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0    4096       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1m; Inode grace time: 1w
Block grace time: 1m; Inode grace time: 1w
lfs project -p 1000 /mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3a.sanity-quota/f3a.sanity-quota-2] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.246128 s, 17.0 MB/s
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 3a (364s)
debug_raw_pointers=0
debug_raw_pointers=0
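Test 3a (and test 3b below, its quota-pool variant) drives the soft-limit state machine: crossing the soft limit starts a grace timer, writes keep succeeding while it runs, and once it expires further writes fail until usage drops back under the limit, here by unlinking the file. A sketch of the setup using the log's 4 MB / 60 s values (the per-pool forms are assumptions based on test 3b's "limit 4 glbl_limit 8 grace 60 glbl_grace 120" banner):

    lfs setquota -u quota_usr -b 4M -B 0 /mnt/lustre                   # soft limit only, no hard limit
    lfs setquota -t -u --block-grace 60 --inode-grace 1w /mnt/lustre   # 60 s block grace
    # test 3b keeps a looser global pair (8 MB / 120 s) and sets the
    # tighter pair on the pool, e.g.:
    lfs setquota -u quota_usr -b 4M --pool qpool1 /mnt/lustre          # assumed per-pool form
    lfs setquota -t -u --block-grace 60 --pool qpool1 /mnt/lustre      # assumed per-pool grace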
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 3b: Quota pools: Block soft limit (start timer, expires, stop timer) ========================================================== 09:08:29 (1713532109)
limit 4 glbl_limit 8
grace 60 glbl_grace 120
User quota in qpool1(soft limit:4 MB grace:60 seconds)
Creating new pool
oleg451-server: Pool lustre.qpool1 created
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.102655 s, 40.9 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.272114 s, 37.6 kB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    5129    8192       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  5129      -    6148      -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6148
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    5129       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  5129      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 2m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.00631847 s, 1.6 MB/s
Quota info for qpool1:
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    5129*   4096       0     59s       0       0       0       -
Grace time is 59s
Sleep through grace ...
...sleep 64 seconds
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215    8192       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6213*     -    6213      -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6213
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215       0       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6213      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 2m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=4096] [seek=6144]
dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00940314 s, 0.0 kB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00747038 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215    8192       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6213*     -    6213      -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6213
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215       0       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6213      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 2m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      66    8192       0       -       1       0       0       -
lustre-MDT0000_UUID   1*      -       1       -       1       -       0       -
lustre-OST0000_UUID    66      -    1090      -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1090
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      66       0       0       -       1       0       0       -
lustre-MDT0000_UUID   1       -       0       -       1       -       0       -
lustre-OST0000_UUID    66      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 2m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-0] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.214669 s, 19.5 MB/s
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Group quota in qpool1(soft limit:4 MB grace:60 seconds)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [count=4]
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.172031 s, 24.4 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=4096]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.274239 s, 37.3 kB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    5124       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  5124      -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    5124    8192       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID  5124      -    6148      -       -       -       -       -
lustre-OST0001_UUID   0       -      41       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6189
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 2m; Inode grace time: 1w
Block grace time: 2m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=5120]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.230208 s, 44.5 kB/s
Quota info for qpool1:
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6148*   4096       0     59s       0       0       0       -
Grace time is 59s
Sleep through grace ...
...sleep 64 seconds
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215       0       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6148      -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    6215    8192       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  6148      -    7172      -       -       -       -       -
lustre-OST0001_UUID    66*     -      66      -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 7238
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 2m; Inode grace time: 1w
Block grace time: 2m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=4096] [seek=6144]
dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1': Disk quota exceeded
2+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.0240487 s, 42.6 kB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [bs=1K] [count=10] [seek=10240]
dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00774279 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    7239       0       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  7172      -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    7239    8192       0       -       2       0       0       -
lustre-MDT0000_UUID   2       -       0       -       2       -       0       -
lustre-OST0000_UUID  7172*     -    7172      -       -       -       -       -
lustre-OST0001_UUID    66*     -      66      -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 7238
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID   0       -       0       -       0       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID   0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 2m; Inode grace time: 1w
Block grace time: 2m; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      66       0       0       -       1       0       0       -
lustre-MDT0000_UUID   1       -       0       -       1       -       0       -
lustre-OST0000_UUID   0       -       0       -       -       -       -       -
lustre-OST0001_UUID    66      -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for
grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 8192 0 - 1 0 0 - lustre-MDT0000_UUID 1* - 1 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 66* - 66 - - - - - Total allocated inode limit: 0, total allocated block limit: 66 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-1] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.198876 s, 21.1 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Project quota in qpool1(soft:4 MB grace:60 sec) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -p 1000 /mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.170722 s, 24.6 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=4096] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.234506 s, 43.7 kB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5124 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5124 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 5124 8192 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 5124 - 6148 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 6148 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=5120] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.226468 s, 45.2 kB/s Quota info for qpool1: Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6148* 4096 0 1m 0 0 0 - Grace time is 1m Sleep through grace ... 
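The qpool1 soft-limit setup this block keeps exercising can be reproduced by hand roughly as follows (a sketch: the 4M/60s values mirror the log above, and --pool support in lfs quota is an assumption about the client version in use):

    # Soft block limit of 4M for project 1000, scoped to pool qpool1
    lfs setquota -p 1000 -b 4M -B 0 --pool qpool1 /mnt/lustre
    # Per-pool grace times (with -t, the -b/-i values are grace periods)
    lfs setquota -t -p -b 60 -i 604800 --pool qpool1 /mnt/lustre
    # Check pool-scoped usage; a trailing '*' marks an ID over its soft limit
    lfs quota -p 1000 --pool qpool1 /mnt/lustre

Once the grace timer expires, further writes fail with EDQUOT even though the hard limit is 0 (unlimited), which is exactly what the "Write after timer goes off" steps verify.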
...sleep 65 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6213 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6215 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 6213 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 6149 8192 0 - 1 0 0 - lustre-MDT0000_UUID 2 - 0 - 1 - 0 - lustre-OST0000_UUID 6148 - 7172 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 7172 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=4096] [seek=6144] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2': Disk quota exceeded 2+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.0399807 s, 25.6 kB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [bs=1K] [count=10] [seek=10240] dd: error writing '/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00808422 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7239 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 7237 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7239 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 7237 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 7173 8192 0 - 1 0 0 - lustre-MDT0000_UUID 2 - 0 - 1 - 0 - lustre-OST0000_UUID 7172* - 7172 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 7172 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Unlink file to stop timer sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 0 0 - 1 0 0 - lustre-MDT0000_UUID 1 - 0 - 1 - 0 - lustre-OST0000_UUID 66 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 0 0 - 1 0 0 - lustre-MDT0000_UUID 1 - 0 - 1 - 0 - lustre-OST0000_UUID 66 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total 
allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 8192 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w Block grace time: 2m; Inode grace time: 1w lfs project -p 1000 /mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2 Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3b.sanity-quota/f3b.sanity-quota-2] [count=4] 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.202622 s, 20.7 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Destroy the created pools: qpool1 lustre.qpool1 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed PASS 3b (376s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 3c: Quota pools: check block soft limit on different pools ========================================================== 09:14:47 (1713532487) limit 4 limit2 8 glbl_limit 12 grace1 70 grace2 60 glbl_grace 80 User quota in qpool2(soft:8 MB grace:60 seconds) Creating new pool oleg451-server: Pool lustre.qpool1 created Adding targets to pool oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' Creating new pool oleg451-server: Pool lustre.qpool2 created Adding targets to pool oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool2 oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write up to soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.372749 s, 22.5 MB/s Write to exceed soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=8192] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.240625 s, 42.6 kB/s mmap write when over soft limit running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [multiop] [/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0.mmap] [OT40960SMW] Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 9285 12288 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 9220 - 10244 - - - - - lustre-OST0001_UUID 66 - 130 - - - - - Total allocated inode limit: 0, total allocated block limit: 10374 Disk quotas for grp 
quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 9285 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 9220 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m20s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write before timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=9216] 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.226436 s, 45.2 kB/s Quota info for qpool2: Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 10309* 8192 0 59s 0 0 0 - Grace time is 59s Sleep through grace ... ...sleep 64 seconds Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 10311 12288 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 10244 - 11268 - - - - - lustre-OST0001_UUID 66 - 130 - - - - - Total allocated inode limit: 0, total allocated block limit: 11398 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 10311 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 10244 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m20s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write after timer goes off running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=4096] [seek=10240] dd: error writing '/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0': Disk quota exceeded 2+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.0335062 s, 30.6 kB/s Write after cancel lru locks running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=14336] dd: error writing '/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00753403 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 11335 12288 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 11268* - 11268 - - - - - lustre-OST0001_UUID 66 - 130 - - - - - Total allocated inode limit: 0, total allocated block limit: 11398 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 11335 0 0 - 2 0 0 - lustre-MDT0000_UUID 2 - 0 - 2 - 0 - lustre-OST0000_UUID 11268 - 0 - - - - - 
lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m20s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Unlink file to stop timer sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 12288 0 - 1 0 0 - lustre-MDT0000_UUID 1* - 1 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 66 - 130 - - - - - Total allocated inode limit: 0, total allocated block limit: 130 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 66 0 0 - 1 0 0 - lustre-MDT0000_UUID 1 - 0 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 66 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Block grace time: 1m20s; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Block grace time: 1w; Inode grace time: 1w Write ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.461761 s, 18.2 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed Waiting 90s for 'foo' Updated after 2s: want 'foo' got 'foo' lustre.qpool2 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2 oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg451-server: Pool lustre.qpool2 destroyed PASS 3c (151s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_4a skipping excluded test 4a debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 4b: Grace time strings handling ===== 09:17:21 (1713532641) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Valid grace strings test Block grace time: 1w3d; Inode grace time: 16m40s Block grace time: 5s; Inode grace time: 1w2d3h4m5s Invalid grace strings test lfs: bad inode-grace: 5c setquota failed: Unknown error -4 Set filesystem quotas. usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM lfs: bad inode-grace: 18446744073709551615 setquota failed: Unknown error -4 Set filesystem quotas. 
usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM
       setquota {-u|-g|-p} --delete FILESYSTEM
lfs: bad inode-grace: -1
setquota failed: Unknown error -4
Set filesystem quotas.
usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM
       setquota {-u|-g|-p} --delete FILESYSTEM
PASS 4b (8s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 5: Chown & chgrp successfully even out of block/file quota ========================================================== 09:17:31 (1713532651)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Waiting 90s for 'ugp'
Set quota limit (0 10M 0 10) for quota_usr.quota_usr
lfs setquota: warning: inode hardlimit '10' smaller than minimum qunit size
See 'lfs help setquota' or Lustre manual for details
lfs setquota: warning: inode hardlimit '10' smaller than minimum qunit size
See 'lfs help setquota' or Lustre manual for details
Create more than 10 files and more than 10 MB ...
total: 11 create in 0.04 seconds: 254.56 ops/second
lfs project -p 1000 /mnt/lustre/d5.sanity-quota/f5.sanity-quota-0_1
11+0 records in
11+0 records out
11534336 bytes (12 MB) copied, 0.210173 s, 54.9 MB/s
Chown files to quota_usr.quota_usr ...
- unlinked 0 (time 1713532667 ; total 0 ; last 0)
total: 11 unlinks in 1 seconds: 11.000000 unlinks/second
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 5 (37s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 6: Test dropping acquire request on master ========================================================== 09:18:11 (1713532691)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=1]
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0913607 s, 11.5 MB/s
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_2usr] [count=1]
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0395497 s, 26.5 MB/s
at_max=20
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc]
dd: error writing '/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr': Disk quota exceeded
3+0 records in
2+0 records out
2097152 bytes (2.1 MB) copied, 0.213926 s, 9.8 MB/s
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
fail_val=601
fail_loc=0x513
osd-zfs.lustre-OST0000.quota_slave.timeout=10
osd-zfs.lustre-OST0001.quota_slave.timeout=10
running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_2usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc]
3+0 records in
3+0 records out
3145728 bytes (3.1 MB) copied, 0.30592 s, 10.3 MB/s
Sleep for 41 seconds ...
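The knobs set just above are plain fail_loc fault injection plus a shortened slave timeout; a manual equivalent looks roughly like this (parameter names and values are taken verbatim from this run; 0x513 is the fail_loc this test uses to make the master drop quota acquire requests):

    # Make the quota master drop DQACQ requests (fail_val=601 as logged)
    lctl set_param fail_val=601 fail_loc=0x513
    # Shorten the quota slaves' acquire timeout so writers retry quickly
    lctl set_param osd-zfs.lustre-OST0000.quota_slave.timeout=10 \
                   osd-zfs.lustre-OST0001.quota_slave.timeout=10
    # ... run the I/O under test, then clear the injection
    lctl set_param fail_val=0 fail_loc=0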
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr] [count=3] [seek=1] [oflag=sync] [conv=notrunc] at_max=600 fail_val=0 fail_loc=0 dd: error writing '/mnt/lustre/d6.sanity-quota/f6.sanity-quota-quota_usr': Disk quota exceeded 3+0 records in 2+0 records out 3129344 bytes (3.1 MB) copied, 56.5665 s, 55.3 kB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 6 (108s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7a: Quota reintegration (global index) ========================================================== 09:20:01 (1713532801) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Stop ost1... Stopping /mnt/lustre-ost1 (opts:) on oleg451-server Enable quota & set quota limit for quota_usr Waiting 90s for 'ugp' Start ost1... Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota] [count=6] [oflag=sync] dd: error writing '/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.400829 s, 10.5 MB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Stop ost1... Stopping /mnt/lustre-ost1 (opts:) on oleg451-server Start ost1... 
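What test 7a just demonstrated is that quota settings made while an OST is offline still reach it: the slave re-fetches the global quota index from the master when it restarts (reintegration). A minimal manual version, using this setup's device names, the documented conf_param syntax, and an illustrative 5M limit:

    umount /mnt/lustre-ost1                               # take the OST down
    lctl conf_param lustre.quota.ost=ugp                  # on the MGS: enable u/g/p enforcement
    lfs setquota -u quota_usr -B 5M /mnt/lustre           # limit set while ost1 is offline
    mount -t lustre lustre-ost1/ost1 /mnt/lustre-ost1     # restart; the slave reintegrates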
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7a.sanity-quota/f7a.sanity-quota] [count=6] [oflag=sync] 6+0 records in 6+0 records out 6291456 bytes (6.3 MB) copied, 0.622509 s, 10.1 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 7a (91s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7b: Quota reintegration (slave index) ========================================================== 09:21:35 (1713532895) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7b.sanity-quota/f7b.sanity-quota] [count=1] [oflag=sync] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.123994 s, 8.5 MB/s fail_val=0 fail_loc=0xa02 Waiting 90s for 'ugp' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7b.sanity-quota/f7b.sanity-quota] [count=1] [seek=1] [oflag=sync] [conv=notrunc] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.101959 s, 10.3 MB/s fail_val=0 fail_loc=0 Restart ost to trigger reintegration... 
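Test 7b differs from 7a in which copy of the index is damaged: fail_loc 0xa02 (set above) leaves the slave's local index stale, so the restart that follows forces it to be rebuilt from the master. The slave's view can be inspected with a get_param sketch like this (the quota_slave file names follow the osd-zfs parameters used elsewhere in this log; exact output fields vary by release):

    lctl get_param osd-zfs.lustre-OST0000.quota_slave.info
    # after the restart, confirm which quota types are enforced (expect 'ugp')
    lctl get_param osd-zfs.lustre-OST0000.quota_slave.enabled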
Stopping /mnt/lustre-ost1 (opts:) on oleg451-server Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-OST0000 affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 7b (63s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7c: Quota reintegration (restart mds during reintegration) ========================================================== 09:22:40 (1713532960) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' fail_val=0 fail_loc=0xa03 Waiting 90s for 'ugp' osd-zfs.lustre-OST0000.quota_slave.force_reint=1 osd-zfs.lustre-OST0001.quota_slave.force_reint=1 Stop mds... Stopping /mnt/lustre-mds1 (opts:) on oleg451-server fail_val=0 fail_loc=0 Start mds... 
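Test 7c forces the reintegration window to stay open (force_reint=1 above) and then bounces the quota master itself, checking that a slave stuck mid-reintegration recovers once the MDS is back. The moving parts, as this run sets them:

    lctl set_param osd-zfs.lustre-OST0000.quota_slave.force_reint=1 \
                   osd-zfs.lustre-OST0001.quota_slave.force_reint=1
    umount /mnt/lustre-mds1                                    # stop the quota master
    mount -t lustre -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1

The 'glb[1],slv[1],reint[0]' string polled below is the suite's shorthand for "global and slave indexes up to date, no reintegration in flight."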
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-MDT0000 affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE Waiting 200s for 'glb[1],slv[1],reint[0]' Waiting 190s for 'glb[1],slv[1],reint[0]' Waiting 180s for 'glb[1],slv[1],reint[0]' Waiting 160s for 'glb[1],slv[1],reint[0]' Waiting 150s for 'glb[1],slv[1],reint[0]' Waiting 140s for 'glb[1],slv[1],reint[0]' Waiting 130s for 'glb[1],slv[1],reint[0]' Waiting 110s for 'glb[1],slv[1],reint[0]' Waiting 100s for 'glb[1],slv[1],reint[0]' Updated after 109s: want 'glb[1],slv[1],reint[0]' got 'glb[1],slv[1],reint[0]' affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7c.sanity-quota/f7c.sanity-quota] [count=6] [oflag=sync] dd: error writing '/mnt/lustre/d7c.sanity-quota/f7c.sanity-quota': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.456714 s, 9.2 MB/s Delete files... Wait for unlink objects finished... 
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 7c (156s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7d: Quota reintegration (Transfer index in multiple bulks) ========================================================== 09:25:19 (1713533119) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'none' fail_val=0 fail_loc=0x608 Waiting 90s for 'u' Updated after 3s: want 'u' got 'u' affected facets: ost1 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475 oleg451-server: *.lustre-OST0000.recovery_status status: COMPLETE affected facets: ost2 oleg451-server: oleg451-server.virtnet: executing _wait_recovery_complete *.lustre-OST0001.recovery_status 1475 oleg451-server: *.lustre-OST0001.recovery_status status: INACTIVE fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota] [count=21] [oflag=sync] dd: error writing '/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota': Disk quota exceeded 19+0 records in 18+0 records out 18878464 bytes (19 MB) copied, 1.89481 s, 10.0 MB/s running as uid/gid/euid/egid 60001/60001/60001/60001, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota-1] [count=21] [oflag=sync] dd: error writing '/mnt/lustre/d7d.sanity-quota/f7d.sanity-quota-1': Disk quota exceeded 19+0 records in 18+0 records out 18878464 bytes (19 MB) copied, 1.84281 s, 10.2 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 7d (43s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 7e: Quota reintegration (inode limits) ========================================================== 09:26:04 (1713533164) SKIP: sanity-quota test_7e needs >= 2 MDTs SKIP 7e (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 8: Run dbench with quota enabled ==== 09:26:07 (1713533167) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Set enough high limit for user: quota_usr Set enough high limit for group: quota_usr lfs project -sp 1000 /mnt/lustre/d8.sanity-quota Set enough high limit for project: 1000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [bash] [rundbench] [-D] [/mnt/lustre/d8.sanity-quota] [3] [-t] [120] looking for dbench program /usr/bin/dbench found dbench client file /usr/share/dbench/client.txt '/usr/share/dbench/client.txt' -> 'client.txt' running 'dbench 3 -t 120' on /mnt/lustre/d8.sanity-quota at Fri Apr 19 09:26:18 EDT 2024 waiting for dbench pid 16611 dbench version 4.00 - Copyright Andrew Tridgell 1999-2004 Running for 120 seconds with load 'client.txt' and minimum warmup 24 secs failed to create barrier semaphore 2 of 3 processes prepared for launch 0 sec 3 of 3 processes prepared for launch 0 sec releasing clients 3 207 24.31 MB/sec warmup 1 sec latency 28.562 ms 3 398 21.13 MB/sec warmup 2 sec latency 28.480 ms 3 663 21.15 MB/sec warmup 3 sec latency 33.463 ms 3 869 16.22 MB/sec warmup 4 sec latency 116.338 ms 3 1126 13.79 MB/sec warmup 5 sec latency 54.256 ms 3 1481 13.02 MB/sec warmup 6 sec latency 23.411 ms 3 1830 11.29 MB/sec warmup 7 sec latency 67.958 ms 3 2340 10.44 MB/sec warmup 
8 sec latency 47.947 ms 3 2555 9.63 MB/sec warmup 9 sec latency 64.830 ms 3 2982 10.05 MB/sec warmup 10 sec latency 58.982 ms 3 3368 9.67 MB/sec warmup 11 sec latency 47.179 ms 3 3765 9.94 MB/sec warmup 12 sec latency 17.154 ms 3 3947 9.24 MB/sec warmup 13 sec latency 20.364 ms 3 4155 8.59 MB/sec warmup 14 sec latency 20.471 ms 3 4441 8.15 MB/sec warmup 15 sec latency 115.109 ms 3 4768 7.91 MB/sec warmup 16 sec latency 55.478 ms 3 5197 7.99 MB/sec warmup 17 sec latency 69.950 ms 3 5761 7.65 MB/sec warmup 18 sec latency 52.194 ms 3 6031 7.54 MB/sec warmup 19 sec latency 65.992 ms 3 6475 7.89 MB/sec warmup 20 sec latency 56.404 ms 3 6890 7.78 MB/sec warmup 21 sec latency 65.271 ms 3 7270 7.90 MB/sec warmup 22 sec latency 19.185 ms 3 7473 7.72 MB/sec warmup 23 sec latency 20.991 ms 3 7910 1.50 MB/sec execute 1 sec latency 95.042 ms 3 8229 3.01 MB/sec execute 2 sec latency 46.406 ms 3 8650 5.13 MB/sec execute 3 sec latency 60.538 ms 3 9245 4.28 MB/sec execute 4 sec latency 58.738 ms 3 9551 4.29 MB/sec execute 5 sec latency 58.477 ms 3 9988 5.77 MB/sec execute 6 sec latency 68.599 ms 3 10384 6.16 MB/sec execute 7 sec latency 55.382 ms 3 10774 6.66 MB/sec execute 8 sec latency 32.871 ms 3 10983 6.34 MB/sec execute 9 sec latency 21.203 ms 3 11185 5.74 MB/sec execute 10 sec latency 20.637 ms 3 11438 5.32 MB/sec execute 11 sec latency 119.378 ms 3 11776 5.28 MB/sec execute 12 sec latency 44.811 ms 3 12156 5.58 MB/sec execute 13 sec latency 81.612 ms 3 12653 5.28 MB/sec execute 14 sec latency 63.449 ms 3 13070 5.25 MB/sec execute 15 sec latency 57.217 ms 3 13419 5.65 MB/sec execute 16 sec latency 52.105 ms 3 13847 5.72 MB/sec execute 17 sec latency 64.676 ms 3 14224 6.10 MB/sec execute 18 sec latency 55.118 ms 3 14484 5.99 MB/sec execute 19 sec latency 21.244 ms 3 14658 5.73 MB/sec execute 20 sec latency 21.044 ms 3 14906 5.50 MB/sec execute 21 sec latency 103.241 ms 3 15156 5.33 MB/sec execute 22 sec latency 77.778 ms 3 15445 5.27 MB/sec execute 23 sec latency 54.362 ms 3 15859 5.43 MB/sec execute 24 sec latency 63.775 ms 3 16531 5.41 MB/sec execute 25 sec latency 59.446 ms 3 16755 5.33 MB/sec execute 26 sec latency 55.495 ms 3 17211 5.65 MB/sec execute 27 sec latency 61.208 ms 3 17731 5.96 MB/sec execute 28 sec latency 45.558 ms 3 18009 5.90 MB/sec execute 29 sec latency 22.521 ms 3 18184 5.73 MB/sec execute 30 sec latency 21.251 ms 3 18422 5.57 MB/sec execute 31 sec latency 108.490 ms 3 18683 5.45 MB/sec execute 32 sec latency 55.818 ms 3 18988 5.41 MB/sec execute 33 sec latency 46.058 ms 3 19403 5.52 MB/sec execute 34 sec latency 62.699 ms 3 20079 5.50 MB/sec execute 35 sec latency 67.497 ms 3 20272 5.43 MB/sec execute 36 sec latency 60.122 ms 3 20702 5.67 MB/sec execute 37 sec latency 56.770 ms 3 21122 5.67 MB/sec execute 38 sec latency 59.963 ms 3 21525 5.85 MB/sec execute 39 sec latency 19.233 ms 3 21707 5.74 MB/sec execute 40 sec latency 20.634 ms 3 21911 5.60 MB/sec execute 41 sec latency 20.207 ms 3 22200 5.51 MB/sec execute 42 sec latency 105.902 ms 3 22576 5.49 MB/sec execute 43 sec latency 34.085 ms 3 22993 5.58 MB/sec execute 44 sec latency 72.844 ms 3 23631 5.55 MB/sec execute 45 sec latency 43.218 ms 3 23815 5.50 MB/sec execute 46 sec latency 62.799 ms 3 24296 5.69 MB/sec execute 47 sec latency 56.561 ms 3 24682 5.69 MB/sec execute 48 sec latency 56.629 ms 3 25089 5.84 MB/sec execute 49 sec latency 18.079 ms 3 25266 5.74 MB/sec execute 50 sec latency 21.393 ms 3 25482 5.63 MB/sec execute 51 sec latency 20.492 ms 3 25739 5.55 MB/sec execute 52 sec latency 139.244 ms 3 26016 5.53 
MB/sec execute 53 sec latency 58.596 ms 3 26384 5.60 MB/sec execute 54 sec latency 63.980 ms 3 26854 5.52 MB/sec execute 55 sec latency 53.841 ms 3 27292 5.51 MB/sec execute 56 sec latency 53.555 ms 3 27635 5.61 MB/sec execute 57 sec latency 65.263 ms 3 28068 5.64 MB/sec execute 58 sec latency 57.004 ms 3 28481 5.76 MB/sec execute 59 sec latency 62.352 ms 3 28713 5.72 MB/sec execute 60 sec latency 22.029 ms 3 28887 5.64 MB/sec execute 61 sec latency 21.389 ms 3 29141 5.56 MB/sec execute 62 sec latency 104.515 ms 3 29420 5.51 MB/sec execute 63 sec latency 70.678 ms 3 29804 5.62 MB/sec execute 64 sec latency 19.887 ms 3 30243 5.54 MB/sec execute 65 sec latency 65.639 ms 3 30794 5.53 MB/sec execute 66 sec latency 52.966 ms 3 31025 5.50 MB/sec execute 67 sec latency 60.852 ms 3 31440 5.62 MB/sec execute 68 sec latency 55.024 ms 3 31946 5.75 MB/sec execute 69 sec latency 46.751 ms 3 32232 5.72 MB/sec execute 70 sec latency 21.999 ms 3 32407 5.66 MB/sec execute 71 sec latency 21.033 ms 3 32642 5.59 MB/sec execute 72 sec latency 116.615 ms 3 32941 5.54 MB/sec execute 73 sec latency 68.500 ms 3 33232 5.51 MB/sec execute 74 sec latency 36.992 ms 3 33725 5.57 MB/sec execute 75 sec latency 47.678 ms 3 34329 5.56 MB/sec execute 76 sec latency 69.277 ms 3 34563 5.53 MB/sec execute 77 sec latency 60.071 ms 3 34978 5.63 MB/sec execute 78 sec latency 68.720 ms 3 35478 5.74 MB/sec execute 79 sec latency 45.392 ms 3 35770 5.73 MB/sec execute 80 sec latency 20.455 ms 3 35944 5.67 MB/sec execute 81 sec latency 21.379 ms 3 36184 5.60 MB/sec execute 82 sec latency 102.875 ms 3 36451 5.56 MB/sec execute 83 sec latency 108.305 ms 3 36996 5.65 MB/sec execute 84 sec latency 40.251 ms 3 37418 5.59 MB/sec execute 85 sec latency 67.354 ms 3 37909 5.58 MB/sec execute 86 sec latency 66.847 ms 3 38255 5.65 MB/sec execute 87 sec latency 46.334 ms 3 38678 5.66 MB/sec execute 88 sec latency 58.432 ms 3 39253 5.78 MB/sec execute 89 sec latency 47.856 ms 3 39433 5.74 MB/sec execute 90 sec latency 21.362 ms 3 39636 5.67 MB/sec execute 91 sec latency 21.012 ms 3 39869 5.63 MB/sec execute 92 sec latency 91.382 ms 3 40186 5.62 MB/sec execute 93 sec latency 49.259 ms 3 40562 5.66 MB/sec execute 94 sec latency 64.945 ms 3 41074 5.61 MB/sec execute 95 sec latency 57.837 ms 3 41487 5.60 MB/sec execute 96 sec latency 54.081 ms 3 41896 5.67 MB/sec execute 97 sec latency 49.428 ms 3 42566 5.80 MB/sec execute 98 sec latency 50.210 ms 3 43052 5.79 MB/sec execute 99 sec latency 10.574 ms 3 43330 5.75 MB/sec execute 100 sec latency 100.392 ms 3 43589 5.71 MB/sec execute 101 sec latency 54.387 ms 3 43960 5.75 MB/sec execute 102 sec latency 31.408 ms 3 44411 5.73 MB/sec execute 103 sec latency 77.814 ms 3 44966 5.72 MB/sec execute 104 sec latency 62.876 ms 3 45191 5.69 MB/sec execute 105 sec latency 66.445 ms 3 45600 5.77 MB/sec execute 106 sec latency 54.280 ms 3 46104 5.85 MB/sec execute 107 sec latency 57.506 ms 3 46432 5.83 MB/sec execute 108 sec latency 16.296 ms 3 46650 5.79 MB/sec execute 109 sec latency 19.746 ms 3 46885 5.75 MB/sec execute 110 sec latency 99.408 ms 3 47184 5.74 MB/sec execute 111 sec latency 45.876 ms 3 47594 5.77 MB/sec execute 112 sec latency 20.630 ms 3 48108 5.73 MB/sec execute 113 sec latency 72.771 ms 3 48560 5.72 MB/sec execute 114 sec latency 55.344 ms 3 48908 5.77 MB/sec execute 115 sec latency 65.566 ms 3 49330 5.78 MB/sec execute 116 sec latency 54.980 ms 3 49769 5.85 MB/sec execute 117 sec latency 50.086 ms 3 49998 5.83 MB/sec execute 118 sec latency 21.825 ms 3 50178 5.78 MB/sec execute 119 sec latency 
21.597 ms
 3        cleanup 120 sec
 0        cleanup 121 sec

Operation      Count    AvgLat    MaxLat
----------------------------------------
NTCreateX      22168     5.576    32.898
Close          16291     0.693    12.876
Rename           939    14.420    22.086
Unlink          4508     6.317    20.672
Qpathinfo      20105     2.675    15.120
Qfileinfo       3504     0.540     4.063
Qfsinfo         3699     7.353    15.935
Sfileinfo       1814     8.939    18.640
Find            7779     1.037    14.878
WriteX         10969     2.269    21.562
ReadX          34761     0.082     1.299
LockX             72     2.192     3.623
UnlockX           72     2.320     2.903
Flush           1582    29.519   139.235

Throughput 5.78435 MB/sec  3 clients  3 procs  max_latency=139.244 ms
stopping dbench on /mnt/lustre/d8.sanity-quota at Fri Apr 19 09:28:43 EDT 2024 with return code 0
clean dbench files on /mnt/lustre/d8.sanity-quota
/mnt/lustre/d8.sanity-quota /mnt/lustre/d8.sanity-quota
removed directory: 'clients/client2/~dmtmp/EXCEL'
removed directory: 'clients/client2/~dmtmp/PM'
removed directory: 'clients/client2/~dmtmp/PWRPNT'
removed directory: 'clients/client2/~dmtmp/WORDPRO'
removed directory: 'clients/client2/~dmtmp/SEED'
removed directory: 'clients/client2/~dmtmp/COREL'
removed directory: 'clients/client2/~dmtmp/PARADOX'
removed directory: 'clients/client2/~dmtmp/ACCESS'
removed directory: 'clients/client2/~dmtmp/WORD'
removed directory: 'clients/client2/~dmtmp'
removed directory: 'clients/client2'
removed directory: 'clients/client1/~dmtmp/PWRPNT'
removed directory: 'clients/client1/~dmtmp/SEED'
removed directory: 'clients/client1/~dmtmp/PARADOX'
removed directory: 'clients/client1/~dmtmp/EXCEL'
removed directory: 'clients/client1/~dmtmp/WORD'
removed directory: 'clients/client1/~dmtmp/PM'
removed directory: 'clients/client1/~dmtmp/ACCESS'
removed directory: 'clients/client1/~dmtmp/WORDPRO'
removed directory: 'clients/client1/~dmtmp/COREL'
removed directory: 'clients/client1/~dmtmp'
removed directory: 'clients/client1'
removed directory: 'clients/client0/~dmtmp/SEED'
removed directory: 'clients/client0/~dmtmp/PWRPNT'
removed directory: 'clients/client0/~dmtmp/PM'
removed directory: 'clients/client0/~dmtmp/EXCEL'
removed directory: 'clients/client0/~dmtmp/PARADOX'
removed directory: 'clients/client0/~dmtmp/WORD'
removed directory: 'clients/client0/~dmtmp/ACCESS'
removed directory: 'clients/client0/~dmtmp/WORDPRO'
removed directory: 'clients/client0/~dmtmp/COREL'
removed directory: 'clients/client0/~dmtmp'
removed directory: 'clients/client0'
removed directory: 'clients'
removed 'client.txt'
/mnt/lustre/d8.sanity-quota
dbench successfully finished
lfs project -C /mnt/lustre/d8.sanity-quota
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 8 (176s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity-quota test_9 skipping SLOW test 9
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 10: Test quota for root user ======== 09:29:05 (1713533345)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
lfs setquota: can't set quota for root usr/group/project.
setquota failed: Operation not permitted
lfs setquota: can't set quota for root usr/group/project.
setquota failed: Operation not permitted
lfs setquota: can't set quota for root usr/group/project.
setquota failed: Operation not permitted
Waiting 90s for 'ug'
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0    2048       -       0       0       0       -
lustre-MDT0000_UUID
                      0       -       0       -       0       -       0       -
lustre-OST0000_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0001_UUID
                      0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d10.sanity-quota/f10.sanity-quota] [count=3] [oflag=sync]
3+0 records in
3+0 records out
3145728 bytes (3.1 MB) copied, 0.312225 s, 10.1 MB/s
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 10 (32s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 11: Chown/chgrp ignores quota ======= 09:29:39 (1713533379)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Waiting 90s for 'ug'
Updated after 2s: want 'ug' got 'ug'
lfs setquota: warning: inode hardlimit '1' smaller than minimum qunit size
See 'lfs help setquota' or Lustre manual for details
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       1       -
lustre-MDT0000_UUID
                      0       -       0       -       0       -       1       -
lustre-OST0000_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0001_UUID
                      0       -       0       -       -       -       -       -
Total allocated inode limit: 1, total allocated block limit: 0
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 11 (34s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity-quota test_12a skipping SLOW test 12a
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 12b: Inode quota rebalancing ======== 09:30:15 (1713533415)
SKIP: sanity-quota test_12b needs >= 2 MDTs
SKIP 12b (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 13: Cancel per-ID lock in the LRU list ========================================================== 09:30:18 (1713533418)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Waiting 90s for 'u'
Updated after 2s: want 'u' got 'u'
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID
                      0       -       0       -       0       -       0       -
lustre-OST0000_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0001_UUID
                      0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d13.sanity-quota/f13.sanity-quota] [count=1] [oflag=sync]
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.13112 s, 8.0 MB/s
Delete files...
Wait for unlink objects finished...
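Test 13 covers the client-side cleanup path: per-ID quota locks sit in the client's DLM LRU, and the "cancel lru locks" steps seen earlier in this log drop them explicitly. On a live client the standard ldlm tunable does the same thing (not specific to this suite):

    # Flush all unused locks, per-ID quota locks included, from the osc namespaces
    lctl set_param ldlm.namespaces.*osc*.lru_size=clear

A subsequent write (the 1 MB sync dd above) then has to re-acquire its quota lock from scratch, which is the path being sanity-checked.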
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 13 (39s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 14: check panic in qmt_site_recalc_cb ========================================================== 09:30:59 (1713533459) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Creating new pool oleg451-server: Pool lustre.qpool1 created Adding targets to pool oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 Waiting 90s for 'lustre-OST0000_UUID ' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d14.sanity-quota/f14.sanity-quota-0] [count=10] [oflag=direct] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.574347 s, 18.3 MB/s Stop ost1... Stopping /mnt/lustre-ost1 (opts:) on oleg451-server Removing lustre-OST0000_UUID from qpool1 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-OST0000 Destroy the created pools: qpool1 lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 14 (47s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 15: Set over 4T block quota ========= 09:31:48 (1713533508) sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 15 (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 16a: lfs quota should skip the inactive MDT/OST ========================================================== 09:32:03 (1713533523) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d16a.sanity-quota/f16a.sanity-quota] [count=50] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.997457 s, 52.6 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 512000 - 0 0 10240 - Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 512000 - 0 0 10240 - lustre-MDT0000_UUID 0 - 0 - 0 - 4096 - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 4096, total allocated block limit: 0 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 512000 - 0 0 10240 - lustre-MDT0000_UUID 0 - 0 - 0 - 4096 - lustre-OST0000_UUID[inact] [0] - [0] - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 4096, total allocated block limit: 0 Delete files... Wait for unlink objects finished... 
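The bracketed entries in the test 16a tables above are the point of that test: lfs quota -v prints a target as lustre-OST0000_UUID[inact] with [0] placeholder values when its import is inactive, and skips it instead of failing the whole query. Reading these verbose tables in general:

    lfs quota -v -u quota_usr /mnt/lustre
    # the per-target 'limit' column is the amount granted to that target by the
    # master; a trailing '*' on kbytes or files means the ID is over a limit there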
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 16a (32s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 16b: lfs quota should skip the nonexistent MDT/OST ========================================================== 09:32:37 (1713533557) SKIP: sanity-quota test_16b needs >= 3 MDTs SKIP 16b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 17: DQACQ return recoverable error == 09:32:40 (1713533560) DQACQ return -ENOLCK sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' Updated after 2s: want 'ug' got 'ug' Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=37 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 2.75349 s, 381 kB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete DQACQ return -EAGAIN sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=11 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.09476 s, 339 kB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete DQACQ return -ETIMEDOUT sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=110 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.11561 s, 337 kB/s Delete files... Wait for unlink objects finished... 
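A pattern worth noting in test 17: the section headers pair fail_loc 0xa04 with fail_val set to the errno the master should return from DQACQ, so the fail_val values in this log are plain Linux errnos: 37 = ENOLCK, 11 = EAGAIN, 110 = ETIMEDOUT, and 107 = ENOTCONN in the section that follows. For example:

    lctl set_param fail_val=110 fail_loc=0xa04   # next acquires fail with -ETIMEDOUT
    # the dd still completes (in ~3 s rather than ~0.1 s) because the client
    # treats these errors as recoverable and retries the acquire
    lctl set_param fail_val=0 fail_loc=0         # clear the injection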
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete DQACQ return -ENOTCONN sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10240 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 fail_val=107 fail_loc=0xa04 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d17.sanity-quota/f17.sanity-quota] [count=1] [oflag=direct] fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 3.10408 s, 338 kB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 17 (178s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 18: MDS failover while writing, no watchdog triggered (b14840) ========================================================== 09:35:40 (1713533740) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' Updated after 2s: want 'u' got 'u' User quota (limit: 200) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 204800 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write 100M (buffered) ... 
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota] [count=100] UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 2210688 4224 2204416 1% /mnt/lustre[MDT:0] lustre-OST0000_UUID 3771392 3072 3747840 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3771392 3072 3766272 1% /mnt/lustre[OST:1] filesystem_summary: 7542784 6144 7514112 1% /mnt/lustre Fail mds for 40 seconds Failing mds1 on oleg451-server Stopping /mnt/lustre-mds1 (opts:) on oleg451-server 09:35:55 (1713533755) shut down Failover mds1 to oleg451-server mount facets: mds1 Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-MDT0000 09:36:09 (1713533769) targets are mounted 09:36:09 (1713533769) facet_failover done oleg451-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 23.5419 s, 4.5 MB/s (dd_pid=4760, time=3, timeout=600) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102406 0 204800 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 102405 - 114688 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 114688 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (limit: 200) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 204800 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Write 100M (directio) ... 
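The directio pass below repeats what the buffered pass above just demonstrated: a 100 MB write by the quota user keeps making progress while the MDS is failed over underneath it, and no watchdog fires. Roughly, in terms of the helpers named in the log (fail/facet_failover drive the stop/remount cycle; the sketch is illustrative, not the suite's literal code):

    # start the quota-limited write in the background
    runas -u 60000 -g 60000 dd if=/dev/zero \
        of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota bs=1M count=100 &
    dd_pid=$!
    fail mds1 40       # suite helper: stop mds1, remount it ~40s later
    wait $dd_pid       # expect "100+0 records out" and sane quota afterwards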
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d18.sanity-quota/f18.sanity-quota] [count=100] [oflag=direct] UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 2210560 3840 2204672 1% /mnt/lustre[MDT:0] lustre-OST0000_UUID 3771392 7168 3760128 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3771392 3072 3766272 1% /mnt/lustre[OST:1] filesystem_summary: 7542784 10240 7526400 1% /mnt/lustre Fail mds for 40 seconds Failing mds1 on oleg451-server Stopping /mnt/lustre-mds1 (opts:) on oleg451-server 09:36:49 (1713533809) shut down Failover mds1 to oleg451-server mount facets: mds1 Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-MDT0000 09:37:02 (1713533822) targets are mounted 09:37:02 (1713533822) facet_failover done oleg451-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 30.3693 s, 3.5 MB/s (dd_pid=7093, time=9, timeout=600) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102407 0 204800 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 102406 - 114688 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 114688 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 18 (136s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 19: Updating admin limits doesn't zero operational limits(b14790) ========================================================== 09:37:58 (1713533878) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Updated after 2s: want 'ugp' got 'ugp' Set user quota (limit: 5M) Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 2 0 5120 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 2 0 5120 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 1 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Files for user (quota_usr), count=1: File: '/mnt/lustre/d19.sanity-quota/f19.sanity-quota' Size: 0 Blocks: 1 IO Block: 4194304 regular empty file Device: 2c54f966h/743766374d Inode: 144115205272507901 Links: 1 Access: (0644/-rw-r--r--) Uid: (60000/quota_usr) Gid: (60000/quota_usr) Access: 2024-04-19 09:38:09.000000000 -0400 Modify: 2024-04-19 09:38:09.000000000 -0400 Change: 2024-04-19 09:38:09.000000000 -0400 Birth: - Block quota isn't 0 (u:quota_usr:2). 
Update quota limits Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 2 0 5120 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 1 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 2 0 5120 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 1 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Files for user (quota_usr), count=1: File: '/mnt/lustre/d19.sanity-quota/f19.sanity-quota' Size: 0 Blocks: 1 IO Block: 4194304 regular empty file Device: 2c54f966h/743766374d Inode: 144115205272507901 Links: 1 Access: (0644/-rw-r--r--) Uid: (60000/quota_usr) Gid: (60000/quota_usr) Access: 2024-04-19 09:38:09.000000000 -0400 Modify: 2024-04-19 09:38:09.000000000 -0400 Change: 2024-04-19 09:38:09.000000000 -0400 Birth: - Block quota isn't 0 (u:quota_usr:2). running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota] [count=6] dd: error writing '/mnt/lustre/d19.sanity-quota/f19.sanity-quota': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.207282 s, 20.2 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4101 0 5120 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 4100 - 5118 - - - - - Total allocated inode limit: 0, total allocated block limit: 5118 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d19.sanity-quota/f19.sanity-quota] [count=6] [seek=6] dd: error writing '/mnt/lustre/d19.sanity-quota/f19.sanity-quota': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0607313 s, 0.0 kB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 4101 0 5120 - 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 4100 - 5118 - - - - - Total allocated inode limit: 0, total allocated block limit: 5118 Delete files... Wait for unlink objects finished... 
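What test 19 is guarding (b14790): re-issuing setquota must not zero the limits already granted to the MDT/OST slaves, so enforcement keeps working after an administrative update. In command form, with the 5M limit from the output above ($f illustrative):

    lfs setquota -u quota_usr -B 5M /mnt/lustre   # initial hard limit
    lfs setquota -u quota_usr -B 5M /mnt/lustre   # admin update of the same entry
    # the per-OST granted limit (5118 above) survives, so the write still stops:
    runas -u 60000 -g 60000 dd if=/dev/zero of=$f bs=1M count=6   # EDQUOT at ~4 MB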
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 19 (35s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 20: Test if setquota specifiers work properly (b15754) ========================================================== 09:38:35 (1713533915) PASS 20 (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 21: Setquota while writing & deleting (b16053) ========================================================== 09:38:45 (1713533925) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Set limit(block:10G; file:1000000) for user: quota_usr Set limit(block:10G; file:1000000) for group: quota_usr lfs setquota: warning: block hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Set limit(block:10G; file:) for project: 1000 lfs setquota: warning: block hardlimit '10' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Set quota for 1 times Set quota for 2 times Set quota for 3 times Set quota for 4 times Set quota for 5 times Set quota for 6 times Set quota for 7 times Set quota for 8 times Set quota for 9 times Set quota for 10 times Set quota for 11 times Set quota for 12 times Set quota for 13 times Set quota for 14 times Set quota for 15 times Set quota for 16 times Set quota for 17 times Set quota for 18 times Set quota for 19 times Set quota for 20 times Set quota for 21 times (dd_pid=14446, time=0)successful (dd_pid=14447, time=4)successful Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 21 (69s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 22: enable/disable quota by 'lctl conf_param/set_param -P' ========================================================== 09:39:57 (1713533997) Set both mdt & ost quota type as ug Waiting 90s for 'ugp' Restart... 
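The quota type switch in test 22 goes through the MGS, and the "Waiting 90s for ..." lines poll the servers until the new setting has propagated to every slave. A sketch with the documented syntax (fsname lustre, as in this run):

    # on the MGS: enforce user and group quota on all MDTs and OSTs
    lctl conf_param lustre.quota.mdt=ug
    lctl conf_param lustre.quota.ost=ug
    # on the servers: what the wait loop is reading
    lctl get_param -n osd-*.*.quota_slave.enabled    # expect "ug"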
Stopping clients: oleg451-client.virtnet /mnt/lustre (opts:) Stopping client oleg451-client.virtnet /mnt/lustre opts: Stopping clients: oleg451-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg451-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg451-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg451-server sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/nbd0 on / type ext4 (ro,relatime,stripe=32,data=ordered) rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=24,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14074) mqueue on /dev/mqueue type mqueue (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) none on /mnt type ramfs (rw,relatime) none on /var/lib/stateless/writable type tmpfs (rw,relatime) /dev/vda on /home/green/git/lustre-release type squashfs (ro,relatime) none on /var/cache/man type tmpfs (rw,relatime) none on /var/log type tmpfs (rw,relatime) none on /var/lib/dbus type tmpfs (rw,relatime) none on /tmp type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) none on /var/tmp type tmpfs (rw,relatime) none on /var/lib/NetworkManager type tmpfs (rw,relatime) none on /var/lib/systemd/random-seed type tmpfs (rw,relatime) none on /var/spool type tmpfs (rw,relatime) none on /var/lib/nfs type tmpfs (rw,relatime) none on /var/lib/gssproxy type tmpfs (rw,relatime) none on /var/lib/logrotate type tmpfs (rw,relatime) none on /etc type tmpfs (rw,relatime) none on /var/lib/rsyslog type tmpfs (rw,relatime) none on /var/lib/dhclient type tmpfs (rw,relatime) 192.168.200.253:/exports/state/oleg451-client.virtnet on /var/lib/stateless/state type nfs4 
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.51,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg451-client.virtnet/boot on /boot type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.51,local_lock=none,addr=192.168.200.253) 192.168.200.253:/exports/state/oleg451-client.virtnet/etc/kdump.conf on /etc/kdump.conf type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.51,local_lock=none,addr=192.168.200.253) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) 192.168.200.253:/exports/testreports/42140/testresults/sanity-quota-zfs-centos7_x86_64-centos7_x86_64 on /tmp/testlogs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.204.51,local_lock=none,addr=192.168.200.253) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=382032k,mode=700) /dev/vda on /usr/sbin/mount.lustre type squashfs (ro,relatime) Checking servers environments Checking clients oleg451-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg451-server' oleg451-server: oleg451-server.virtnet: executing load_modules_local oleg451-server: Loading modules from /home/green/git/lustre-release/lustre oleg451-server: detected 4 online CPUs by sysfs oleg451-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg451-client.virtnet: -o user_xattr,flock oleg451-server@tcp:/lustre /mnt/lustre Starting client oleg451-client.virtnet: -o user_xattr,flock oleg451-server@tcp:/lustre /mnt/lustre Started clients oleg451-client.virtnet: 192.168.204.151@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8801373b9800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8801373b9800.idle_timeout=debug Verify if quota is enabled Set both mdt & ost quota type as none Waiting 90s for 'none' Updated after 2s: want 'none' got 'none' Restart... 
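Disabling uses the same knob with "none"; space accounting stays on, only enforcement stops, which is what the post-restart "Verify if quota is disabled" step checks:

    lctl conf_param lustre.quota.mdt=none
    lctl conf_param lustre.quota.ost=none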
Stopping clients: oleg451-client.virtnet /mnt/lustre (opts:) Stopping client oleg451-client.virtnet /mnt/lustre opts: Stopping clients: oleg451-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg451-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg451-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg451-server Checking servers environments Checking clients oleg451-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory loading modules on: 'oleg451-server' oleg451-server: oleg451-server.virtnet: executing load_modules_local oleg451-server: Loading modules from /home/green/git/lustre-release/lustre oleg451-server: detected 4 online CPUs by sysfs oleg451-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg451-client.virtnet: -o user_xattr,flock oleg451-server@tcp:/lustre /mnt/lustre Starting client oleg451-client.virtnet: -o user_xattr,flock oleg451-server@tcp:/lustre /mnt/lustre Started clients oleg451-client.virtnet: 192.168.204.151@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff88013690a800.idle_timeout=debug osc.lustre-OST0001-osc-ffff88013690a800.idle_timeout=debug Verify if quota is disabled PASS 22 (86s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 23: Quota should be honored with directIO (b16125) ========================================================== 09:41:24
(1713534084) SKIP: sanity-quota test_23 Overwrite in place is not guaranteed to be space neutral on ZFS SKIP 23 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 24: lfs draws an asterisk when limit is reached (b16646) ========================================================== 09:41:27 (1713534087) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Set user quota (limit: 5M) running as uid/gid/euid/egid 0/0/0/0, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d24.sanity-quota/f24.sanity-quota] [count=6] 6+0 records in 6+0 records out 6291456 bytes (6.3 MB) copied, 0.162322 s, 38.8 MB/s /mnt/lustre 6149* 0 5120 - 1 0 0 - 2* - 2 - 1 - 0 - 6148* - 6148 - - - - - Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 24 (31s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 25: check indexes versions ========== 09:42:00 (1713534120) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg451-server: Pool lustre.qpool1 created Adding targets to pool oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.201546 s, 26.0 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=5] [seek=5] dd: error writing '/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0': Disk quota exceeded 5+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.280456 s, 15.0 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d25.sanity-quota/f25.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0743374 s, 0.0 kB/s Destroy the created pools: qpool1 lustre.qpool1 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed Waiting 90s for 'foo' Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 25 (58s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27a: lfs quota/setquota should handle wrong arguments (b19612) ========================================================== 09:43:00 (1713534180) lfs quota: name and mount point must be specified Display disk usage and limits. usage: quota [-q] [-v] [-h] [-o OBD_UUID|-i MDT_IDX|-I OST_IDX] [{-u|-g|-p} UNAME|UID|GNAME|GID|PROJID] [--pool POOL] FILESYSTEM quota -t <-u|-g|-p> [--pool POOL] FILESYSTEM quota [-q] [-v] [-h] {-U|-G|-P} [--pool POOL] FILESYSTEM quota -a {-u|-g|-p} [-s start_qid] [-e end_qid] FILESYSTEM lfs setquota: either -u or -g must be specified setquota failed: Unknown error -4 Set filesystem quotas.
usage: setquota [-t][-D] {-u|-U|-g|-G|-p|-P} {-b|-B|-i|-I LIMIT} [--pool POOL] FILESYSTEM setquota {-u|-g|-p} --delete FILESYSTEM PASS 27a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27b: lfs quota/setquota should handle user/group/project ID (b20200) ========================================================== 09:43:04 (1713534184) lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: block hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode softlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details lfs setquota: warning: inode hardlimit '1000' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Disk quotas for usr 60000 (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for grp 60000 (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 1000 1000 - 0 1000 1000 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 PASS 27b (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27c: lfs quota should support human-readable output ========================================================== 09:43:11 (1713534191) PASS 27c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 27d: lfs setquota should support fraction block limit ========================================================== 09:43:16 (1713534196) PASS 27d (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 30: Hard limit updates should 
not reset grace times ========================================================== 09:43:22 (1713534202) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'u' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [count=8] 8+0 records in 8+0 records out 8388608 bytes (8.4 MB) copied, 0.523816 s, 16.0 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 8197* 4096 0 expired 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 8196 - 9220 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 9220 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [conv=notrunc] [oflag=append] [count=4] dd: error writing '/mnt/lustre/d30.sanity-quota/f30.sanity-quota': Disk quota exceeded 2+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0998472 s, 10.5 MB/s Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 9221* 4096 0 expired 1 0 0 - lustre-MDT0000_UUID 2* - 2 - 1 - 0 - lustre-OST0000_UUID 9220* - 9220 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 9220 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d30.sanity-quota/f30.sanity-quota] [conv=notrunc] [oflag=append] [count=4] dd: error writing '/mnt/lustre/d30.sanity-quota/f30.sanity-quota': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.033404 s, 0.0 kB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 30 (41s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 33: Basic usage tracking for user & group & project ========================================================== 09:44:05 (1713534245) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write files... lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-0 Iteration 0/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-1 Iteration 1/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-2 Iteration 2/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-3 Iteration 3/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-4 Iteration 4/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-5 Iteration 5/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-6 Iteration 6/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-7 Iteration 7/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-8 Iteration 8/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-9 Iteration 9/10 completed lfs project -p 1000 /mnt/lustre/d33.sanity-quota/f33.sanity-quota-10 Iteration 10/10 completed Wait for setattr on objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Verify disk usage after write Verify inode usage after write Delete files... 
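Test 33 needs no limits at all: it only checks that usage accounting tracks user, group and project identity. Each file is tagged with project 1000 via lfs project, and the usage columns are then compared against what was written. Schematically ($f illustrative):

    dd if=/dev/zero of=$f bs=1M count=2     # write as the quota user
    lfs project -p 1000 $f                  # tag the file with project 1000
    lfs quota -u quota_usr /mnt/lustre      # kbytes/files reflect the writes
    lfs quota -p 1000 /mnt/lustre           # same usage, charged to the project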
Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Verify disk usage after delete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 33 (68s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 34: Usage transfer for user & group & project ========================================================== 09:45:15 (1713534315) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write file... chown the file to user 60000 Wait for setattr on objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Verify disk usage for user 60000 chgrp the file to group 60000 Wait for setattr on objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Verify disk usage for group 60000 chown the file to user 60001 Wait for setattr on objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete change_project project id to 1000 lfs project -p 1000 /mnt/lustre/d34.sanity-quota/f34.sanity-quota Wait for setattr on objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Verify disk usage for user 60001/60000 and group 60000 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 34 (108s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 35: Usage is still accessible across reboot ========================================================== 09:47:06 (1713534426) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Write file... lfs project -p 1000 /mnt/lustre/d35.sanity-quota/f35.sanity-quota Wait for setattr on objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Save disk usage before restart User 60000: 2052KB 1 inodes Group 60000: 2052KB 1 inodes Project 1000: 2052KB 1 inodes Restart... 
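Test 35 snapshots the accounting before the restart (2052KB and 1 inode each for user 60000, group 60000 and project 1000), tears the whole filesystem down, and requires identical numbers once everything is remounted; in effect:

    lfs quota -u 60000 /mnt/lustre    # before restart: 2052KB, 1 inode
    # full stop/start of MDS, OSTs and client (the restart below), then
    lfs quota -u 60000 /mnt/lustre    # must report the same usage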
Stopping clients: oleg451-client.virtnet /mnt/lustre (opts:) Stopping client oleg451-client.virtnet /mnt/lustre opts: Stopping clients: oleg451-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg451-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg451-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg451-server Checking servers environments Checking clients oleg451-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg451-server' oleg451-server: oleg451-server.virtnet: executing load_modules_local oleg451-server: Loading modules from /home/green/git/lustre-release/lustre oleg451-server: detected 4 online CPUs by sysfs oleg451-server: Force libcfs to create 2 CPU partitions oleg451-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Setup mgs, mdt, osts Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg451-client.virtnet: -o user_xattr,flock oleg451-server@tcp:/lustre /mnt/lustre Starting client oleg451-client.virtnet: -o user_xattr,flock oleg451-server@tcp:/lustre /mnt/lustre Started clients oleg451-client.virtnet: 192.168.204.151@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a89b5000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a89b5000.idle_timeout=debug affected facets: Verify disk usage after restart Append to the same file... Verify space usage is increased Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 35 (98s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 37: Quota accounted properly for file created by 'lfs setstripe' ========================================================== 09:48:46 (1713534526) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.139087 s, 7.5 MB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
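Test 37's concern is that a file whose OST objects are pre-created by lfs setstripe, rather than by a plain open, is still charged to the owner's quota once data lands on the OST. A rough sketch (names illustrative, not the suite's literal steps):

    lfs setstripe -c 1 $f                   # create the file and its OST object
    dd if=/dev/zero of=$f bs=1M count=1     # the 1 MB write seen above
    lfs quota -u quota_usr /mnt/lustre      # the 1 MB shows up in kbytes used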
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 37 (49s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 38: Quota accounting iterator doesn't skip id entries ========================================================== 09:49:37 (1713534577) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Create 10000 files... Found 10000 id entries Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 38 (554s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 39: Project ID interface works correctly ========================================================== 09:58:53 (1713535133) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -p 1024 /mnt/lustre/d39.sanity-quota/project Stopping clients: oleg451-client.virtnet /mnt/lustre (opts:) Stopping client oleg451-client.virtnet /mnt/lustre opts: Stopping clients: oleg451-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg451-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg451-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg451-server Checking servers environments Checking clients oleg451-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory loading modules on: 'oleg451-server' oleg451-server: oleg451-server.virtnet: executing load_modules_local oleg451-server: Loading modules from /home/green/git/lustre-release/lustre oleg451-server: detected 4 online CPUs by sysfs oleg451-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg451-server: oleg451-server.virtnet: executing
set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg451-client.virtnet: -o user_xattr,flock oleg451-server@tcp:/lustre /mnt/lustre Starting client oleg451-client.virtnet: -o user_xattr,flock oleg451-server@tcp:/lustre /mnt/lustre Started clients oleg451-client.virtnet: 192.168.204.151@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a9c0c800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a9c0c800.idle_timeout=debug Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 39 (60s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40a: Hard link across different project ID ========================================================== 09:59:54 (1713535194) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40a.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d40a.sanity-quota/dir2 ln: failed to create hard link '/mnt/lustre/d40a.sanity-quota/dir2/1_link' => '/mnt/lustre/d40a.sanity-quota/dir1/1': Invalid cross-device link Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 40a (29s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40b: Mv across different project ID ========================================================== 10:00:25 (1713535225) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d40b.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d40b.sanity-quota/dir2 Delete files... Wait for unlink objects finished... 
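The EXDEV failure in test 40a is intended behaviour: lfs project -sp pins a directory's project ID and makes children inherit it, and a hard link cannot span two project IDs because the single inode would have to be accounted to both. mv succeeds in test 40b because, on EXDEV, mv falls back to copy-and-unlink, so the data is simply re-charged to the target project:

    lfs project -sp 1 dir1; lfs project -sp 2 dir2
    ln dir1/f dir2/f_link    # fails: Invalid cross-device link (test 40a)
    mv dir1/f dir2/          # allowed; contents now count against project 2 (test 40b)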
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 40b (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40c: Remote child Dir inherit project quota properly ========================================================== 10:00:55 (1713535255) SKIP: sanity-quota test_40c needs >= 2 MDTs SKIP 40c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 40d: Stripe Directory inherit project quota properly ========================================================== 10:00:57 (1713535257) SKIP: sanity-quota test_40d needs >= 2 MDTs SKIP 40d (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 41: df should return projid-specific values ========================================================== 10:00:59 (1713535259) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Waiting 90s for 'ugp' lfs project -sp 41000 /mnt/lustre/d41.sanity-quota/dir == global statfs: /mnt/lustre == Filesystem 1024-blocks Used Available Capacity Mounted on 192.168.204.151@tcp:/lustre 7542784 8192 7530496 1% /mnt/lustre Filesystem Inodes IUsed IFree IUse% Mounted on 192.168.204.151@tcp:/lustre 235836 380 235456 1% /mnt/lustre Disk quotas for prj 41000 (pid 41000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre/d41.sanity-quota/dir 12 0 102400 - 1 0 4096 - == project statfs (prjid=41000): /mnt/lustre/d41.sanity-quota/dir == Filesystem 1024-blocks Used Available Capacity Mounted on 192.168.204.151@tcp:/lustre 102400 12 102388 1% /mnt/lustre Filesystem Inodes IUsed IFree IUse% Mounted on 192.168.204.151@tcp:/lustre 4096 1 4095 1% /mnt/lustre llite.lustre-ffff8800a9c0c800.statfs_project=0 llite.lustre-ffff8800a9c0c800.statfs_project=1 Delete files... Wait for unlink objects finished... 
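statfs_project is what turns df into a per-project view: once the client-side switch is on, statfs under a directory with an inherited project ID reports that project's limits rather than the whole filesystem, exactly as in the "project statfs" block above (102400 1K-blocks, 4096 inodes). The knobs in play:

    lfs project -sp 41000 /mnt/lustre/d41.sanity-quota/dir
    lfs setquota -p 41000 -B 100M -I 4096 /mnt/lustre
    lctl set_param llite.*.statfs_project=1
    df /mnt/lustre/d41.sanity-quota/dir     # capacity now reads 102400 1K-blocks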
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 41 (34s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 48: lfs quota --delete should delete quota project ID ========================================================== 10:01:36 (1713535296) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0313985 s, 33.4 MB/s - id: 60000 osd-zfs - id: 60000 pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0404251 s, 25.9 MB/s - id: 60000 cat: /proc/fs/lustre/osd-zfs/lustre-OST0000/quota_slave/limit_user: No such file or directory - id: 60000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0407472 s, 25.7 MB/s - id: 60000 osd-zfs - id: 60000 pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0407648 s, 25.7 MB/s - id: 60000 cat: /proc/fs/lustre/osd-zfs/lustre-OST0000/quota_slave/limit_group: No such file or directory - id: 60000 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0285933 s, 36.7 MB/s - id: 10000 osd-zfs - id: 10000 pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d48.sanity-quota/f48.sanity-quota] [count=1] 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0402692 s, 26.0 MB/s - id: 10000 cat: /proc/fs/lustre/osd-zfs/lustre-OST0000/quota_slave/limit_project: No such file or directory - id: 10000 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
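Test 48 removes quota entries outright instead of zeroing their limits; the "No such file or directory" cat errors above are the test probing per-target limit files that differ between backends, not failures. Per the setquota usage text printed in test 27a, the deletion form is roughly:

    lfs setquota -u quota_usr -B 10M /mnt/lustre     # create an entry (value illustrative)
    lfs setquota -u quota_usr --delete /mnt/lustre   # drop the entry itself
    lfs quota -u quota_usr /mnt/lustre               # limits read as unset again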
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 48 (58s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 49: lfs quota -a prints the quota usage for all quota IDs ========================================================== 10:02:36 (1713535356) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 setquota for users and groups fail_loc=0xa09 lfs setquota: 1000 / 29 seconds fail_loc=0 903 0 0 102400 - 0 0 10240 - 904 0 0 102400 - 0 0 10240 - 905 0 0 102400 - 0 0 10240 - 906 0 0 102400 - 0 0 10240 - 907 0 0 102400 - 0 0 10240 - 908 0 0 102400 - 0 0 10240 - 909 0 0 102400 - 0 0 10240 - 910 0 0 102400 - 0 0 10240 - 911 0 0 102400 - 0 0 10240 - 912 0 0 102400 - 0 0 10240 - 913 0 0 102400 - 0 0 10240 - 914 0 0 102400 - 0 0 10240 - 915 0 0 102400 - 0 0 10240 - 916 0 0 102400 - 0 0 10240 - 917 0 0 102400 - 0 0 10240 - 918 0 0 102400 - 0 0 10240 - 919 0 0 102400 - 0 0 10240 - 920 0 0 102400 - 0 0 10240 - 921 0 0 102400 - 0 0 10240 - 922 0 0 102400 - 0 0 10240 - 923 0 0 102400 - 0 0 10240 - 924 0 0 102400 - 0 0 10240 - 925 0 0 102400 - 0 0 10240 - 926 0 0 102400 - 0 0 10240 - 927 0 0 102400 - 0 0 10240 - 928 0 0 102400 - 0 0 10240 - 929 0 0 102400 - 0 0 10240 - 930 0 0 102400 - 0 0 10240 - 931 0 0 102400 - 0 0 10240 - 932 0 0 102400 - 0 0 10240 - 933 0 0 102400 - 0 0 10240 - 934 0 0 102400 - 0 0 10240 - 935 0 0 102400 - 0 0 10240 - 936 0 0 102400 - 0 0 10240 - 937 0 0 102400 - 0 0 10240 - 938 0 0 102400 - 0 0 10240 - 939 0 0 102400 - 0 0 10240 - 940 0 0 102400 - 0 0 10240 - 941 0 0 102400 - 0 0 10240 - 942 0 0 102400 - 0 0 10240 - 943 0 0 102400 - 0 0 10240 - 944 0 0 102400 - 0 0 10240 - 945 0 0 102400 - 0 0 10240 - 946 0 0 102400 - 0 0 10240 - 947 0 0 102400 - 0 0 10240 - 948 0 0 102400 - 0 0 10240 - 949 0 0 102400 - 0 0 10240 - 950 0 0 102400 - 0 0 10240 - 951 0 0 102400 - 0 0 10240 - 952 0 0 102400 - 0 0 10240 - 953 0 0 102400 - 0 0 10240 - 954 0 0 102400 - 0 0 10240 - 955 0 0 102400 - 0 0 10240 - 956 0 0 102400 - 0 0 10240 - 957 0 0 102400 - 0 0 10240 - 958 0 0 102400 - 0 0 10240 - 959 0 0 102400 - 0 0 10240 - 960 0 0 102400 - 0 0 10240 - 961 0 0 102400 - 0 0 10240 - 962 0 0 102400 - 0 0 10240 - 963 0 0 102400 - 0 0 10240 - 964 0 0 102400 - 0 0 10240 - 965 0 0 102400 - 0 0 10240 - 966 0 0 102400 - 0 0 10240 - 967 0 0 102400 - 0 0 10240 - 968 0 0 102400 - 0 0 10240 - 969 0 0 102400 - 0 0 10240 - 970 0 0 102400 - 0 0 10240 - 971 0 0 102400 - 0 0 10240 - 972 0 0 102400 - 0 0 10240 - 973 0 0 102400 - 0 0 10240 - 974 0 0 102400 - 0 0 10240 - 975 0 0 102400 - 0 0 10240 - 976 0 0 102400 - 0 0 10240 - 977 0 0 102400 - 0 0 10240 - 978 0 0 102400 - 0 0 10240 - 979 0 0 102400 - 0 0 10240 - 980 0 0 102400 - 0 0 10240 - 981 0 0 102400 - 0 0 10240 - 982 0 0 102400 - 0 0 10240 - 983 0 0 102400 - 0 0 10240 - 984 0 0 102400 - 0 0 10240 - 985 0 0 102400 - 0 0 10240 - 986 0 0 102400 - 0 0 10240 - 987 0 0 102400 - 0 0 10240 - 988 0 0 102400 - 0 0 10240 - 989 0 0 102400 - 0 0 10240 - 990 0 0 102400 - 0 0 10240 - 991 0 0 102400 - 0 0 10240 - 992 0 0 102400 - 0 0 10240 - 993 0 0 102400 - 0 0 10240 - 994 0 0 102400 - 0 0 10240 - 995 0 0 102400 - 0 0 10240 - 996 0 0 102400 - 0 0 10240 - 997 0 0 102400 - 0 0 10240 - 998 0 0 102400 - 0 0 10240 - polkitd 0 0 102400 - 0 0 10240 - green 0 0 102400 - 0 0 10240 - quota_usr 0 0 0 - 0 [0] [0] - quota_2usr 0 0 0 - 0 0 0 - get all usr quota: 1000 / 0 seconds 903 0 0 204800 - 0 0 20480 - 904 0 0 204800 - 0 0 20480 - 905 0 0 204800 - 0 0 20480 - 906 0 0 204800 - 0 0 
20480 - 907 0 0 204800 - 0 0 20480 - 908 0 0 204800 - 0 0 20480 - 909 0 0 204800 - 0 0 20480 - 910 0 0 204800 - 0 0 20480 - 911 0 0 204800 - 0 0 20480 - 912 0 0 204800 - 0 0 20480 - 913 0 0 204800 - 0 0 20480 - 914 0 0 204800 - 0 0 20480 - 915 0 0 204800 - 0 0 20480 - 916 0 0 204800 - 0 0 20480 - 917 0 0 204800 - 0 0 20480 - 918 0 0 204800 - 0 0 20480 - 919 0 0 204800 - 0 0 20480 - 920 0 0 204800 - 0 0 20480 - 921 0 0 204800 - 0 0 20480 - 922 0 0 204800 - 0 0 20480 - 923 0 0 204800 - 0 0 20480 - 924 0 0 204800 - 0 0 20480 - 925 0 0 204800 - 0 0 20480 - 926 0 0 204800 - 0 0 20480 - 927 0 0 204800 - 0 0 20480 - 928 0 0 204800 - 0 0 20480 - 929 0 0 204800 - 0 0 20480 - 930 0 0 204800 - 0 0 20480 - 931 0 0 204800 - 0 0 20480 - 932 0 0 204800 - 0 0 20480 - 933 0 0 204800 - 0 0 20480 - 934 0 0 204800 - 0 0 20480 - 935 0 0 204800 - 0 0 20480 - 936 0 0 204800 - 0 0 20480 - 937 0 0 204800 - 0 0 20480 - 938 0 0 204800 - 0 0 20480 - 939 0 0 204800 - 0 0 20480 - 940 0 0 204800 - 0 0 20480 - 941 0 0 204800 - 0 0 20480 - 942 0 0 204800 - 0 0 20480 - 943 0 0 204800 - 0 0 20480 - 944 0 0 204800 - 0 0 20480 - 945 0 0 204800 - 0 0 20480 - 946 0 0 204800 - 0 0 20480 - 947 0 0 204800 - 0 0 20480 - 948 0 0 204800 - 0 0 20480 - 949 0 0 204800 - 0 0 20480 - 950 0 0 204800 - 0 0 20480 - 951 0 0 204800 - 0 0 20480 - 952 0 0 204800 - 0 0 20480 - 953 0 0 204800 - 0 0 20480 - 954 0 0 204800 - 0 0 20480 - 955 0 0 204800 - 0 0 20480 - 956 0 0 204800 - 0 0 20480 - 957 0 0 204800 - 0 0 20480 - 958 0 0 204800 - 0 0 20480 - 959 0 0 204800 - 0 0 20480 - 960 0 0 204800 - 0 0 20480 - 961 0 0 204800 - 0 0 20480 - 962 0 0 204800 - 0 0 20480 - 963 0 0 204800 - 0 0 20480 - 964 0 0 204800 - 0 0 20480 - 965 0 0 204800 - 0 0 20480 - 966 0 0 204800 - 0 0 20480 - 967 0 0 204800 - 0 0 20480 - 968 0 0 204800 - 0 0 20480 - 969 0 0 204800 - 0 0 20480 - 970 0 0 204800 - 0 0 20480 - 971 0 0 204800 - 0 0 20480 - 972 0 0 204800 - 0 0 20480 - 973 0 0 204800 - 0 0 20480 - 974 0 0 204800 - 0 0 20480 - 975 0 0 204800 - 0 0 20480 - 976 0 0 204800 - 0 0 20480 - 977 0 0 204800 - 0 0 20480 - 978 0 0 204800 - 0 0 20480 - 979 0 0 204800 - 0 0 20480 - 980 0 0 204800 - 0 0 20480 - 981 0 0 204800 - 0 0 20480 - 982 0 0 204800 - 0 0 20480 - 983 0 0 204800 - 0 0 20480 - 984 0 0 204800 - 0 0 20480 - 985 0 0 204800 - 0 0 20480 - 986 0 0 204800 - 0 0 20480 - 987 0 0 204800 - 0 0 20480 - 988 0 0 204800 - 0 0 20480 - 989 0 0 204800 - 0 0 20480 - 990 0 0 204800 - 0 0 20480 - 991 0 0 204800 - 0 0 20480 - 992 0 0 204800 - 0 0 20480 - 993 0 0 204800 - 0 0 20480 - 994 0 0 204800 - 0 0 20480 - systemd-network 0 0 204800 - 0 0 20480 - systemd-bus-proxy 0 0 204800 - 0 0 20480 - input 0 0 204800 - 0 0 20480 - polkitd 0 0 204800 - 0 0 20480 - ssh_keys 0 0 204800 - 0 0 20480 - green 0 0 204800 - 0 0 20480 - quota_usr 0 0 0 - 0 [0] [0] - quota_2usr 0 0 0 - 0 0 0 - get all grp quota: 1000 / 0 seconds Create 991 files... 
- open/close 790 (time 1713535410.43 total 10.00 last 78.96) total: 991 open/close in 12.34 seconds: 80.34 ops/second 951 6 0 102400 - 1 0 10240 - 952 6 0 102400 - 1 0 10240 - 953 6 0 102400 - 1 0 10240 - 954 6 0 102400 - 1 0 10240 - 955 6 0 102400 - 1 0 10240 - 956 6 0 102400 - 1 0 10240 - 957 6 0 102400 - 1 0 10240 - 958 6 0 102400 - 1 0 10240 - 959 6 0 102400 - 1 0 10240 - 960 6 0 102400 - 1 0 10240 - 961 6 0 102400 - 1 0 10240 - 962 6 0 102400 - 1 0 10240 - 963 6 0 102400 - 1 0 10240 - 964 6 0 102400 - 1 0 10240 - 965 6 0 102400 - 1 0 10240 - 966 6 0 102400 - 1 0 10240 - 967 6 0 102400 - 1 0 10240 - 968 6 0 102400 - 1 0 10240 - 969 6 0 102400 - 1 0 10240 - 970 6 0 102400 - 1 0 10240 - 971 6 0 102400 - 1 0 10240 - 972 6 0 102400 - 1 0 10240 - 973 6 0 102400 - 1 0 10240 - 974 6 0 102400 - 1 0 10240 - 975 6 0 102400 - 1 0 10240 - 976 6 0 102400 - 1 0 10240 - 977 6 0 102400 - 1 0 10240 - 978 6 0 102400 - 1 0 10240 - 979 6 0 102400 - 1 0 10240 - 980 6 0 102400 - 1 0 10240 - 981 6 0 102400 - 1 0 10240 - 982 6 0 102400 - 1 0 10240 - 983 6 0 102400 - 1 0 10240 - 984 6 0 102400 - 1 0 10240 - 985 6 0 102400 - 1 0 10240 - 986 6 0 102400 - 1 0 10240 - 987 6 0 102400 - 1 0 10240 - 988 6 0 102400 - 1 0 10240 - 989 6 0 102400 - 1 0 10240 - 990 6 0 102400 - 1 0 10240 - 991 6 0 102400 - 1 0 10240 - 992 6 0 102400 - 1 0 10240 - 993 6 0 102400 - 1 0 10240 - 994 6 0 102400 - 1 0 10240 - 995 6 0 102400 - 1 0 10240 - 996 6 0 102400 - 1 0 10240 - 997 6 0 102400 - 1 0 10240 - 998 6 0 102400 - 1 0 10240 - polkitd 6 0 102400 - 1 0 10240 - green 6 0 102400 - 1 0 10240 - time=0, rate=991/0 951 6 0 204800 - 1 0 20480 - 952 6 0 204800 - 1 0 20480 - 953 6 0 204800 - 1 0 20480 - 954 6 0 204800 - 1 0 20480 - 955 6 0 204800 - 1 0 20480 - 956 6 0 204800 - 1 0 20480 - 957 6 0 204800 - 1 0 20480 - 958 6 0 204800 - 1 0 20480 - 959 6 0 204800 - 1 0 20480 - 960 6 0 204800 - 1 0 20480 - 961 6 0 204800 - 1 0 20480 - 962 6 0 204800 - 1 0 20480 - 963 6 0 204800 - 1 0 20480 - 964 6 0 204800 - 1 0 20480 - 965 6 0 204800 - 1 0 20480 - 966 6 0 204800 - 1 0 20480 - 967 6 0 204800 - 1 0 20480 - 968 6 0 204800 - 1 0 20480 - 969 6 0 204800 - 1 0 20480 - 970 6 0 204800 - 1 0 20480 - 971 6 0 204800 - 1 0 20480 - 972 6 0 204800 - 1 0 20480 - 973 6 0 204800 - 1 0 20480 - 974 6 0 204800 - 1 0 20480 - 975 6 0 204800 - 1 0 20480 - 976 6 0 204800 - 1 0 20480 - 977 6 0 204800 - 1 0 20480 - 978 6 0 204800 - 1 0 20480 - 979 6 0 204800 - 1 0 20480 - 980 6 0 204800 - 1 0 20480 - 981 6 0 204800 - 1 0 20480 - 982 6 0 204800 - 1 0 20480 - 983 6 0 204800 - 1 0 20480 - 984 6 0 204800 - 1 0 20480 - 985 6 0 204800 - 1 0 20480 - 986 6 0 204800 - 1 0 20480 - 987 6 0 204800 - 1 0 20480 - 988 6 0 204800 - 1 0 20480 - 989 6 0 204800 - 1 0 20480 - 990 6 0 204800 - 1 0 20480 - 991 6 0 204800 - 1 0 20480 - 992 6 0 204800 - 1 0 20480 - 993 6 0 204800 - 1 0 20480 - 994 6 0 204800 - 1 0 20480 - systemd-network 6 0 204800 - 1 0 20480 - systemd-bus-proxy 6 0 204800 - 1 0 20480 - input 6 0 204800 - 1 0 20480 - polkitd 6 0 204800 - 1 0 20480 - ssh_keys 6 0 204800 - 1 0 20480 - green 6 0 204800 - 1 0 20480 - time=0, rate=991/0 - unlinked 0 (time 1713535422 ; total 0 ; last 0) total: 991 unlinks in 3 seconds: 330.333344 unlinks/second Create 991 files... 
- open/close 780 (time 1713535443.17 total 10.01 last 77.91) total: 991 open/close in 12.81 seconds: 77.37 ops/second 951 6 0 102400 - 1 0 10240 - 952 6 0 102400 - 1 0 10240 - 953 6 0 102400 - 1 0 10240 - 954 6 0 102400 - 1 0 10240 - 955 6 0 102400 - 1 0 10240 - 956 6 0 102400 - 1 0 10240 - 957 6 0 102400 - 1 0 10240 - 958 6 0 102400 - 1 0 10240 - 959 6 0 102400 - 1 0 10240 - 960 6 0 102400 - 1 0 10240 - 961 6 0 102400 - 1 0 10240 - 962 6 0 102400 - 1 0 10240 - 963 6 0 102400 - 1 0 10240 - 964 6 0 102400 - 1 0 10240 - 965 6 0 102400 - 1 0 10240 - 966 6 0 102400 - 1 0 10240 - 967 6 0 102400 - 1 0 10240 - 968 6 0 102400 - 1 0 10240 - 969 6 0 102400 - 1 0 10240 - 970 6 0 102400 - 1 0 10240 - 971 6 0 102400 - 1 0 10240 - 972 6 0 102400 - 1 0 10240 - 973 6 0 102400 - 1 0 10240 - 974 6 0 102400 - 1 0 10240 - 975 6 0 102400 - 1 0 10240 - 976 6 0 102400 - 1 0 10240 - 977 6 0 102400 - 1 0 10240 - 978 6 0 102400 - 1 0 10240 - 979 6 0 102400 - 1 0 10240 - 980 6 0 102400 - 1 0 10240 - 981 6 0 102400 - 1 0 10240 - 982 6 0 102400 - 1 0 10240 - 983 6 0 102400 - 1 0 10240 - 984 6 0 102400 - 1 0 10240 - 985 6 0 102400 - 1 0 10240 - 986 6 0 102400 - 1 0 10240 - 987 6 0 102400 - 1 0 10240 - 988 6 0 102400 - 1 0 10240 - 989 6 0 102400 - 1 0 10240 - 990 6 0 102400 - 1 0 10240 - 991 6 0 102400 - 1 0 10240 - 992 6 0 102400 - 1 0 10240 - 993 6 0 102400 - 1 0 10240 - 994 6 0 102400 - 1 0 10240 - 995 6 0 102400 - 1 0 10240 - 996 6 0 102400 - 1 0 10240 - 997 6 0 102400 - 1 0 10240 - 998 6 0 102400 - 1 0 10240 - polkitd 6 0 102400 - 1 0 10240 - green 6 0 102400 - 1 0 10240 - time=0, rate=991/0 951 6 0 204800 - 1 0 20480 - 952 6 0 204800 - 1 0 20480 - 953 6 0 204800 - 1 0 20480 - 954 6 0 204800 - 1 0 20480 - 955 6 0 204800 - 1 0 20480 - 956 6 0 204800 - 1 0 20480 - 957 6 0 204800 - 1 0 20480 - 958 6 0 204800 - 1 0 20480 - 959 6 0 204800 - 1 0 20480 - 960 6 0 204800 - 1 0 20480 - 961 6 0 204800 - 1 0 20480 - 962 6 0 204800 - 1 0 20480 - 963 6 0 204800 - 1 0 20480 - 964 6 0 204800 - 1 0 20480 - 965 6 0 204800 - 1 0 20480 - 966 6 0 204800 - 1 0 20480 - 967 6 0 204800 - 1 0 20480 - 968 6 0 204800 - 1 0 20480 - 969 6 0 204800 - 1 0 20480 - 970 6 0 204800 - 1 0 20480 - 971 6 0 204800 - 1 0 20480 - 972 6 0 204800 - 1 0 20480 - 973 6 0 204800 - 1 0 20480 - 974 6 0 204800 - 1 0 20480 - 975 6 0 204800 - 1 0 20480 - 976 6 0 204800 - 1 0 20480 - 977 6 0 204800 - 1 0 20480 - 978 6 0 204800 - 1 0 20480 - 979 6 0 204800 - 1 0 20480 - 980 6 0 204800 - 1 0 20480 - 981 6 0 204800 - 1 0 20480 - 982 6 0 204800 - 1 0 20480 - 983 6 0 204800 - 1 0 20480 - 984 6 0 204800 - 1 0 20480 - 985 6 0 204800 - 1 0 20480 - 986 6 0 204800 - 1 0 20480 - 987 6 0 204800 - 1 0 20480 - 988 6 0 204800 - 1 0 20480 - 989 6 0 204800 - 1 0 20480 - 990 6 0 204800 - 1 0 20480 - 991 6 0 204800 - 1 0 20480 - 992 6 0 204800 - 1 0 20480 - 993 6 0 204800 - 1 0 20480 - 994 6 0 204800 - 1 0 20480 - systemd-network 6 0 204800 - 1 0 20480 - systemd-bus-proxy 6 0 204800 - 1 0 20480 - input 6 0 204800 - 1 0 20480 - polkitd 6 0 204800 - 1 0 20480 - ssh_keys 6 0 204800 - 1 0 20480 - green 6 0 204800 - 1 0 20480 - time=0, rate=991/0 - unlinked 0 (time 1713535455 ; total 0 ; last 0) total: 991 unlinks in 3 seconds: 330.333344 unlinks/second fail_loc=0xa08 fail_loc=0 Stopping clients: oleg451-client.virtnet /mnt/lustre (opts:-f) Stopping client oleg451-client.virtnet /mnt/lustre opts:-f Stopping clients: oleg451-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-mds1 (opts:-f) on oleg451-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg451-server Stopping 
/mnt/lustre-ost2 (opts:-f) on oleg451-server oleg451-server: oleg451-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg451-server' oleg451-server: oleg451-server.virtnet: executing load_modules_local oleg451-server: Loading modules from /home/green/git/lustre-release/lustre oleg451-server: detected 4 online CPUs by sysfs oleg451-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: lustre-mdt1/mdt1 Format ost1: lustre-ost1/ost1 Format ost2: lustre-ost2/ost2 Checking servers environments Checking clients oleg451-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg451-server' oleg451-server: oleg451-server.virtnet: executing load_modules_local oleg451-server: Loading modules from /home/green/git/lustre-release/lustre oleg451-server: detected 4 online CPUs by sysfs oleg451-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Commit the device label on lustre-mdt1/mdt1 Started lustre-MDT0000 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Commit the device label on lustre-ost1/ost1 Started lustre-OST0000 Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Commit the device label on lustre-ost2/ost2 Started lustre-OST0001 Starting client: oleg451-client.virtnet: -o user_xattr,flock oleg451-server@tcp:/lustre /mnt/lustre Starting client oleg451-client.virtnet: -o user_xattr,flock oleg451-server@tcp:/lustre /mnt/lustre Started clients oleg451-client.virtnet: 192.168.204.151@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff88009ac8f000.idle_timeout=debug osc.lustre-OST0001-osc-ffff88009ac8f000.idle_timeout=debug Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 49 (245s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 50: Test if lfs find --projid works ========================================================== 10:06:43 (1713535603) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d50.sanity-quota/dir1 lfs project -sp 2 /mnt/lustre/d50.sanity-quota/dir2 Delete files... Wait for unlink objects finished... 
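Test 50 above reduces to tagging directories with distinct project IDs and filtering on them. A minimal sketch of the same sequence, with hypothetical directory names (exact lfs output formatting varies by Lustre release):
  lfs project -sp 1 /mnt/lustre/dir1     # set project ID 1 plus the inherit flag
  lfs project -sp 2 /mnt/lustre/dir2
  lfs find /mnt/lustre --projid 1        # should match dir1 and files created under it, not dir2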
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 50 (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 51: Test project accounting with mv/cp ========================================================== 10:07:13 (1713535633) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1 /mnt/lustre/d51.sanity-quota/dir 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0146832 s, 71.4 MB/s Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 51 (34s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 52: Rename normal file across project ID ========================================================== 10:07:49 (1713535669) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 0.802817 s, 131 MB/s Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102417 0 0 - 2 0 0 - Disk quotas for prj 1001 (pid 1001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 12 0 0 - 1 0 0 - pid 1001 is using default block quota setting pid 1001 is using default file quota setting rename '/mnt/lustre/d52.sanity-quota/t52_dir1' returned -1: Invalid cross-device link rename directory return 255 Disk quotas for prj 1000 (pid 1000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 12 0 0 - 1 0 0 - Disk quotas for prj 1001 (pid 1001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 102417 0 0 - 2 0 0 - pid 1001 is using default block quota setting pid 1001 is using default file quota setting Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 52 (32s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 53: Project inherit attribute could be cleared ========================================================== 10:08:23 (1713535703) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -s /mnt/lustre/d53.sanity-quota/dir lfs project -C /mnt/lustre/d53.sanity-quota/dir Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 53 (17s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 54: basic lfs project interface test ========================================================== 10:08:42 (1713535722) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 lfs project -sp 1000 /mnt/lustre/d54.sanity-quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d54.sanity-quota/f54.sanity-quota-0] [100] total: 100 create in 0.25 seconds: 394.22 ops/second lfs project -rCk /mnt/lustre/d54.sanity-quota lfs project -rC /mnt/lustre/d54.sanity-quota - unlinked 0 (time 1713535732 ; total 0 ; last 0) total: 100 unlinks in 0 seconds: inf unlinks/second Delete files... Wait for unlink objects finished... 
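The -rCk/-rC pair that test 54 runs above differs only in whether existing project IDs survive the clear. A sketch of the interface, using only flags visible in this log:
  lfs project -sp 1000 /mnt/lustre/dir   # set ID 1000 and the inherit flag
  lfs project -rCk /mnt/lustre/dir       # -r recurse, -C clear the inherit flag, -k keep the IDs
  lfs project -rC /mnt/lustre/dir        # without -k, IDs are reset to 0 as well
  lfs project -r /mnt/lustre/dir         # list IDs and flags recursively to verify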
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 54 (20s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 55: Chgrp should be affected by group quota ========================================================== 10:09:04 (1713535744) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d55.sanity-quota/f55.sanity-quota] [bs=1024] [count=100000] 100000+0 records in 100000+0 records out 102400000 bytes (102 MB) copied, 11.9359 s, 8.6 MB/s Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 51200 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60001/60000/60001, groups: [chgrp] [quota_2usr] [/mnt/lustre/d55.sanity-quota/f55.sanity-quota] chgrp: changing group of '/mnt/lustre/d55.sanity-quota/f55.sanity-quota': Disk quota exceeded 0 Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 307200 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60001/60000/60001, groups: [chgrp] [quota_2usr] [/mnt/lustre/d55.sanity-quota/f55.sanity-quota] Disk quotas for grp quota_2usr (gid 60001): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 307200 - 0 0 0 - lustre-MDT0000_UUID 0 - 131072 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 55 (48s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 56: lfs quota -t should work well === 10:09:54 (1713535794) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 56 (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 57: lfs project could tolerate errors ========================================================== 10:10:15 (1713535815) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... 
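Test 55's failure mode deserves a note: chgrp transfers the file's block usage to the new group, so it is refused whenever that transfer would push the group over its hard limit. A hedged recreation with the same numbers as the log (50 MB limit vs. a 100 MB file, then a 300 MB limit):
  lfs setquota -g quota_2usr -B 50M /mnt/lustre
  dd if=/dev/zero of=/mnt/lustre/bigfile bs=1024 count=100000
  chgrp quota_2usr /mnt/lustre/bigfile   # fails: Disk quota exceeded
  lfs setquota -g quota_2usr -B 300M /mnt/lustre
  chgrp quota_2usr /mnt/lustre/bigfile   # succeeds now
  lfs quota -t -g /mnt/lustre            # grace times, the output test 56 checks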
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 57 (27s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 58: project ID should be kept for new mirrors created by FID ========================================================== 10:10:44 (1713535844) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] test by mirror created with normal file running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=50] [conv=nocreat] [oflag=direct] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.72936 s, 19.2 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=30] [conv=nocreat] [seek=50] [oflag=direct] 30+0 records in 30+0 records out 31457280 bytes (31 MB) copied, 1.63166 s, 19.3 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror mirror: component 131073 not synced: Disk quota exceeded (122) lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror: '/mnt/lustre/d58.sanity-quota/f58.sanity-quota' llapi_mirror_resync_many: Disk quota exceeded. lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete test by mirror created with FID running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=50] [conv=nocreat] [oflag=direct] 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.60992 s, 20.1 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d58.sanity-quota/f58.sanity-quota] [count=30] [conv=nocreat] [seek=50] [oflag=direct] 30+0 records in 30+0 records out 31457280 bytes (31 MB) copied, 1.68612 s, 18.7 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [mirror] [resync] [/mnt/lustre/d58.sanity-quota/f58.sanity-quota] lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror mirror: component 131073 not synced: Disk quota exceeded (122) lfs mirror mirror: component 196609 not synced: Disk quota exceeded (122) lfs mirror: '/mnt/lustre/d58.sanity-quota/f58.sanity-quota' llapi_mirror_resync_many: Disk quota exceeded. lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) Delete files... Wait for unlink objects finished... 
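The EDQUOT spew from test 58 comes from lfs mirror resync copying stale mirror components on behalf of the file's owner, so the resync I/O is charged against the same quota. A sketch with a hypothetical file name (the conv=nocreat/oflag=direct writes mirror the log):
  lfs mirror extend -N /mnt/lustre/f    # add a second mirror copy
  dd if=/dev/zero of=/mnt/lustre/f bs=1M count=30 conv=nocreat oflag=direct   # stale one copy
  lfs mirror resync /mnt/lustre/f       # fails with 'Disk quota exceeded (122)' once the owner is over limit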
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 58 (78s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 59: lfs project doesn't crash kernel with project disabled ========================================================== 10:12:04 (1713535924) SKIP: sanity-quota test_59 ldiskfs only test SKIP 59 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 60: Test quota for root with setgid ========================================================== 10:12:07 (1713535927) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ug' lfs setquota: warning: inode hardlimit '100' smaller than minimum qunit size See 'lfs help setquota' or Lustre manual for details Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 0 - 0 0 100 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d60.sanity-quota/f60.sanity-quota] [99] total: 99 create in 0.39 seconds: 255.73 ops/second running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [touch] [/mnt/lustre/d60.sanity-quota/foo] touch: cannot touch '/mnt/lustre/d60.sanity-quota/foo': Disk quota exceeded running as uid/gid/euid/egid 0/0/0/0, groups: [touch] [/mnt/lustre/d60.sanity-quota/foo] Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 60 (29s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_61 skipping SLOW test 61 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 62: Project inherit should only be changed by root ========================================================== 10:12:39 (1713535959) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [mkdir] [-p] [/mnt/lustre/d62.sanity-quota/] lfs project -s /mnt/lustre/d62.sanity-quota/ running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [chattr] [-P] [/mnt/lustre/d62.sanity-quota/] chattr: Operation not permitted while setting flags on /mnt/lustre/d62.sanity-quota/ Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 62 (17s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_63 skipping excluded test 63 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 64: lfs project on non-dir/files should succeed ========================================================== 10:12:58 (1713535978) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... 
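Test 60 above relies on inode (file-count) limits rather than block limits; the warning in the log fires because a hard limit of 100 inodes is below the minimum quota unit the master hands out. A sketch of the setup, with root's exemption noted:
  lfs setquota -g quota_usr -I 100 /mnt/lustre   # inode hard limit; triggers the qunit warning above
  # as uid 60000: the 100th create fails with 'Disk quota exceeded'
  # as root (via the setgid directory): creates still succeed, since root bypasses enforcement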
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 64 (29s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity-quota test_65 skipping excluded test 65 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 66: non-root user cannot change project state by default ========================================================== 10:13:30 (1713536010) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 mdt.lustre-MDT0000.enable_chprojid_gid=0 lfs project -sp 1000 /mnt/lustre/d66.sanity-quota/foo running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [mkdir] [/mnt/lustre/d66.sanity-quota/foo/foo] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [0] [/mnt/lustre/d66.sanity-quota/foo/foo] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/foo': Operation not permitted running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-C] [/mnt/lustre/d66.sanity-quota/foo/foo] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/foo': Operation not permitted lfs project -C /mnt/lustre/d66.sanity-quota/foo/foo mdt.lustre-MDT0000.enable_chprojid_gid=-1 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [1000] [/mnt/lustre/d66.sanity-quota/foo/foo] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-rC] [/mnt/lustre/d66.sanity-quota/foo/] running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [lfs] [project] [-p] [1000] [/mnt/lustre/d66.sanity-quota/foo/bar] lfs: failed to set xattr for '/mnt/lustre/d66.sanity-quota/foo/bar': Operation not permitted lfs project -p 1000 /mnt/lustre/d66.sanity-quota/foo/bar mdt.lustre-MDT0000.enable_chprojid_gid=0 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 66 (26s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 67: quota pools recalculation ======= 10:13:58 (1713536038) SKIP: sanity-quota test_67 ZFS grants some block space together with inode SKIP 67 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 68: slave number in quota pool changes after each OST add/remove ========================================================== 10:14:02 (1713536042) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 nr result 3 Creating new pool oleg451-server: Pool lustre.qpool1 created Waiting 90s for '' Adding targets to pool oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Adding targets to pool oleg451-server: pool_add: lustre-OST0001_UUID is already in pool lustre.qpool1 pdsh@oleg451-client: oleg451-server: ssh exited with exit code 17 Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' Removing lustre-OST0000_UUID from qpool1 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Waiting 90s for '' Removing lustre-OST0001_UUID from qpool1 oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Destroy the created pools: qpool1 lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... 
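The tunable exercised by test 66 gates who may change project IDs. Only the values 0 and -1 appear in this log; the gid form is an assumption based on the parameter name:
  lctl set_param mdt.lustre-MDT0000.enable_chprojid_gid=-1    # any user may change project IDs (log behavior)
  lctl set_param mdt.lustre-MDT0000.enable_chprojid_gid=0     # back to root-only (the default seen above)
  lctl set_param mdt.lustre-MDT0000.enable_chprojid_gid=1000  # presumably: only gid 1000 may change IDs (hypothetical)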
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 68 (39s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 69: EDQUOT at one of pools shouldn't affect DOM ========================================================== 10:14:43 (1713536083) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Waiting 90s for 'ugp' Creating new pool oleg451-server: Pool lustre.qpool1 created Waiting 90s for '' Adding targets to pool oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 User quota (block hardlimit:200 MB) User quota (block hardlimit:10 MB) running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [oflag=sync] 512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 8.80089 s, 59.6 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [seek=512] [oflag=sync] 512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 8.78882 s, 59.7 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0] [count=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.486189 s, 21.6 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0] [count=10] [seek=10] dd: error writing '/mnt/lustre/d69.sanity-quota/f69.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0610023 s, 0.0 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [oflag=sync] 512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 8.8527 s, 59.2 kB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d69.sanity-quota/dom0/f1] [bs=1K] [count=512] [seek=512] [oflag=sync] 512+0 records in 512+0 records out 524288 bytes (524 kB) copied, 8.79758 s, 59.6 kB/s Destroy the created pools: qpool1 lustre.qpool1 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 69 (82s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 70a: check lfs setquota/quota with a pool option ========================================================== 10:16:08 (1713536168) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg451-server: Pool lustre.qpool1 created Adding targets to pool oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 Waiting 90s for 'lustre-OST0000_UUID ' hard limit 20480 limit 20 Disk quotas for usr quota_usr (uid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 20480 - 0 0 0 - Destroy the created pools: qpool1 lustre.qpool1 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... 
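Test 70a's '20480' hard limit is a 20M pool limit being set and read back. A sketch of the pool-quota plumbing, matching the pool name and limit in the log:
  lctl pool_new lustre.qpool1
  lctl pool_add lustre.qpool1 lustre-OST0000
  lfs setquota -u quota_usr -B 20M --pool qpool1 /mnt/lustre   # the 20480 kbytes limit shown above
  lfs quota -u quota_usr --pool qpool1 /mnt/lustre             # report usage against the pool limit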
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 70a (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 70b: lfs setquota pool works properly ========================================================== 10:16:38 (1713536198) Creating new pool oleg451-server: Pool lustre.qpool1 created Adding targets to pool oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Destroy the created pools: qpool1 lustre.qpool1 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed PASS 70b (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 71a: Check PFL with quota pools ===== 10:16:56 (1713536216) SKIP: sanity-quota test_71a ZFS grants some block space together with inode SKIP 71a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 71b: Check SEL with quota pools ===== 10:16:59 (1713536219) SKIP: sanity-quota test_71b ZFS grants some block space together with inode SKIP 71b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 72: lfs quota --pool prints only pool's OSTs ========================================================== 10:17:02 (1713536222) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:50 MB) Creating new pool oleg451-server: Pool lustre.qpool1 created Adding targets to pool oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 used 0 Write... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.265502 s, 19.7 MB/s Write out of block quota ... running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=5] [seek=5] 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.2308 s, 22.7 MB/s running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0] [count=1] [seek=10] dd: error writing '/mnt/lustre/d72.sanity-quota/f72.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0737006 s, 0.0 kB/s used 10240 Destroy the created pools: qpool1 lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed Waiting 90s for 'foo' Delete files... Wait for unlink objects finished... 
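Test 72's point is that the --pool report covers member OSTs only (OST0001 here), not the whole filesystem. A sketch of the inspection and teardown steps, assuming -v combines with --pool the way it does for plain quota reports:
  lfs quota -v -u quota_usr --pool qpool1 /mnt/lustre   # per-OST lines for pool members only
  lctl pool_remove lustre.qpool1 lustre-OST0001         # the 'removed from pool' lines above
  lctl pool_destroy lustre.qpool1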
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 72 (50s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 73a: default limits at OST Pool Quotas ========================================================== 10:17:54 (1713536274) Creating new pool oleg451-server: Pool lustre.qpool1 created Adding targets to pool oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' LIMIT=20480 TESTFILE=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0 qdtype=-U qh=-B qid=quota_usr qprjid=1000 qres_type=data qs=-b qtype=-u sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 set to use default quota lfs setquota: '-d' deprecated, use '-D' or '--default' set default quota get default quota Disk default usr quota: Filesystem bquota blimit bgrace iquota ilimit igrace /mnt/lustre 0 0 10 0 0 10 Test not out of quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=10] [oflag=sync] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 1.01525 s, 10.3 MB/s Test out of quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync] dd: error writing '/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0': Disk quota exceeded 19+0 records in 18+0 records out 18878464 bytes (19 MB) copied, 5.00916 s, 3.8 MB/s Increase default quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync] 40+0 records in 40+0 records out 41943040 bytes (42 MB) copied, 3.73763 s, 11.2 MB/s Set quota to override default quota running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync] dd: error writing '/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0': Disk quota exceeded 19+0 records in 18+0 records out 18878464 bytes (19 MB) copied, 1.96152 s, 9.6 MB/s Set to use default quota again lfs setquota: '-d' deprecated, use '-D' or '--default' running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73a.sanity-quota/f73a.sanity-quota-0] [count=40] [oflag=sync] 40+0 records in 40+0 records out 41943040 bytes (42 MB) copied, 3.83229 s, 10.9 MB/s Cleanup sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete Delete files... Wait for unlink objects finished... 
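Test 73a drives default quotas: an ID with no explicit limit inherits the default, and '-D' pins an ID back onto it. A sketch using the qdtype=-U and '-D' knobs visible in the log (pool-scoped defaults assumed to take --pool like ordinary limits):
  lfs setquota -U -B 10M --pool qpool1 /mnt/lustre   # default user block limit on the pool
  lfs setquota -u quota_usr -D /mnt/lustre           # make quota_usr follow the default again
  lfs quota -U /mnt/lustre                           # inspect the default limits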
sleep 5 for ZFS zfs Waiting for MDT destroys to complete Destroy the created pools: qpool1 lustre.qpool1 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed PASS 73a (97s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 73b: default OST Pool Quotas limit for new user ========================================================== 10:19:33 (1713536373) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg451-server: Pool lustre.qpool1 created Waiting 90s for '' Adding targets to pool oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 set default quota for qpool1 Write from a user that doesn't have an lqe yet running as uid/gid/euid/egid 500/500/500/500, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d73b.sanity-quota/f73b.sanity-quota-1] [count=10] 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.363659 s, 28.8 MB/s Destroy the created pools: qpool1 lustre.qpool1 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 73b (43s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 74: check quota pools per user ====== 10:20:18 (1713536418) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg451-server: Pool lustre.qpool1 created Adding targets to pool oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Creating new pool oleg451-server: Pool lustre.qpool2 created Adding targets to pool oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool2 Waiting 90s for 'lustre-OST0001_UUID ' pool limit for qpool1 10240 pool limit for qpool2 51200 Destroy the created pools: qpool1,qpool2 lustre.qpool1 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed lustre.qpool2 oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2 oleg451-server: Pool lustre.qpool2 destroyed Waiting 90s for 'foo' Delete files... Wait for unlink objects finished... 
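Test 74 checks that one ID can carry a different block limit in each pool; on any given OST the tightest applicable limit is the one enforced. A sketch matching the 10240/51200 kbyte limits printed above:
  lfs setquota -u quota_usr -B 10M --pool qpool1 /mnt/lustre   # qpool1 holds both OSTs
  lfs setquota -u quota_usr -B 50M --pool qpool2 /mnt/lustre   # qpool2 holds OST0001 only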
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 74 (45s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 75: nodemap squashed root respects quota enforcement ========================================================== 10:21:05 (1713536465) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 On MGS 192.168.204.151, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.204.151, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync On MGS 192.168.204.151, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync On MGS 192.168.204.151, default.squash_uid = nodemap.default.squash_uid=60000 waiting 10 secs for sync 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.406847 s, 25.8 MB/s Write to exceed soft limit 10+0 records in 10+0 records out 10240 bytes (10 kB) copied, 0.228091 s, 44.9 kB/s mmap write when over soft limit sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Write... 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.491375 s, 21.3 MB/s Write out of block quota ... 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.477425 s, 22.0 MB/s dd: error writing '/mnt/lustre/d75.sanity-quota/f75.sanity-quota-0': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0707989 s, 0.0 kB/s sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0974323 s, 10.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0882087 s, 11.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0881636 s, 11.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0886034 s, 11.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0887036 s, 11.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0888974 s, 11.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0927365 s, 11.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0891882 s, 11.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0900194 s, 11.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0880377 s, 11.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0858374 s, 12.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0841522 s, 12.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0836736 s, 12.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0837866 s, 12.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0909348 s, 11.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0943576 s, 11.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0903684 s, 11.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0886948 s, 11.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0885657 s, 11.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0866823 s, 12.1 MB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-20': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0649651 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-21': Disk quota exceeded 1+0 records in 0+0 records out 
0 bytes (0 B) copied, 0.0626903 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-22': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0614339 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-23': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0634052 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-24': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0635755 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-25': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0627944 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-26': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0622357 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-27': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0635722 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-28': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0638795 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-29': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.065841 s, 0.0 kB/s 9+0 records in 9+0 records out 9437184 bytes (9.4 MB) copied, 1.30945 s, 7.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0951549 s, 11.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0841927 s, 12.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0838496 s, 12.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0834251 s, 12.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0874636 s, 12.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0871998 s, 12.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0871292 s, 12.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0896719 s, 11.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0906668 s, 11.6 MB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-9': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.064536 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-10': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0637271 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-11': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0636413 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-12': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0635828 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-13': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0615944 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-14': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0608639 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-15': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0632591 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-16': Disk quota exceeded 1+0 records in 0+0 records out 0 
bytes (0 B) copied, 0.0640259 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-17': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.064007 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-18': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0629854 s, 0.0 kB/s dd: error writing '/mnt/lustre/d75.sanity-quota_dom/f75.sanity-quota-19': Disk quota exceeded 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0675265 s, 0.0 kB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0877637 s, 11.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0864661 s, 12.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0945561 s, 11.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0936826 s, 11.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0893521 s, 11.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0933138 s, 11.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0905133 s, 11.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0903386 s, 11.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0895928 s, 11.7 MB/s dd: error writing '/mnt/lustre/d75.sanity-quota/file': Disk quota exceeded 10+0 records in 9+0 records out 9437184 bytes (9.4 MB) copied, 0.5333 s, 17.7 MB/s On MGS 192.168.204.151, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.204.151, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.204.151, active = nodemap.active=0 waiting 10 secs for sync Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 75 (168s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 76: project ID 4294967295 should not be allowed ========================================================== 10:23:55 (1713536635) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Invalid project ID: 4294967295 Change or list project attribute for specified file or directory. usage: project [-d|-r] list project ID and flags on file(s) or directories project [-p id] [-s] [-r] set project ID and/or inherit flag for specified file(s) or directories project -c [-d|-r [-p id] [-0]] check project ID and flags on file(s) or directories, print outliers project -C [-d|-r] [-k] clear the project inherit flag and ID on the file or directory Delete files... Wait for unlink objects finished... 
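Test 76's rejection is structural: project IDs are 32-bit, and the all-ones value 4294967295 (2^32 - 1) is reserved as the invalid marker, so lfs refuses it up front:
  lfs project -p 4294967295 -s /mnt/lustre/dir   # 'Invalid project ID: 4294967295', then the usage text above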
sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 76 (27s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 77: lfs setquota should fail on a Lustre mount with 'ro' ========================================================== 10:24:24 (1713536664) Starting client: oleg451-client.virtnet: -o ro oleg451-server@tcp:/lustre /mnt/lustre2 lfs setquota: quotactl failed: Read-only file system setquota failed: Read-only file system PASS 77 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 78A: Check that fallocate increases quota usage ========================================================== 10:24:29 (1713536669) fallocate on zfs doesn't consume space fallocate not supported SKIP: sanity-quota test_78A need >= 2.13.57 and ldiskfs for fallocate SKIP 78A (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 78a: Check that fallocate increases projectid usage ========================================================== 10:24:32 (1713536672) fallocate on zfs doesn't consume space fallocate not supported SKIP: sanity-quota test_78a need >= 2.13.57 and ldiskfs for fallocate SKIP 78a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 79: access to non-existent dt-pool/info doesn't cause a panic ========================================================== 10:24:36 (1713536676) /tmp/f79.sanity-quota Creating new pool oleg451-server: Pool lustre.qpool1 created Destroy the created pools: qpool1 lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed PASS 79 (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 80: check for EDQUOT after OST failover ========================================================== 10:24:48 (1713536688) SKIP: sanity-quota test_80 ZFS grants some block space together with inode SKIP 80 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 81: Race qmt_start_pool_recalc with qmt_pool_free ========================================================== 10:24:51 (1713536691) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 User quota (block hardlimit:20 MB) Creating new pool oleg451-server: Pool lustre.qpool1 created Waiting 90s for '' fail_loc=0x80000A07 fail_val=10 Adding targets to pool oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 Stopping /mnt/lustre-mds1 (opts:-f) on oleg451-server Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1 Started lustre-MDT0000 pdsh@oleg451-client: oleg451-client: ssh exited with exit code 5 Destroy the created pools: qpool1 lustre.qpool1 oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1 oleg451-server: Pool lustre.qpool1 destroyed Waiting 90s for 'foo' Delete files... Wait for unlink objects finished... 
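Test 77 is a two-liner in essence: mount read-only, then watch quotactl bounce with EROFS. A sketch with the same mount source as the log:
  mount -t lustre -o ro oleg451-server@tcp:/lustre /mnt/lustre2
  lfs setquota -u quota_usr -B 10M /mnt/lustre2   # lfs setquota: quotactl failed: Read-only file system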
sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 81 (55s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 82: verify more than 8 qids for single operation ========================================================== 10:25:48 (1713536748) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 82 (17s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 83: Setting default quota shouldn't affect grace time ========================================================== 10:26:08 (1713536768) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 ttt1 ttt2 ttt3 ttt4 ttt5 ttt1 ttt2 ttt3 ttt4 ttt5 ttt1 ttt2 ttt3 ttt4 ttt5 Delete files... Wait for unlink objects finished... sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 83 (17s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity-quota test 84: Reset quota should fix the insane granted quota ========================================================== 10:26:27 (1713536787) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Creating test directory fail_val=0 fail_loc=0 Creating new pool oleg451-server: Pool lustre.qpool1 created Adding targets to pool oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1 oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 0 0 10485760 - 0 0 0 - lustre-MDT0000_UUID 0 - 0 - 0 - 0 - lustre-OST0000_UUID 0 - 0 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 0 osd-zfs.lustre-OST0000.quota_slave.force_reint=1 0 /mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1 lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 obdidx objid objid group 0 66 0x42 0x240000400 running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=60] [conv=nocreat] [oflag=direct] 60+0 records in 60+0 records out 62914560 bytes (63 MB) copied, 3.21892 s, 19.5 MB/s Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 61458 0 10485760 - 2 0 0 - lustre-MDT0000_UUID 13 - 0 - 2 - 0 - lustre-OST0000_UUID 61445 - 1048576 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 1048576 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 61458 0 5242880 - 2 0 0 - Pool: lustre.qpool1 lustre-OST0000_UUID 61445 - 1048576 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 1048576 fail_val=0 fail_loc=0xa08 fail_val=0 fail_loc=0xa08 Disk quotas for grp quota_usr (gid 60000): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre 61458 0 0 - 2 0 0 - lustre-MDT0000_UUID 13 - 0 - 2 - 0 - lustre-OST0000_UUID 61445 - 18446744073707374604 - - - - - lustre-OST0001_UUID 0 - 0 - - - - - Total allocated inode limit: 0, total allocated block limit: 18446744073707374604 Disk quotas for grp quota_usr (gid 
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 84: Reset quota should fix the insane granted quota ========================================================== 10:26:27 (1713536787)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg451-server: Pool lustre.qpool1 created
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0  10485760       -       0       0       0       -
lustre-MDT0000_UUID   0       -         0       -       0       -       0       -
lustre-OST0000_UUID   0       -         0       -       -       -       -       -
lustre-OST0001_UUID   0       -         0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
osd-zfs.lustre-OST0000.quota_slave.force_reint=1
0
/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx		objid		objid		group
	     0		   66		 0x42		0x240000400
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=60] [conv=nocreat] [oflag=direct]
60+0 records in
60+0 records out
62914560 bytes (63 MB) copied, 3.21892 s, 19.5 MB/s
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0  10485760       -       2       0       0       -
lustre-MDT0000_UUID  13       -         0       -       2       -       0       -
lustre-OST0000_UUID 61445     -   1048576       -       -       -       -       -
lustre-OST0001_UUID   0       -         0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1048576
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0   5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61445     -   1048576       -       -       -       -       -
lustre-OST0001_UUID   0       -         0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 1048576
fail_val=0
fail_loc=0xa08
fail_val=0
fail_loc=0xa08
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0         0       -       2       0       0       -
lustre-MDT0000_UUID  13       -         0       -       2       -       0       -
lustre-OST0000_UUID 61445     - 18446744073707374604   -       -       -       -       -
lustre-OST0001_UUID   0       -         0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 18446744073707374604
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0   5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61445     - 18446744073707374604   -       -       -       -       -
lustre-OST0001_UUID   0       -         0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 18446744073707374604
fail_val=0
fail_loc=0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0         0       -       2       0       0       -
lustre-MDT0000_UUID  13       -         0       -       2       -       0       -
lustre-OST0000_UUID 61445     -         0       -       -       -       -       -
lustre-OST0001_UUID   0       -         0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0   5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61445     -         0       -       -       -       -       -
lustre-OST0001_UUID   0       -         0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0   5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61445     -         0       -       -       -       -       -
lustre-OST0001_UUID   0       -         0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0   5242880       -       2       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID 61445     -         0       -       -       -       -       -
lustre-OST0001_UUID   0       -         0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre   61458       0    102400       -       2       0       0       -
lustre-MDT0000_UUID  13*      -        13       -       2       -       0       -
lustre-OST0000_UUID 61445*    -     61445       -       -       -       -       -
lustre-OST0001_UUID   0       -         0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 61445
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=200] [conv=nocreat] [oflag=direct]
dd: error writing '/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1': Disk quota exceeded
100+0 records in
99+0 records out
103809024 bytes (104 MB) copied, 5.51621 s, 18.8 MB/s
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota     limit   grace   files   quota   limit   grace
    /mnt/lustre  101395       0    307200       -       2       0       0       -
lustre-MDT0000_UUID  13*      -        13       -       2       -       0       -
lustre-OST0000_UUID 101382    -    102387       -       -       -       -       -
lustre-OST0001_UUID   0       -         0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 102387
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d84.sanity-quota/dir1/f84.sanity-quota-1] [count=200] [conv=nocreat] [oflag=direct]
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 10.7873 s, 19.4 MB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg451-server: Pool lustre.qpool1 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 84 (79s)
debug_raw_pointers=0
debug_raw_pointers=0
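A note on the tables above: 18446744073707374604 is 2^64 minus roughly 2 GiB expressed in kbytes, i.e. an underflowed per-OST granted limit, which the quota reset exercised by test 84 brings back to 0 before sane limits (61445, then 102387 kbytes) are granted again. The per-target view that exposes such values can be pulled up with (a sketch reusing the group and pool names from the log):

    # verbose view with per-MDT/per-OST granted limits
    lfs quota -g quota_usr -v /mnt/lustre
    # the same view restricted to one quota pool
    lfs quota -g quota_usr --pool qpool1 /mnt/lustre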
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 85: do not hang at write with the least_qunit ========================================================== 10:27:48 (1713536868)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg451-server: Pool lustre.qpool1 created
Waiting 90s for ''
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Creating new pool
oleg451-server: Pool lustre.qpool2 created
Adding targets to pool
oleg451-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg451-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d85.sanity-quota/f85.sanity-quota-0] [count=10]
dd: error writing '/mnt/lustre/d85.sanity-quota/f85.sanity-quota-0': Disk quota exceeded
4+0 records in
3+0 records out
3145728 bytes (3.1 MB) copied, 0.18494 s, 17.0 MB/s
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg451-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg451-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg451-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg451-server: Pool lustre.qpool2 destroyed
Waiting 90s for 'foo'
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 85 (53s)
debug_raw_pointers=0
debug_raw_pointers=0
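Test 85's scenario, two overlapping pools whose limits sit near the least qunit, looks roughly like this when driven by hand (a sketch: the limit values are illustrative, and runas is the Lustre test helper that produces the "running as uid/gid/..." lines above):

    # both pools already contain OST0000 and OST0001
    lfs setquota -u quota_usr -B 10M --pool qpool1 /mnt/lustre
    lfs setquota -u quota_usr -B 3M --pool qpool2 /mnt/lustre
    # the write must fail with EDQUOT instead of hanging
    runas -u 60000 dd if=/dev/zero of=/mnt/lustre/d85.sanity-quota/f85.sanity-quota-0 bs=1M count=10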
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity-quota test 86: Pre-acquired quota should be released if quota is over limit ========================================================== 10:28:43 (1713536923)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000]
 - create 2517 (time 1713536941.55 total 10.00 last 251.60)
total: 5000 create in 19.72 seconds: 253.49 ops/second
sleep 5 for ZFS zfs
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded
total: 0 create in 0.01 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000]
 - create 2516 (time 1713537012.16 total 10.00 last 251.59)
total: 5000 create in 19.90 seconds: 251.27 ops/second
sleep 5 for ZFS zfs
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded
total: 0 create in 0.01 seconds: 0.00 ops/second
lfs project -sp 1000 /mnt/lustre/d86.sanity-quota
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile-] [5000]
 - create 2531 (time 1713537082.81 total 10.00 last 253.01)
total: 5000 create in 19.86 seconds: 251.82 ops/second
sleep 5 for ZFS zfs
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile2-] [10]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile2-0) error: Disk quota exceeded
total: 0 create in 0.00 seconds: 0.00 ops/second
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [createmany] [-m] [/mnt/lustre/d86.sanity-quota/test_dir/tfile3-] [30]
mknod(/mnt/lustre/d86.sanity-quota/test_dir/tfile3-0) error: Disk quota exceeded
total: 0 create in 0.01 seconds: 0.00 ops/second
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 86 (230s)
debug_raw_pointers=0
debug_raw_pointers=0
== sanity-quota test complete, duration 6175 sec ========= 10:32:35 (1713537155)
=== sanity-quota: start cleanup 10:32:36 (1713537156) ===
=== sanity-quota: finish cleanup 10:32:36 (1713537156) ===
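For completeness, the behaviour test 86 verifies, cached inode grants being released once an id goes over limit, can be driven by hand in this shape (a sketch: the 5000-file limit is inferred from the createmany runs above, and createmany/runas are the Lustre test utilities seen throughout the log):

    # inode (file-count) hard limit for the test user
    lfs setquota -u quota_usr -I 5000 /mnt/lustre
    # fill the limit; pre-acquired grants must then be released
    runas -u 60000 createmany -m /mnt/lustre/d86.sanity-quota/test_dir/tfile- 5000
    # further creates should fail immediately with EDQUOT
    runas -u 60000 createmany -m /mnt/lustre/d86.sanity-quota/test_dir/tfile2- 10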