== sanity-quota test 1f: Quota pools: correct qunit after removing/adding OST ========================================================== 10:57:24 (1713279444)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
User quota (block hardlimit:200 MB)
Creating new pool
oleg108-server: Pool lustre.qpool1 created
Adding targets to pool
oleg108-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Waiting 90s for 'lustre-OST0000_UUID '
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.165267 s, 31.7 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.176471 s, 29.7 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0559854 s, 0.0 kB/s
Removing lustre-OST0000_UUID from qpool1
oleg108-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
pdsh@oleg108-client: oleg108-server: ssh exited with exit code 1
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Adding targets to pool
oleg108-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
Write...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.200042 s, 26.2 MB/s
Write out of block quota ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=5] [seek=5]
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.157661 s, 33.3 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0] [count=1] [seek=10]
dd: error writing '/mnt/lustre/d1f.sanity-quota//f1f.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0821392 s, 0.0 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg108-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg108-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
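
The log above walks through the quota-pool scenario by hand: set a global user block hardlimit, create pool qpool1 containing lustre-OST0000, write until EDQUOT, then remove and re-add the OST and confirm the limit is still enforced at the same point. The commands below are a minimal sketch of roughly equivalent manual steps, not the sanity-quota.sh code itself. The quota_usr name, the 10M per-pool limit, and the `lfs setquota --pool` syntax (Lustre 2.14+ quota pools) are assumptions inferred from the log (writes succeed up to 10 MB, then fail); `runas` is the Lustre test suite's setuid helper visible in the "running as uid/gid" lines.

# Global user block hardlimit, as reported by the log ("block hardlimit:200 MB").
lfs setquota -u quota_usr -B 200M /mnt/lustre

# Create the pool and add OST0000 (run on the MGS, here oleg108-server).
lctl pool_new lustre.qpool1
lctl pool_add lustre.qpool1 lustre-OST0000

# Assumed per-pool limit of 10M for quota_usr on qpool1.
lfs setquota -u quota_usr -B 10M --pool qpool1 /mnt/lustre

# Writes inside the pool limit succeed; the write past 10 MB should fail with
# "Disk quota exceeded", matching the dd output in the log.
runas -u 60000 -g 60000 dd if=/dev/zero of=/mnt/lustre/d1f.sanity-quota/f1f.sanity-quota-0 bs=1M count=5
runas -u 60000 -g 60000 dd if=/dev/zero of=/mnt/lustre/d1f.sanity-quota/f1f.sanity-quota-0 bs=1M count=5 seek=5
runas -u 60000 -g 60000 dd if=/dev/zero of=/mnt/lustre/d1f.sanity-quota/f1f.sanity-quota-0 bs=1M count=1 seek=10

# Remove and re-add the OST; the test then repeats the writes to verify the
# qunit is recalculated correctly and EDQUOT is hit at the same offset.
lctl pool_remove lustre.qpool1 lustre-OST0000
lctl pool_add lustre.qpool1 lustre-OST0000

# Cleanup, mirroring the end of the log.
lctl pool_remove lustre.qpool1 lustre-OST0000
lctl pool_destroy lustre.qpool1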