== sanity-quota test 3c: Quota pools: check block soft limit on different pools ========================================================== 11:14:12 (1713280452)
limit 4 limit2 8 glbl_limit 12
grace1 70 grace2 60 glbl_grace 80
User quota in qpool2 (soft: 8 MB, grace: 60 seconds)
Creating new pool
oleg108-server: Pool lustre.qpool1 created
Adding targets to pool
oleg108-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg108-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
Creating new pool
oleg108-server: Pool lustre.qpool2 created
Adding targets to pool
oleg108-server: OST lustre-OST0000_UUID added to pool lustre.qpool2
oleg108-server: OST lustre-OST0001_UUID added to pool lustre.qpool2
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write up to soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [count=8]
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.354038 s, 23.7 MB/s
Write to exceed soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=8192]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.212561 s, 48.2 kB/s
mmap write when over soft limit
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [multiop] [/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0.mmap] [OT40960SMW]
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    9285   12288       0       -       0       0       0       -
lustre-MDT0000_UUID
                      0       -       0       -       0       -       0       -
lustre-OST0000_UUID
                   9220       -   10244       -       -       -       -       -
lustre-OST0001_UUID
                     66       -     130       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 10374
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    9285       0       0       -       0       0       0       -
lustre-MDT0000_UUID
                      0       -       0       -       0       -       0       -
lustre-OST0000_UUID
                   9220       -       0       -       -       -       -       -
lustre-OST0001_UUID
                     66       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID
                      0       -       0       -       0       -       0       -
lustre-OST0000_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0001_UUID
                      0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write before timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=9216]
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.201825 s, 50.7 kB/s
Quota info for qpool2:
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre  10309*    8192       0      1m       0       0       0       -
Grace time is 1m
Sleep through grace ...
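The setup behind the log above can be sketched as follows. All names (`lustre`, `qpool1`, `qpool2`, `quota_usr`, `/mnt/lustre`, the OST indices) are taken from the log; the exact flag spellings assume a Lustre release with OST pool quota support (2.14 or later) and may differ between versions, so treat this as a sketch rather than the test's literal commands. It is not runnable outside a live Lustre test rig.

```shell
# Sketch: create the two OST pools and set pool-aware user quotas,
# matching "limit 4 limit2 8 glbl_limit 12" and
# "grace1 70 grace2 60 glbl_grace 80" in the log.

# Create two pools and add both OSTs to each (run against the MGS).
lctl pool_new lustre.qpool1
lctl pool_add lustre.qpool1 lustre-OST[0000-0001]
lctl pool_new lustre.qpool2
lctl pool_add lustre.qpool2 lustre-OST[0000-0001]

# Global block soft limit of 12 MiB, plus per-pool soft limits of
# 4 MiB (qpool1) and 8 MiB (qpool2) for the same user.
lfs setquota -u quota_usr -b 12M /mnt/lustre
lfs setquota -u quota_usr -b 4M --pool qpool1 /mnt/lustre
lfs setquota -u quota_usr -b 8M --pool qpool2 /mnt/lustre

# Block grace times: 80 s globally, 70 s for qpool1, 60 s for qpool2.
lfs setquota -t -u --block-grace 80 /mnt/lustre
lfs setquota -t -u --block-grace 70 --pool qpool1 /mnt/lustre
lfs setquota -t -u --block-grace 60 --pool qpool2 /mnt/lustre
```

With these limits in place, the 8 MiB write fills the qpool2 soft limit exactly, and the following small writes push the user over it, which starts the 60-second pool grace timer seen later in the log.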
...sleep 65 seconds
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   10311   12288       0       -       2       0       0       -
lustre-MDT0000_UUID
                      2       -       0       -       2       -       0       -
lustre-OST0000_UUID
                  10244       -   11268       -       -       -       -       -
lustre-OST0001_UUID
                     66       -     130       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 11398
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   10311       0       0       -       2       0       0       -
lustre-MDT0000_UUID
                      2       -       0       -       2       -       0       -
lustre-OST0000_UUID
                  10244       -       0       -       -       -       -       -
lustre-OST0001_UUID
                     66       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID
                      0       -       0       -       0       -       0       -
lustre-OST0000_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0001_UUID
                      0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Write after timer goes off
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=4096] [seek=10240]
dd: error writing '/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0': Disk quota exceeded
2+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.0290522 s, 35.2 kB/s
Write after cancel lru locks
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [bs=1K] [count=10] [seek=14336]
dd: error writing '/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00713854 s, 0.0 kB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   11335   12288       0       -       2       0       0       -
lustre-MDT0000_UUID
                      2       -       0       -       2       -       0       -
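In the qpool2 report earlier in the log (`10309* 8192 0 1m`), the `*` after the kbytes value is how `lfs quota` marks usage over the soft limit, and the grace column shows the time remaining before the soft limit starts being enforced as a hard one. A small sketch of pulling those fields out of such a line; the sample line is copied from the log, and the parsing is plain awk, not a Lustre API:

```shell
# Sample per-pool report line from the log: 10309 KB used against an
# 8192 KB soft limit, '*' marking the exceeded soft limit, 1m grace left.
line='    /mnt/lustre  10309*    8192       0      1m       0       0       0       -'

# Field 2 is block usage in KB; strip the '*' marker to get the number.
used=$(awk '{gsub(/\*/, "", $2); print $2}' <<<"$line")
# The trailing '*' itself tells us the soft limit is exceeded.
over=$(awk '{print ($2 ~ /\*$/) ? "yes" : "no"}' <<<"$line")
# Field 3 is the block soft limit (the "quota" column), also in KB.
soft=$(awk '{print $3}' <<<"$line")

echo "used=${used}KB soft=${soft}KB over_soft=${over}"
# prints: used=10309KB soft=8192KB over_soft=yes
```

This is the same check the test itself performs: once the grace column stops showing a countdown and the writes start failing with `Disk quota exceeded`, the pool soft limit has hardened, as seen in the two failed `dd` runs above.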
lustre-OST0000_UUID
                 11268*      -   11268       -       -       -       -       -
lustre-OST0001_UUID
                     66       -     130       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 11398
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre   11335       0       0       -       2       0       0       -
lustre-MDT0000_UUID
                      2       -       0       -       2       -       0       -
lustre-OST0000_UUID
                  11268       -       0       -       -       -       -       -
lustre-OST0001_UUID
                     66       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID
                      0       -       0       -       0       -       0       -
lustre-OST0000_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0001_UUID
                      0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
Unlink file to stop timer
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      66   12288       0       -       1       0       0       -
lustre-MDT0000_UUID
                      1*      -       1       -       1       -       0       -
lustre-OST0000_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0001_UUID
                     66       -     130       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 130
Disk quotas for grp quota_usr (gid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre      66       0       0       -       1       0       0       -
lustre-MDT0000_UUID
                      1       -       0       -       1       -       0       -
lustre-OST0000_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0001_UUID
                     66       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Disk quotas for prj 1000 (pid 1000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       0       0       0       -       0       0       0       -
lustre-MDT0000_UUID
                      0       -       0       -       0       -       0       -
lustre-OST0000_UUID
                      0       -       0       -       -       -       -       -
lustre-OST0001_UUID
                      0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
Block grace time: 1m20s; Inode grace time: 1w
Block grace time: 1w; Inode grace time: 1w
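After the unlink, the test does not trust the quota numbers immediately: it sleeps ("sleep 5 for ZFS") and waits for MDT destroys to complete, because freed blocks are only credited back to the user once the OST objects are actually destroyed. A generic sketch of that wait loop; the helper name and the stub probe are hypothetical (on a real system the probe would parse `lfs quota` output, as in the parsing sketch above):

```shell
# Hypothetical helper: re-run a probe command until it reports block
# usage (in KB) below a threshold, or give up after a fixed number of
# tries -- the same shape as the test's wait for MDT destroys.
wait_for_usage_below() {
    local limit_kb=$1; shift
    local tries=12 used_kb
    while (( tries-- > 0 )); do
        used_kb=$("$@") || return 1
        (( used_kb < limit_kb )) && return 0
        sleep 5
    done
    return 1
}

# Example with a stub probe that already reports 66 KB, the settled
# usage shown in the log after the unlink completes.
wait_for_usage_below 100 echo 66 && echo "usage settled"
# prints: usage settled
```

Once usage has settled back to 66 KB, the grace timer is gone and the next 8 MiB write below succeeds again, confirming the soft limit only hardens while the user stays over it.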
Block grace time: 1w; Inode grace time: 1w
Write ...
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d3c.sanity-quota/f3c.sanity-quota-0] [count=8]
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.335318 s, 25.0 MB/s
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Destroy the created pools: qpool1,qpool2
lustre.qpool1
oleg108-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg108-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg108-server: Pool lustre.qpool1 destroyed
lustre.qpool2
oleg108-server: OST lustre-OST0000_UUID removed from pool lustre.qpool2
oleg108-server: OST lustre-OST0001_UUID removed from pool lustre.qpool2
oleg108-server: Pool lustre.qpool2 destroyed
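The teardown at the end mirrors the setup: each OST is removed from a pool before the pool itself is destroyed, since a pool cannot be destroyed while it still has members. A sketch using the pool names from the log (flag spellings are from my reading of `lctl` and may vary by release; not runnable outside a Lustre test rig):

```shell
# Remove the OSTs from each pool, then destroy the now-empty pools
# (run against the MGS, as with pool_new/pool_add above).
lctl pool_remove lustre.qpool1 lustre-OST[0000-0001]
lctl pool_destroy lustre.qpool1
lctl pool_remove lustre.qpool2 lustre-OST[0000-0001]
lctl pool_destroy lustre.qpool2
```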