== sanity-quota test 80: check for EDQUOT after OST failover ========================================================== 11:38:43 (1713368323)
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Creating new pool
oleg314-server: Pool lustre.qpool1 created
Adding targets to pool
oleg314-server: OST lustre-OST0000_UUID added to pool lustre.qpool1
oleg314-server: OST lustre-OST0001_UUID added to pool lustre.qpool1
/mnt/lustre/d80.sanity-quota/dir1
stripe_count:  1 stripe_size:   4194304 pattern:  raid0 stripe_offset:  1
/mnt/lustre/d80.sanity-quota/dir2
stripe_count:  1 stripe_size:   4194304 pattern:  raid0 stripe_offset:  0
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre       8       0  102400       -       2       0       0       -
lustre-MDT0000_UUID       8       -   16384       -       2       -       0       -
lustre-MDT0001_UUID       0       -       0       -       0       -       0       -
lustre-OST0000_UUID       0       -       0       -       -       -       -       -
lustre-OST0001_UUID       0       -       0       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 0
fail_loc=0xa06
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir2/f80.sanity-quota-0] [count=3]
3+0 records in
3+0 records out
3145728 bytes (3.1 MB) copied, 0.058645 s, 53.6 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir1/f80.sanity-quota-2] [count=7]
7+0 records in
7+0 records out
7340032 bytes (7.3 MB) copied, 0.09902 s, 74.1 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir1/f80.sanity-quota-1] [count=1] [oflag=direct]
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0179304 s, 58.5 MB/s
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre  11272*       0   10240       -       5       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID    3072       -    4096       -       -       -       -       -
lustre-OST0001_UUID   8192*       -    5120       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 9216
Stopping /mnt/lustre-ost2 (opts:) on oleg314-server
fail_loc=0
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg314-server: oleg314-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg314-client: oleg314-server: ssh exited with exit code 1
Started lustre-OST0001
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4104       0   10240       -       4       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID    3072       -    4096       -       -       -       -       -
lustre-OST0001_UUID    1024       -    2048       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6144
Disk quotas for usr quota_usr (uid 60000):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
    /mnt/lustre    4104       0   10240       -       4       0       0       -
Pool: lustre.qpool1
lustre-OST0000_UUID    3072       -    4096       -       -       -       -       -
lustre-OST0001_UUID    1024       -    2048       -       -       -       -       -
Total allocated inode limit: 0, total allocated block limit: 6144
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [bs=1M] [of=/mnt/lustre/d80.sanity-quota/dir1/f80.sanity-quota-0] [count=2] [oflag=direct]
2+0 records in
2+0 records out
2097152 bytes (2.1 MB) copied, 0.0337293 s, 62.2 MB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg314-server: OST lustre-OST0000_UUID removed from pool lustre.qpool1
oleg314-server: OST lustre-OST0001_UUID removed from pool lustre.qpool1
oleg314-server: Pool lustre.qpool1 destroyed
Delete files...
Wait for unlink objects finished...
Waiting for MDT destroys to complete
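For reference, the pool-quota setup exercised by this log can be reproduced roughly with the following commands. This is a minimal sketch, not the test script itself: pool name, directories, the quota_usr account, and the visible limits are taken from the log above, while the exact limit values applied between the two quota reports and the fail_loc injection plumbing are test-framework details not shown here. Requires a mounted Lustre client at /mnt/lustre and a release with OST pool quota support.

```shell
# Create an OST pool and add both OSTs to it (run on the MGS/server side).
lctl pool_new lustre.qpool1
lctl pool_add lustre.qpool1 OST[0000-0001]

# Stripe each test directory over a single, specific OST,
# matching the stripe_offset values shown in the log.
lfs setstripe -c 1 -i 1 /mnt/lustre/d80.sanity-quota/dir1   # objects on OST0001
lfs setstripe -c 1 -i 0 /mnt/lustre/d80.sanity-quota/dir2   # objects on OST0000

# Give the user a global block limit plus a tighter per-pool limit;
# writes landing in qpool1 then hit EDQUOT against the pool limit first.
lfs setquota -u quota_usr -B 100M /mnt/lustre
lfs setquota -u quota_usr -B 10M --pool qpool1 /mnt/lustre

# Per-pool quota report, as printed twice in the log after failover.
lfs quota -u quota_usr --pool qpool1 /mnt/lustre
```

The point of the failover step is that the per-OST granted limits (the asterisked 8192* against a 5120 limit before restart, versus 1024/2048 after) must be re-acquired correctly once lustre-OST0001 comes back, so the final direct-I/O dd succeeds instead of returning EDQUOT.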