== conf-sanity test 33c: Mount ost with a large index number ========================================================== 04:42:44 (1713429764)
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg208-server'
oleg208-server: oleg208-server.virtnet: executing load_modules_local
oleg208-server: Loading modules from /home/green/git/lustre-release/lustre
oleg208-server: detected 4 online CPUs by sysfs
oleg208-server: Force libcfs to create 2 CPU partitions
oleg208-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg208-server: quota/lquota options: 'hash_lqs_cur_bits=3'

Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: zfs
Flags:      0x65
            (MDT MGS first_time update )
Persistent mount opts:
Parameters: mgsnode=192.168.202.108@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

mkfs_cmd = zpool create -f -O canmount=off lustre-mdt1_2 /tmp/lustre-mdt1_2
mkfs_cmd = zfs create -o canmount=off -o quota=409600000 lustre-mdt1_2/mdt1_2
  xattr=sa
  dnodesize=auto
Writing lustre-mdt1_2/mdt1_2 properties
  lustre:mgsnode=192.168.202.108@tcp
  lustre:sys.timeout=20
  lustre:mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity
  lustre:version=1
  lustre:flags=101
  lustre:index=0
  lustre:fsname=lustre
  lustre:svname=lustre:MDT0000

Permanent disk data:
Target:     lustre:OST07c6
Index:      1990
Lustre FS:  lustre
Mount type: zfs
Flags:      0x62
            (OST first_time update )
Persistent mount opts:
Parameters: sys.timeout=20 mgsnode=192.168.202.108@tcp:192.168.202.108@tcp autodegrade=on

mkfs_cmd = zpool create -f -O canmount=off lustre-ost1_2 /tmp/lustre-ost1_2
mkfs_cmd = zfs create -o canmount=off -o quota=409600000 lustre-ost1_2/ost1_2
  xattr=sa
  dnodesize=auto
  recordsize=1M
Writing lustre-ost1_2/ost1_2 properties
  lustre:sys.timeout=20
  lustre:mgsnode=192.168.202.108@tcp:192.168.202.108@tcp
  lustre:autodegrade=on
  lustre:version=1
  lustre:flags=98
  lustre:index=1990
  lustre:fsname=lustre
  lustre:svname=lustre:OST07c6
Starting fs2mds: -o localrecov lustre-mdt1_2/mdt1_2 /mnt/lustre-fs2mds
oleg208-server: oleg208-server.virtnet: executing set_default_debug -1 all
pdsh@oleg208-client: oleg208-server: ssh exited with exit code 1
Commit the device label on lustre-mdt1_2/mdt1_2
Started lustre-MDT0000
Starting fs2ost: -o localrecov lustre-ost1_2/ost1_2 /mnt/lustre-fs2ost
seq.cli-lustre-OST07c6-super.width=65536
oleg208-server: oleg208-server.virtnet: executing set_default_debug -1 all
pdsh@oleg208-client: oleg208-server: ssh exited with exit code 1
Started lustre-OST07c6
mount lustre on /mnt/lustre.....
Starting client: oleg208-client.virtnet: -o user_xattr,flock oleg208-server@tcp:/lustre /mnt/lustre
Creating new pool
oleg208-server: Pool lustre.qpool1 created
Waiting 90s for ''
Adding targets to pool
oleg208-server: OST lustre-OST07c6_UUID added to pool lustre.qpool1
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.966471 s, 20.6 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.05385 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.869735 s, 22.9 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0510768 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0628472 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.846249 s, 23.5 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0501368 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0632164 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 1.02371 s, 19.5 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0563636 s, 0.0 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg208-server: OST lustre-OST07c6_UUID removed from pool lustre.qpool1
oleg208-server: Pool lustre.qpool1 destroyed
umount lustre on /mnt/lustre.....
Stopping client oleg208-client.virtnet /mnt/lustre (opts:)
Stopping /mnt/lustre-fs2ost (opts:-f) on oleg208-server
Stopping /mnt/lustre-fs2mds (opts:-f) on oleg208-server
unloading modules on: 'oleg208-server'
oleg208-server: oleg208-server.virtnet: executing unload_modules_local
modules unloaded.
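Note on the label values above: the target name OST07c6 and the zfs flag properties are derived, not independent — Lustre names a target by its index zero-padded to four hex digits, and the `lustre:flags` property stores the label's hex flags field in decimal. A quick shell check (plain shell arithmetic, not part of the test framework) confirms the values in this log are consistent:

```shell
# Index 1990 -> four-digit hex suffix in the target name
printf 'lustre:OST%04x\n' 1990    # -> lustre:OST07c6

# Label flags are shown in hex; the zfs property stores them in decimal
echo $((0x65))                    # -> 101 (MDT label: lustre:flags=101)
echo $((0x62))                    # -> 98  (OST label: lustre:flags=98)
```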