== conf-sanity test 33c: Mount ost with a large index number ========================================================== 18:52:52 (1713480772)
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg454-server'
oleg454-server: oleg454-server.virtnet: executing load_modules_local
oleg454-server: Loading modules from /home/green/git/lustre-release/lustre
oleg454-server: detected 4 online CPUs by sysfs
oleg454-server: Force libcfs to create 2 CPU partitions
oleg454-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg454-server: quota/lquota options: 'hash_lqs_cur_bits=3'

   Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: zfs
Flags:      0x65
            (MDT MGS first_time update )
Persistent mount opts:
Parameters: mgsnode=192.168.204.154@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

mkfs_cmd = zpool create -f -O canmount=off lustre-mdt1_2 /tmp/lustre-mdt1_2
mkfs_cmd = zfs create -o canmount=off -o quota=409600000 lustre-mdt1_2/mdt1_2 xattr=sa dnodesize=auto
Writing lustre-mdt1_2/mdt1_2 properties
  lustre:mgsnode=192.168.204.154@tcp
  lustre:sys.timeout=20
  lustre:mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity
  lustre:version=1
  lustre:flags=101
  lustre:index=0
  lustre:fsname=lustre
  lustre:svname=lustre:MDT0000

   Permanent disk data:
Target:     lustre:OST07c6
Index:      1990
Lustre FS:  lustre
Mount type: zfs
Flags:      0x62
            (OST first_time update )
Persistent mount opts:
Parameters: sys.timeout=20 mgsnode=192.168.204.154@tcp:192.168.204.154@tcp autodegrade=on

mkfs_cmd = zpool create -f -O canmount=off lustre-ost1_2 /tmp/lustre-ost1_2
mkfs_cmd = zfs create -o canmount=off -o quota=409600000 lustre-ost1_2/ost1_2 xattr=sa dnodesize=auto recordsize=1M
Writing lustre-ost1_2/ost1_2 properties
  lustre:sys.timeout=20
  lustre:mgsnode=192.168.204.154@tcp:192.168.204.154@tcp
  lustre:autodegrade=on
  lustre:version=1
  lustre:flags=98
  lustre:index=1990
  lustre:fsname=lustre
  lustre:svname=lustre:OST07c6
Starting fs2mds: -o localrecov lustre-mdt1_2/mdt1_2 /mnt/lustre-fs2mds
oleg454-server: oleg454-server.virtnet: executing set_default_debug -1 all
pdsh@oleg454-client: oleg454-server: ssh exited with exit code 1
Commit the device label on lustre-mdt1_2/mdt1_2
Started lustre-MDT0000
Starting fs2ost: -o localrecov lustre-ost1_2/ost1_2 /mnt/lustre-fs2ost
seq.cli-lustre-OST07c6-super.width=65536
oleg454-server: oleg454-server.virtnet: executing set_default_debug -1 all
pdsh@oleg454-client: oleg454-server: ssh exited with exit code 1
Started lustre-OST07c6
mount lustre on /mnt/lustre.....
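The mkfs_cmd lines above only show the ZFS pool and dataset creation; the Lustre labels (index 1990 = 0x07c6, fsname, mgsnode) are written by mkfs.lustre before the targets are started. A minimal sketch of the equivalent manual sequence, reusing the device paths, NID and mount points from this run; options not visible in the log (such as --reformat) are assumptions:

  # Sketch only: format and start a ZFS-backed OST with a large index (1990 = 0x07c6).
  mkfs.lustre --reformat --mgs --mdt --backfstype=zfs --fsname=lustre --index=0 \
      --mgsnode=192.168.204.154@tcp lustre-mdt1_2/mdt1_2 /tmp/lustre-mdt1_2
  mkfs.lustre --reformat --ost --backfstype=zfs --fsname=lustre --index=1990 \
      --mgsnode=192.168.204.154@tcp lustre-ost1_2/ost1_2 /tmp/lustre-ost1_2

  # Start the combined MGS/MDT first, then the OST; the OST registers as lustre-OST07c6.
  mount -t lustre -o localrecov lustre-mdt1_2/mdt1_2 /mnt/lustre-fs2mds
  mount -t lustre -o localrecov lustre-ost1_2/ost1_2 /mnt/lustre-fs2ost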
Starting client: oleg454-client.virtnet: -o user_xattr,flock oleg454-server@tcp:/lustre /mnt/lustre
Creating new pool
oleg454-server: Pool lustre.qpool1 created
Waiting 90s for ''
Adding targets to pool
oleg454-server: OST lustre-OST07c6_UUID added to pool lustre.qpool1
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.664158 s, 30.0 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0435143 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.72525 s, 27.5 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0454082 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0451039 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.711222 s, 28.0 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0409842 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0454769 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.708293 s, 28.1 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0347611 s, 0.0 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg454-server: OST lustre-OST07c6_UUID removed from pool lustre.qpool1
oleg454-server: Pool lustre.qpool1 destroyed
umount lustre on /mnt/lustre.....
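The log shows only the effect of the quota, not the limit itself: each 30 MB direct-I/O write as uid 60000 is cut off with "Disk quota exceeded" after roughly 20 MB, and repeat attempts fail immediately. A rough sketch of the client-side sequence, assuming an OST-pool block quota of about 20M for the test user; the exact limit, the user name quota_usr, and the use of the runas test helper (an unprivileged sudo would do the same) are assumptions, not taken from the log:

  # Sketch only: create an OST pool, attach a pool quota, and exercise it with dd.
  lctl pool_new lustre.qpool1
  lctl pool_add lustre.qpool1 lustre-OST07c6
  lfs setquota -u quota_usr -B 20M --pool qpool1 /mnt/lustre

  # Run as the unprivileged test uid (60000 in this log); the write stops with
  # EDQUOT once the pool limit is reached.
  runas -u 60000 -g 60000 dd if=/dev/zero of=/mnt/lustre/d33c.conf-sanity/f1 \
      bs=1M count=30 oflag=direct

  # Pool cleanup, as at the end of the test.
  lctl pool_remove lustre.qpool1 lustre-OST07c6
  lctl pool_destroy lustre.qpool1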
Stopping client oleg454-client.virtnet /mnt/lustre (opts:)
Stopping /mnt/lustre-fs2ost (opts:-f) on oleg454-server
Stopping /mnt/lustre-fs2mds (opts:-f) on oleg454-server
unloading modules on: 'oleg454-server'
oleg454-server: oleg454-server.virtnet: executing unload_modules_local
modules unloaded.
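For reference, a minimal sketch of the equivalent manual teardown for this configuration, assuming the mount points from the log; the test framework performs the same steps via its stop helpers:

  # Sketch only: unmount client and targets, then unload the Lustre/LNet modules.
  umount /mnt/lustre              # client mount
  umount -f /mnt/lustre-fs2ost    # OST lustre-OST07c6
  umount -f /mnt/lustre-fs2mds    # combined MGS/MDT
  lustre_rmmod                    # unload Lustre and LNet kernel modules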