== conf-sanity test 33c: Mount ost with a large index number ========================================================== 10:32:48 (1713364368)
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg224-server'
oleg224-server: oleg224-server.virtnet: executing load_modules_local
oleg224-server: Loading modules from /home/green/git/lustre-release/lustre
oleg224-server: detected 4 online CPUs by sysfs
oleg224-server: Force libcfs to create 2 CPU partitions
oleg224-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
oleg224-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg224-server: quota/lquota options: 'hash_lqs_cur_bits=3'

   Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: zfs
Flags:      0x65
              (MDT MGS first_time update )
Persistent mount opts:
Parameters: mgsnode=192.168.202.124@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

mkfs_cmd = zpool create -f -O canmount=off lustre-mdt1_2 /tmp/lustre-mdt1_2
mkfs_cmd = zfs create -o canmount=off -o quota=409600000 lustre-mdt1_2/mdt1_2 xattr=sa dnodesize=auto
Writing lustre-mdt1_2/mdt1_2 properties
  lustre:mgsnode=192.168.202.124@tcp
  lustre:sys.timeout=20
  lustre:mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity
  lustre:version=1
  lustre:flags=101
  lustre:index=0
  lustre:fsname=lustre
  lustre:svname=lustre:MDT0000

   Permanent disk data:
Target:     lustre:OST07c6
Index:      1990
Lustre FS:  lustre
Mount type: zfs
Flags:      0x62
              (OST first_time update )
Persistent mount opts:
Parameters: sys.timeout=20 mgsnode=192.168.202.124@tcp:192.168.202.124@tcp autodegrade=on

mkfs_cmd = zpool create -f -O canmount=off lustre-ost1_2 /tmp/lustre-ost1_2
mkfs_cmd = zfs create -o canmount=off -o quota=409600000 lustre-ost1_2/ost1_2 xattr=sa dnodesize=auto recordsize=1M
Writing lustre-ost1_2/ost1_2 properties
  lustre:sys.timeout=20
  lustre:mgsnode=192.168.202.124@tcp:192.168.202.124@tcp
  lustre:autodegrade=on
  lustre:version=1
  lustre:flags=98
  lustre:index=1990
  lustre:fsname=lustre
  lustre:svname=lustre:OST07c6
Starting fs2mds:   -o localrecov  lustre-mdt1_2/mdt1_2 /mnt/lustre-fs2mds
oleg224-server: oleg224-server.virtnet: executing set_default_debug -1 all
pdsh@oleg224-client: oleg224-server: ssh exited with exit code 1
Commit the device label on lustre-mdt1_2/mdt1_2
Started lustre-MDT0000
Starting fs2ost:   -o localrecov  lustre-ost1_2/ost1_2 /mnt/lustre-fs2ost
seq.cli-lustre-OST07c6-super.width=65536
oleg224-server: oleg224-server.virtnet: executing set_default_debug -1 all
pdsh@oleg224-client: oleg224-server: ssh exited with exit code 1
Started lustre-OST07c6
mount lustre on /mnt/lustre.....
Starting client: oleg224-client.virtnet:  -o user_xattr,flock oleg224-server@tcp:/lustre /mnt/lustre
Creating new pool
oleg224-server: Pool lustre.qpool1 created
Adding targets to pool
oleg224-server: OST lustre-OST07c6_UUID added to pool lustre.qpool1
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.709054 s, 28.1 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0381488 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.778128 s, 25.6 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0434827 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.065152 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.656736 s, 30.3 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0384371 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0641123 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.646068 s, 30.8 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0415125 s, 0.0 kB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg224-server: OST lustre-OST07c6_UUID removed from pool lustre.qpool1
oleg224-server: Pool lustre.qpool1 destroyed
umount lustre on /mnt/lustre.....
Stopping client oleg224-client.virtnet /mnt/lustre (opts:)
Stopping /mnt/lustre-fs2ost (opts:-f) on oleg224-server
Stopping /mnt/lustre-fs2mds (opts:-f) on oleg224-server
unloading modules on: 'oleg224-server'
oleg224-server: oleg224-server.virtnet: executing unload_modules_local
modules unloaded.
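Each dd stanza in the log reports how much data landed before the write hit EDQUOT: the "20+0 records in / 19+0 records out" runs wrote 19 full 1 MiB records (19922944 bytes) before the quota triggered, while the "1+0 records in / 0+0 records out" runs were rejected immediately. A minimal, illustrative parser (not part of the conf-sanity suite; the helper name and regex are assumptions) can pull those byte counts out of such a log:

```python
import re

# Hypothetical helper, not from the test framework: match dd summary lines
# such as "19922944 bytes (20 MB) copied, 0.709054 s, 28.1 MB/s".
DD_SUMMARY = re.compile(r"^(\d+) bytes \(.*\) copied")

def bytes_copied(log_lines):
    """Return the byte count from each dd summary line, in log order."""
    counts = []
    for line in log_lines:
        m = DD_SUMMARY.match(line.strip())
        if m:
            counts.append(int(m.group(1)))
    return counts

# Two dd stanzas taken verbatim from the log above.
log = [
    "20+0 records in",
    "19+0 records out",
    "19922944 bytes (20 MB) copied, 0.709054 s, 28.1 MB/s",
    "1+0 records in",
    "0+0 records out",
    "0 bytes (0 B) copied, 0.0381488 s, 0.0 kB/s",
]
print(bytes_copied(log))  # → [19922944, 0]
```

Note that 19922944 = 19 × 1048576, matching the "19+0 records out" count for a 1 MiB block size, so the summary line and the record count are mutually consistent.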