== conf-sanity test 33c: Mount ost with a large index number ========================================================== 20:18:31 (1713485911)
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg414-server'
oleg414-server: oleg414-server.virtnet: executing load_modules_local
oleg414-server: Loading modules from /home/green/git/lustre-release/lustre
oleg414-server: detected 4 online CPUs by sysfs
oleg414-server: Force libcfs to create 2 CPU partitions
oleg414-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
oleg414-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg414-server: quota/lquota options: 'hash_lqs_cur_bits=3'
pdsh@oleg414-client: oleg414-server: ssh exited with exit code 1

   Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x65
            (MDT MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.204.114@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

formatting backing filesystem ldiskfs on /dev/loop0
	target name   lustre:MDT0000
	kilobytes     200000
	options       -J size=8 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -J size=8 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F /dev/loop0 200000k
Writing CONFIGS/mountdata
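An aside on the mkfs options above: `mke2fs -i 2560` sets the bytes-per-inode ratio, so the 200000 KB MDT device formatted here gets roughly one inode per 2560 bytes of space. A quick sanity check of that arithmetic (a sketch, not part of the test suite):

```shell
# mke2fs's -i N allocates approximately one inode per N bytes of device.
# For the 200000 KB MDT above, formatted with -i 2560:
device_bytes=$((200000 * 1024))
bytes_per_inode=2560
echo $((device_bytes / bytes_per_inode))   # → 80000 (approximate inode count)
```

The small ratio is deliberate: an MDT stores metadata only, so it needs far more inodes per byte than a general-purpose filesystem.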
pdsh@oleg414-client: oleg414-server: ssh exited with exit code 1

   Permanent disk data:
Target:     lustre:OST07c6
Index:      1990
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62
            (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: sys.timeout=20 mgsnode=192.168.204.114@tcp:192.168.204.114@tcp

formatting backing filesystem ldiskfs on /dev/loop0
	target name   lustre:OST07c6
	kilobytes     200000
	options       -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST07c6 -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F /dev/loop0 200000k
Writing CONFIGS/mountdata
pdsh@oleg414-client: oleg414-server: ssh exited with exit code 1
pdsh@oleg414-client: oleg414-server: ssh exited with exit code 1
Starting fs2mds: -o localrecov /dev/mapper/fs2mds_flakey /mnt/lustre-fs2mds
oleg414-server: oleg414-server.virtnet: executing set_default_debug -1 all
pdsh@oleg414-client: oleg414-server: ssh exited with exit code 1
Commit the device label on /tmp/lustre-mdt1_2
Started lustre-MDT0000
pdsh@oleg414-client: oleg414-server: ssh exited with exit code 1
pdsh@oleg414-client: oleg414-server: ssh exited with exit code 1
Starting fs2ost: -o localrecov /dev/mapper/fs2ost_flakey /mnt/lustre-fs2ost
seq.cli-lustre-OST07c6-super.width=65536
oleg414-server: oleg414-server.virtnet: executing set_default_debug -1 all
pdsh@oleg414-client: oleg414-server: ssh exited with exit code 1
Started lustre-OST07c6
mount lustre on /mnt/lustre.....
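The target name `lustre:OST07c6` follows directly from the large index this test exercises: Lustre encodes the target index as four lowercase hexadecimal digits, so index 1990 becomes `07c6`. A one-liner to confirm the mapping (a sketch for illustration, not part of the test):

```shell
# Lustre target names embed the index as 4 zero-padded hex digits.
index=1990
printf 'lustre:OST%04x\n' "$index"   # → lustre:OST07c6
```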
Starting client: oleg414-client.virtnet: -o user_xattr,flock oleg414-server@tcp:/lustre /mnt/lustre
Creating new pool
oleg414-server: Pool lustre.qpool1 created
Adding targets to pool
oleg414-server: OST lustre-OST07c6_UUID added to pool lustre.qpool1
Waiting 90s for 'lustre-OST07c6_UUID '
Updated after 2s: want 'lustre-OST07c6_UUID ' got 'lustre-OST07c6_UUID '
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.542995 s, 36.7 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0274641 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.508184 s, 39.2 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0257684 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.477779 s, 41.7 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0278904 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.48421 s, 41.1 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.465955 s, 42.8 MB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0264222 s, 0.0 kB/s
running as uid/gid/euid/egid 60000/60000/60000/60000, groups:
 [dd] [if=/dev/zero] [of=/mnt/lustre/d33c.conf-sanity/f1] [bs=1M] [count=30] [oflag=direct]
dd: error writing '/mnt/lustre/d33c.conf-sanity/f1': Disk quota exceeded
20+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 0.496091 s, 40.2 MB/s
Destroy the created pools: qpool1
lustre.qpool1
oleg414-server: OST lustre-OST07c6_UUID removed from pool lustre.qpool1
oleg414-server: Pool lustre.qpool1 destroyed
umount lustre on /mnt/lustre.....
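The dd output above is worth decoding: each run requests 30 MiB (`bs=1M count=30`) but stops with `Disk quota exceeded` after 19 complete records, since 19922944 bytes is exactly 19 MiB; the runs that report `0 bytes` hit the already-exhausted quota on their first write. A quick check of that arithmetic (a sketch; the ~20 MB pool-quota limit is inferred from the output, not stated in the log):

```shell
# 19922944 bytes out of a requested 30 MiB: how many full 1 MiB
# records made it to disk before EDQUOT?
bytes_out=19922944
echo $((bytes_out / 1024 / 1024))   # → 19
```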
Stopping client oleg414-client.virtnet /mnt/lustre (opts:)
Stopping /mnt/lustre-fs2ost (opts:-f) on oleg414-server
Stopping /mnt/lustre-fs2mds (opts:-f) on oleg414-server
unloading modules on: 'oleg414-server'
oleg414-server: oleg414-server.virtnet: executing unload_modules_local
oleg414-server: [  324.948066] LustreError: 995:0:(class_obd.c:883:obdclass_exit()) obd_memory max: 241569483, leaked: 88
oleg414-server:
oleg414-server: mv: cannot stat '/tmp/debug': No such file or directory
oleg414-server: Memory leaks detected
pdsh@oleg414-client: oleg414-server: ssh exited with exit code 254
modules unloaded.