== conf-sanity test 111: Adding large_dir with over 2GB directory ========================================================== 19:00:57 (1713481257)
oleg318-server: debugfs 1.46.2.wc5 (26-Mar-2022)
Supported features: dir_prealloc imagic_inodes has_journal ext_attr resize_inode dir_index sparse_super2 fast_commit stable_inodes filetype needs_recovery journal_dev meta_bg extent 64bit mmp flex_bg ea_inode dirdata metadata_csum_seed large_dir inline_data encrypt casefold sparse_super large_file huge_file uninit_bg dir_nlink extra_isize quota bigalloc metadata_csum read-only project shared_blocks verity
oleg318-server: debugfs 1.46.2.wc5 (26-Mar-2022)
Supported features: dir_prealloc imagic_inodes has_journal ext_attr resize_inode dir_index sparse_super2 fast_commit stable_inodes filetype needs_recovery journal_dev meta_bg extent 64bit mmp flex_bg ea_inode dirdata metadata_csum_seed large_dir inline_data encrypt casefold sparse_super large_file huge_file uninit_bg dir_nlink extra_isize quota bigalloc metadata_csum read-only project shared_blocks verity
umount lustre on /mnt/lustre.....
stop ost1 service on oleg318-server
stop mds service on oleg318-server
stop mds service on oleg318-server
LNET unconfigure error 22: (null)
unloading modules on: 'oleg318-server'
oleg318-server: oleg318-server.virtnet: executing unload_modules_local
oleg318-server: LNET unconfigure error 22: (null)
modules unloaded.
MDT params: --mgs --fsname=lustre --mdt --index=0 --param=sys.timeout=20 --param=mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity --backfstype=ldiskfs --device-size=2400000 --mkfsoptions=\"-O large_dir -i 1048576 -b 4096 -E lazy_itable_init\" --reformat /dev/mapper/mds1_flakey
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg318-server'
oleg318-server: oleg318-server.virtnet: executing load_modules_local
oleg318-server: Loading modules from /home/green/git/lustre-release/lustre
oleg318-server: detected 4 online CPUs by sysfs
oleg318-server: Force libcfs to create 2 CPU partitions
oleg318-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg318-server: quota/lquota options: 'hash_lqs_cur_bits=3'

   Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x65
            (MDT MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

device size = 2500MB
formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey
	target name   lustre:MDT0000
	kilobytes     2400000
	options       -i 1048576 -b 4096 -J size=93 -I 1024 -q -O large_dir,uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -i 1048576 -b 4096 -J size=93 -I 1024 -q -O large_dir,uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F /dev/mapper/mds1_flakey 2400000k
Writing CONFIGS/mountdata
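The duplicated debugfs banner above is the framework probing for large_dir support in the local e2fsprogs build before formatting anything. A minimal sketch of the same probe plus a post-format verification, assuming a debugfs (1.46.2.wc5 here) that understands the supported_features request; the exact invocation used by the test script is not shown in this log:

    # List the ext4 features this e2fsprogs build can handle and look
    # for large_dir among them (no device needs to be open for this).
    debugfs -R "supported_features" 2>/dev/null | grep -w large_dir

    # After mkfs, confirm large_dir is actually set in the MDT superblock.
    dumpe2fs -h /dev/mapper/mds1_flakey 2>/dev/null | grep -i features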
Starting mds1:   -o localrecov  /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg318-server: oleg318-server.virtnet: executing set_default_debug -1 all
pdsh@oleg318-client: oleg318-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
OST params: --mgsnode=oleg318-server@tcp --fsname=lustre --ost --index=0 --param=sys.timeout=20 --backfstype=ldiskfs --device-size=200000 --mkfsoptions=\"-O large_dir -b 4096 -E lazy_itable_init\" --reformat /dev/mapper/ost1_flakey

   Permanent disk data:
Target:     lustre:OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62
            (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.203.118@tcp sys.timeout=20

device size = 4096MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
	target name   lustre:OST0000
	kilobytes     200000
	options       -b 4096 -I 512 -q -O large_dir,uninit_bg,extents,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0000 -b 4096 -I 512 -q -O large_dir,uninit_bg,extents,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
Starting ost1:   -o localrecov  /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=131072
oleg318-server: oleg318-server.virtnet: executing set_default_debug -1 all
pdsh@oleg318-client: oleg318-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
mount lustre on /mnt/lustre.....
Starting client: oleg318-client.virtnet:  -o user_xattr,flock oleg318-server@tcp:/lustre /mnt/lustre
Starting client oleg318-client.virtnet:  -o user_xattr,flock oleg318-server@tcp:/lustre /mnt/lustre
Started clients oleg318-client.virtnet:
192.168.203.118@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
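The "Starting client" lines above are the framework's wrapper around a plain Lustre client mount. An equivalent manual invocation, using the MGS NID, fsname, and options shown in this log:

    # Mount the filesystem "lustre" served by the MGS at 192.168.203.118@tcp,
    # with the same client options the test passes (user_xattr, flock).
    mount -t lustre -o user_xattr,flock 192.168.203.118@tcp:/lustre /mnt/lustre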
mount lustre on /mnt/lustre2.....
Starting client: oleg318-client.virtnet:  -o user_xattr,flock oleg318-server@tcp:/lustre /mnt/lustre2
Starting client oleg318-client.virtnet:  -o user_xattr,flock oleg318-server@tcp:/lustre /mnt/lustre2
Started clients oleg318-client.virtnet:
192.168.203.118@tcp:/lustre on /mnt/lustre2 type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      2280828        1696     2159132   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID       142216        1388      126828   2% /mnt/lustre[OST:0]

filesystem_summary:       142216        1388      126828   2% /mnt/lustre

UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID         2280         272        2008  12% /mnt/lustre[MDT:0]
lustre-OST0000_UUID        50000         302       49698   1% /mnt/lustre[OST:0]

filesystem_summary:         2280         272        2008  12% /mnt/lustre
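The two usage tables above are in the format of lfs df and lfs df -i. The hardlink runs that follow, with their "- link N (time ... total ... last ...)" progress lines and final ops/second totals, match the output of Lustre's createmany test utility; a hedged sketch of an equivalent invocation follows (the flag spelling and both paths are assumptions for illustration, not taken from this log):

    # Create 60000 hard links to one existing file; createmany prints a
    # progress line roughly every 10 seconds and an ops/second total.
    createmany -l /mnt/lustre/largedir/source /mnt/lustre/largedir/link- 60000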
creating 60000 hardlinks to oleg318-client.virtnet-0
creating 60000 hardlinks to oleg318-client.virtnet-1
waiting for PIDs 26706 26720 to complete
 - link 2262 (time 1713481294.57 total 10.00 last 226.11)
 - link 4464 (time 1713481304.57 total 20.01 last 220.17)
 - link 6665 (time 1713481314.58 total 30.01 last 220.07)
 - link 8854 (time 1713481324.58 total 40.01 last 218.86)
 - link 10000 (time 1713481329.87 total 45.30 last 216.70)
 - link 12169 (time 1713481339.87 total 55.30 last 216.90)
 - link 14433 (time 1713481349.87 total 65.30 last 226.36)
 - link 16677 (time 1713481359.87 total 75.30 last 224.34)
 - link 19007 (time 1713481369.87 total 85.30 last 232.96)
 - link 20000 (time 1713481374.14 total 89.57 last 232.71)
 - link 22319 (time 1713481384.14 total 99.57 last 231.89)
 - link 24664 (time 1713481394.14 total 109.57 last 234.50)
 - link 26945 (time 1713481404.14 total 119.57 last 228.06)
 - link 29038 (time 1713481414.15 total 129.58 last 209.20)
 - link 30000 (time 1713481418.35 total 133.78 last 228.82)
 - link 32282 (time 1713481428.35 total 143.78 last 228.13)
 - link 34550 (time 1713481438.36 total 153.79 last 226.72)
 - link 36843 (time 1713481448.36 total 163.79 last 229.27)
 - link 39131 (time 1713481458.36 total 173.79 last 228.70)
 - link 40000 (time 1713481462.19 total 177.62 last 227.26)
 - link 42302 (time 1713481472.19 total 187.62 last 230.12)
 - link 44541 (time 1713481482.19 total 197.62 last 223.89)
 - link 46097 (time 1713481492.20 total 207.63 last 155.53)
 - link 47530 (time 1713481502.20 total 217.63 last 143.27)
 - link 49027 (time 1713481512.20 total 227.63 last 149.67)
 - link 50000 (time 1713481517.00 total 232.43 last 202.54)
 - link 51926 (time 1713481527.01 total 242.44 last 192.54)
 - link 53810 (time 1713481537.01 total 252.44 last 188.37)
 - link 55777 (time 1713481547.01 total 262.44 last 196.64)
 - link 57859 (time 1713481557.02 total 272.45 last 208.12)
 - link 59845 (time 1713481567.02 total 282.45 last 198.54)
total: 60000 link in 283.17 seconds: 211.89 ops/second
 - link 2264 (time 1713481294.49 total 10.00 last 226.39)
 - link 4466 (time 1713481304.49 total 20.00 last 220.19)
 - link 6668 (time 1713481314.49 total 30.00 last 220.16)
 - link 8855 (time 1713481324.49 total 40.00 last 218.69)
 - link 10000 (time 1713481329.79 total 45.31 last 215.88)
 - link 12161 (time 1713481339.79 total 55.31 last 216.08)
 - link 14427 (time 1713481349.80 total 65.31 last 226.55)
 - link 16677 (time 1713481359.80 total 75.31 last 224.98)
 - link 19003 (time 1713481369.80 total 85.31 last 232.58)
 - link 20000 (time 1713481374.08 total 89.59 last 232.83)
 - link 22322 (time 1713481384.08 total 99.60 last 232.10)
 - link 24665 (time 1713481394.09 total 109.60 last 234.27)
 - link 26946 (time 1713481404.09 total 119.60 last 228.08)
 - link 29033 (time 1713481414.09 total 129.60 last 208.63)
 - link 30000 (time 1713481418.28 total 133.80 last 230.69)
 - link 32273 (time 1713481428.28 total 143.80 last 227.22)
 - link 34542 (time 1713481438.29 total 153.80 last 226.89)
 - link 36831 (time 1713481448.29 total 163.80 last 228.90)
 - link 39116 (time 1713481458.29 total 173.80 last 228.43)
 - link 40000 (time 1713481462.20 total 177.71 last 226.02)
 - link 42299 (time 1713481472.20 total 187.72 last 229.86)
 - link 44545 (time 1713481482.21 total 197.72 last 224.52)
 - link 46096 (time 1713481492.21 total 207.72 last 155.09)
 - link 47521 (time 1713481502.21 total 217.72 last 142.47)
 - link 49015 (time 1713481512.21 total 227.72 last 149.38)
 - link 50000 (time 1713481517.10 total 232.61 last 201.52)
 - link 51923 (time 1713481527.10 total 242.61 last 192.26)
 - link 53815 (time 1713481537.10 total 252.62 last 189.16)
 - link 55787 (time 1713481547.10 total 262.62 last 197.14)
 - link 57859 (time 1713481557.11 total 272.62 last 207.10)
 - link 59843 (time 1713481567.11 total 282.63 last 198.36)
total: 60000 link in 283.33 seconds: 211.77 ops/second
estimate 11885s left after 120000 files / 283s
umount lustre on /mnt/lustre2.....
Stopping client oleg318-client.virtnet /mnt/lustre2 (opts:-f)
umount lustre on /mnt/lustre.....
Stopping client oleg318-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg318-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg318-server
stop mds service on oleg318-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg318-server
unloading modules on: 'oleg318-server'
oleg318-server: oleg318-server.virtnet: executing unload_modules_local
modules unloaded.
ETA 11885s after 120000 files / 283s is too long
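The abort decision above is plain linear extrapolation from the measured link rate: 120000 files took 283 s (about 424 files/s), so the projected 11885 s remaining implies roughly 5 million more directory entries would be needed to push the directory past 2 GB. The implied arithmetic, as a sketch:

    # 120000 files / 283 s ≈ 424 files/s; 11885 s more at that rate is
    # the number of additional entries the estimate corresponds to.
    echo $(( 11885 * 120000 / 283 ))   # ≈ 5039575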
e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8
oleg318-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg318-server: Use max possible thread num: 1 instead
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 19)
[Thread 0] jumping to group 0
[Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084
[Thread 0] group 1 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 159 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 160 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 161 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 162 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 163 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 164 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 165 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 166 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 167 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 168 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 169 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 170 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 171 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 172 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 173 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 174 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 175 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 176 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 177 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 178 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 179 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 180 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 181 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 182 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 183 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 184 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 185 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 190 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 191 badness 0 to 2 for 10084
[Thread 0] group 2 finished
[Thread 0] group 3 finished
[Thread 0] group 4 finished
[Thread 0] group 5 finished
[Thread 0] group 6 finished
[Thread 0] group 7 finished
[Thread 0] group 8 finished
[Thread 0] group 9 finished
[Thread 0] group 10 finished
[Thread 0] group 11 finished
[Thread 0] group 12 finished
[Thread 0] group 13 finished
[Thread 0] group 14 finished
[Thread 0] group 15 finished
[Thread 0] group 16 finished
[Thread 0] group 17 finished
[Thread 0] group 18 finished
[Thread 0] group 19 finished
[Thread 0] Pass 1: Memory used: 380k/908k (124k/257k), time: 0.02/ 0.01/ 0.01
[Thread 0] Pass 1: I/O read: 2MB, write: 0MB, rate: 90.97MB/s
[Thread 0] Scanned group range [0, 19), inodes 279
Pass 2: Checking directory structure
Pass 2: Memory used: 380k/0k (85k/296k), time: 0.33/ 0.28/ 0.04
Pass 2: I/O read: 49MB, write: 0MB, rate: 148.43MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 380k/0k (85k/296k), time: 0.36/ 0.30/ 0.05
oleg318-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (51589120, 266) != expected (51597312, 266)
oleg318-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (51589120, 266) != expected (51597312, 266)
oleg318-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (51589120, 266) != expected (51597312, 266)
pdsh@oleg318-client: oleg318-server: ssh exited with exit code 4
Pass 3: Memory used: 380k/0k (83k/298k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 380k/0k (69k/312k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 1MB, write: 0MB, rate: 7462.69MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 380k/0k (68k/313k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 688.23MB/s
Update quota info for quota type 0? no
Update quota info for quota type 1? no
Update quota info for quota type 2? no
lustre-MDT0000: ********** WARNING: Filesystem still has errors **********

         276 inodes used (12.11%, out of 2280)
           3 non-contiguous files (1.1%)
           1 non-contiguous directory (0.4%)
             # of inodes with ind/dind/tind blocks: 1/1/0
       37933 blocks used (6.32%, out of 600000)
           0 bad blocks
           1 large file

         148 regular files
         118 directories
           0 character device files
           0 block device files
           0 fifos
      120000 links
           0 symbolic links (0 fast symbolic links)
           0 sockets
------------
      120264 files
Memory used: 380k/0k (67k/314k), time: 0.37/ 0.30/ 0.05
I/O read: 49MB, write: 0MB, rate: 133.74MB/s
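Because the check ran with -n (answer "no" to every repair prompt), the quota inconsistencies above are reported but left unfixed, which is why the "Filesystem still has errors" warning and the exit code 4 relayed by pdsh are the expected outcome of a read-only pass rather than a failure of the check itself. The portable core of the invocation is sketched below; the -m8 thread-count option is an extension in Whamcloud-patched e2fsprogs, and this 1.46.2.wc5 build falls back to a single thread as logged above:

    # Force a full read-only check of the MDT backing device; -f forces
    # the check, -n reports errors without modifying anything on disk.
    e2fsck -f -n /dev/mapper/mds1_flakey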
pdsh@oleg318-client: oleg318-client: ssh exited with exit code 2
pdsh@oleg318-client: oleg318-server: ssh exited with exit code 2
pdsh@oleg318-client: oleg318-client: ssh exited with exit code 2
pdsh@oleg318-client: oleg318-server: ssh exited with exit code 2
Stopping clients: oleg318-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg318-client.virtnet /mnt/lustre2 (opts:-f)
pdsh@oleg318-client: oleg318-server: ssh exited with exit code 2
oleg318-server: oleg318-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_refcnt: could not read integer from '/sys/module/acpi_cpufreq/refcnt': 'No such device'
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg318-server'
oleg318-server: oleg318-server.virtnet: executing load_modules_local
oleg318-server: Loading modules from /home/green/git/lustre-release/lustre
oleg318-server: detected 4 online CPUs by sysfs
oleg318-server: Force libcfs to create 2 CPU partitions
oleg318-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg318-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey