-----============= acceptance-small: conf-sanity ============----- Wed Apr 17 17:01:46 EDT 2024
excepting tests: 32b 32c 32newtarball 110
Stopping clients: oleg209-client.virtnet /mnt/lustre (opts:-f)
Stopping client oleg209-client.virtnet /mnt/lustre opts:-f
Stopping clients: oleg209-client.virtnet /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg209-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg209-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg209-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg209-server
oleg209-server: oleg209-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg209-server'
oleg209-server: oleg209-server.virtnet: executing load_modules_local
oleg209-server: Loading modules from /home/green/git/lustre-release/lustre
oleg209-server: detected 4 online CPUs by sysfs
oleg209-server: Force libcfs to create 2 CPU partitions
oleg209-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
start mds service on oleg209-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on oleg209-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
oleg209-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg209-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg209-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
oleg209-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg209-server: oleg209-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
oleg209-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg209-server: oleg209-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
oleg209-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
stop ost1 service on oleg209-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg209-server
stop mds service on oleg209-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg209-server
stop mds service on oleg209-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg209-server
== conf-sanity test 45: long unlink handling in ptlrpcd == 17:03:04 (1713387784)
start mds service on oleg209-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg209-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg209-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg209-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg209-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Started lustre-OST0000
oleg209-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg209-client.virtnet: -o user_xattr,flock oleg209-server@tcp:/lustre /mnt/lustre
setup single mount lustre success
stop mds service on oleg209-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg209-server
stop mds service on oleg209-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg209-server
sleep 60 sec
fail_loc=0x8000050f
sleep 10 sec
manual umount lustre on /mnt/lustre....
df: '/mnt/lustre': Cannot send after transport endpoint shutdown
fail_loc=0x0
start mds service on oleg209-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg209-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg209-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg209-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg209-client.virtnet: -o user_xattr,flock oleg209-server@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client oleg209-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg209-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg209-server
stop mds service on oleg209-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg209-server
stop mds service on oleg209-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg209-server
LNET ready to unload
unloading modules on: 'oleg209-server'
oleg209-server: oleg209-server.virtnet: executing unload_modules_local
oleg209-server: LNET ready to unload
modules unloaded.
pdsh@oleg209-client: oleg209-client: ssh exited with exit code 2
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 2
pdsh@oleg209-client: oleg209-client: ssh exited with exit code 2
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 2
PASS 45 (163s)
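Test 45 above exercises Lustre's fail_loc fault-injection hooks: the value 0x8000050f ORs the one-shot flag 0x80000000 with fault site 0x50f, so the fault fires once and then clears (the specific fault name behind 0x50f is not shown in this log). A minimal sketch of the same pattern from a shell, assuming lctl in PATH and a test mount at /mnt/lustre:

    # Arm a one-shot fault (0x80000000 = fire once, 0x50f = fault site)
    lctl set_param fail_loc=0x8000050f
    # Trigger the code path under test, e.g. an unmount that must issue
    # unlink RPCs while the MDS is stopped, as the test does above
    umount /mnt/lustre
    # Disarm explicitly, matching the "fail_loc=0x0" line in the log
    lctl set_param fail_loc=0x0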
== conf-sanity test 69: replace an OST with the same index ========================================================== 17:05:47 (1713387947)
start mds service on oleg209-server
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg209-server'
oleg209-server: oleg209-server.virtnet: executing load_modules_local
oleg209-server: Loading modules from /home/green/git/lustre-release/lustre
oleg209-server: detected 4 online CPUs by sysfs
oleg209-server: Force libcfs to create 2 CPU partitions
oleg209-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg209-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg209-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg209-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg209-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg209-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Started lustre-OST0000
oleg209-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg209-client.virtnet: -o user_xattr,flock oleg209-server@tcp:/lustre /mnt/lustre
On OST0, 49538 inodes available. Want 99872.
rc=0
- open/close 3722 (time 1713387992.69 total 10.00 last 372.15)
- open/close 8552 (time 1713388002.69 total 20.00 last 482.99)
total: 10000 open/close in 23.25 seconds: 430.12 ops/second
- unlinked 0 (time 1713388006 ; total 0 ; last 0)
total: 10000 unlinks in 13 seconds: 769.230774 unlinks/second
- open/close 3347 (time 1713388030.42 total 10.00 last 334.69)
- open/close 6232 (time 1713388040.42 total 20.00 last 288.44)
- open/close 9048 (time 1713388050.42 total 30.00 last 281.58)
total: 10000 open/close in 33.38 seconds: 299.56 ops/second
- unlinked 0 (time 1713388054 ; total 0 ; last 0)
total: 10000 unlinks in 12 seconds: 833.333313 unlinks/second
- open/close 3207 (time 1713388078.16 total 10.00 last 320.70)
- open/close 8233 (time 1713388088.16 total 20.00 last 502.58)
total: 10000 open/close in 22.88 seconds: 437.05 ops/second
- unlinked 0 (time 1713388091 ; total 0 ; last 0)
total: 10000 unlinks in 8 seconds: 1250.000000 unlinks/second
- open/close 6134 (time 1713388110.20 total 10.00 last 613.39)
total: 10000 open/close in 16.31 seconds: 613.00 ops/second
- unlinked 0 (time 1713388117 ; total 0 ; last 0)
total: 10000 unlinks in 7 seconds: 1428.571411 unlinks/second
- open/close 6049 (time 1713388135.51 total 10.00 last 604.84)
total: 10000 open/close in 16.60 seconds: 602.50 ops/second
- unlinked 0 (time 1713388142 ; total 0 ; last 0)
total: 10000 unlinks in 8 seconds: 1250.000000 unlinks/second
- open/close 5707 (time 1713388162.12 total 10.00 last 570.63)
total: 10000 open/close in 17.05 seconds: 586.60 ops/second
- unlinked 0 (time 1713388169 ; total 0 ; last 0)
total: 10000 unlinks in 8 seconds: 1250.000000 unlinks/second
- open/close 2942 (time 1713388188.29 total 10.00 last 294.11)
- open/close 5994 (time 1713388198.29 total 20.00 last 305.15)
- open/close 8872 (time 1713388208.30 total 30.01 last 287.72)
total: 10000 open/close in 34.13 seconds: 293.00 ops/second
- unlinked 0 (time 1713388213 ; total 0 ; last 0)
total: 10000 unlinks in 14 seconds: 714.285706 unlinks/second
- open/close 2788 (time 1713388238.98 total 10.00 last 278.74)
- open/close 5546 (time 1713388248.98 total 20.00 last 275.79)
- open/close 8452 (time 1713388258.98 total 30.00 last 290.58)
total: 10000 open/close in 35.15 seconds: 284.53 ops/second
- unlinked 0 (time 1713388265 ; total 0 ; last 0)
total: 10000 unlinks in 10 seconds: 1000.000000 unlinks/second
- open/close 3261 (time 1713388286.08 total 10.00 last 325.99)
- open/close 6260 (time 1713388296.08 total 20.01 last 299.81)
- open/close 9111 (time 1713388306.09 total 30.01 last 285.03)
total: 10000 open/close in 33.05 seconds: 302.58 ops/second
- unlinked 0 (time 1713388310 ; total 0 ; last 0)
total: 10000 unlinks in 14 seconds: 714.285706 unlinks/second
- open/close 3042 (time 1713388335.83 total 10.00 last 304.20)
- open/close 5855 (time 1713388345.83 total 20.00 last 281.29)
- open/close 8597 (time 1713388355.83 total 30.00 last 274.10)
total: 10000 open/close in 35.15 seconds: 284.53 ops/second
- unlinked 0 (time 1713388362 ; total 0 ; last 0)
total: 10000 unlinks in 13 seconds: 769.230774 unlinks/second
rm: missing operand
Try 'rm --help' for more information.
umount lustre on /mnt/lustre.....
Stopping client oleg209-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg209-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg209-server
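The open/close and unlink rates above vary between roughly 285 and 613 ops/second across the ten createmany/unlinkmany passes. A quick way to average the "total:" lines from a saved copy of this log (the file name is an assumption):

    # Average the throughput lines printed by createmany/unlinkmany;
    # $7 is the rate field in "total: N ... in S seconds: RATE ..."
    awk '/^total: .* open\/close in/ { oc += $7; no++ }
         /^total: .* unlinks in/     { ul += $7; nu++ }
         END { if (no) printf "open/close: %d runs, avg %.2f ops/s\n", no, oc/no
               if (nu) printf "unlinks:    %d runs, avg %.2f ops/s\n", nu, ul/nu }' conf-sanity.log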
Permanent disk data:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x42 (OST update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.202.109@tcp sys.timeout=20

device size = 4096MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
    target name   lustre-OST0000
    kilobytes     200000
    options       -I 512 -q -O extents,uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre-OST0000 -I 512 -q -O extents,uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
start ost1 service on oleg209-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Started lustre-OST0000
oleg209-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg209-server: oleg209-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
oleg209-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec
oleg209-server: oleg209-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
oleg209-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
mount lustre on /mnt/lustre.....
Starting client: oleg209-client.virtnet: -o user_xattr,flock oleg209-server@tcp:/lustre /mnt/lustre
On OST0, 10429 used inodes
rc=0
umount lustre on /mnt/lustre.....
Stopping client oleg209-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg209-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg209-server
stop mds service on oleg209-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg209-server
stop mds service on oleg209-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg209-server
LNET ready to unload
unloading modules on: 'oleg209-server'
oleg209-server: oleg209-server.virtnet: executing unload_modules_local
oleg209-server: LNET ready to unload
modules unloaded.
pdsh@oleg209-client: oleg209-client: ssh exited with exit code 2
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 2
pdsh@oleg209-client: oleg209-client: ssh exited with exit code 2
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 2
PASS 69 (472s)
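Test 69's subject is reusing OST index 0 after a reformat: note that the reformatted target above carries Flags: 0x42 (OST update) rather than first_time, i.e. it rejoins the existing MGS configuration under its old index. A hedged sketch of the equivalent manual replacement, with device, NID, and fsname taken from this log (mkfs.lustre's --replace option is the documented way to reuse an index; whether the test uses exactly this form is not visible here):

    # Reformat a replacement OST that takes over the existing index 0
    mkfs.lustre --ost --reformat --replace --index=0 \
        --fsname=lustre --mgsnode=192.168.202.109@tcp \
        /dev/mapper/ost1_flakey
    # Mount it; it re-registers with the MGS under the old index
    mount -t lustre /dev/mapper/ost1_flakey /mnt/lustre-ost1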
== conf-sanity test 111: Adding large_dir with over 2GB directory ========================================================== 17:13:39 (1713388419)
oleg209-server: debugfs 1.46.2.wc5 (26-Mar-2022)
Supported features: dir_prealloc imagic_inodes has_journal ext_attr resize_inode dir_index sparse_super2 fast_commit stable_inodes filetype needs_recovery journal_dev meta_bg extent 64bit mmp flex_bg ea_inode dirdata metadata_csum_seed large_dir inline_data encrypt casefold sparse_super large_file huge_file uninit_bg dir_nlink extra_isize quota bigalloc metadata_csum read-only project shared_blocks verity
oleg209-server: debugfs 1.46.2.wc5 (26-Mar-2022)
Supported features: dir_prealloc imagic_inodes has_journal ext_attr resize_inode dir_index sparse_super2 fast_commit stable_inodes filetype needs_recovery journal_dev meta_bg extent 64bit mmp flex_bg ea_inode dirdata metadata_csum_seed large_dir inline_data encrypt casefold sparse_super large_file huge_file uninit_bg dir_nlink extra_isize quota bigalloc metadata_csum read-only project shared_blocks verity
umount lustre on /mnt/lustre.....
stop ost1 service on oleg209-server
stop mds service on oleg209-server
stop mds service on oleg209-server
unloading modules on: 'oleg209-server'
oleg209-server: oleg209-server.virtnet: executing unload_modules_local
modules unloaded.
MDT params: --mgs --fsname=lustre --mdt --index=0 --param=sys.timeout=20 --param=mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity --backfstype=ldiskfs --device-size=2400000 --mkfsoptions=\"-O ea_inode,large_dir -E lazy_itable_init\" --reformat /dev/mapper/mds1_flakey
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg209-server'
oleg209-server: oleg209-server.virtnet: executing load_modules_local
oleg209-server: Loading modules from /home/green/git/lustre-release/lustre
oleg209-server: detected 4 online CPUs by sysfs
oleg209-server: Force libcfs to create 2 CPU partitions
oleg209-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg209-server: quota/lquota options: 'hash_lqs_cur_bits=3'

Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x65 (MDT MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

device size = 2500MB
formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey
    target name   lustre:MDT0000
    kilobytes     2400000
    options       -J size=93 -I 1024 -i 2560 -q -O ea_inode,large_dir,dirdata,uninit_bg,^extents,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -J size=93 -I 1024 -i 2560 -q -O ea_inode,large_dir,dirdata,uninit_bg,^extents,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F /dev/mapper/mds1_flakey 2400000k
Writing CONFIGS/mountdata
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
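The MDT above was formatted with --mkfsoptions "-O ea_inode,large_dir". If the result needs verifying by hand, the enabled feature set can be read back from the superblock with stock e2fsprogs; a sketch using the device path from this run:

    # Confirm large_dir/ea_inode landed on the backing ldiskfs
    dumpe2fs -h /dev/mapper/mds1_flakey 2>/dev/null | grep -i 'filesystem features'
    # Expect ea_inode and large_dir in the list, matching the mke2fs -O set above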
OST params: --mgsnode=oleg209-server@tcp --fsname=lustre --ost --index=0 --param=sys.timeout=20 --backfstype=ldiskfs --device-size=200000 --mkfsoptions=\"-O large_dir -E lazy_itable_init\" --reformat /dev/mapper/ost1_flakey

Permanent disk data:
Target:     lustre:OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62 (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.202.109@tcp sys.timeout=20

device size = 4096MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
    target name   lustre:OST0000
    kilobytes     200000
    options       -I 512 -q -O large_dir,extents,uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0000 -I 512 -q -O large_dir,extents,uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg209-server: oleg209-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
mount lustre on /mnt/lustre.....
Starting client: oleg209-client.virtnet: -o user_xattr,flock oleg209-server@tcp:/lustre /mnt/lustre
Starting client oleg209-client.virtnet: -o user_xattr,flock oleg209-server@tcp:/lustre /mnt/lustre
Started clients oleg209-client.virtnet: 192.168.202.109@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
mount lustre on /mnt/lustre2.....
Starting client: oleg209-client.virtnet: -o user_xattr,flock oleg209-server@tcp:/lustre /mnt/lustre2
Starting client oleg209-client.virtnet: -o user_xattr,flock oleg209-server@tcp:/lustre /mnt/lustre2
Started clients oleg209-client.virtnet: 192.168.202.109@tcp:/lustre on /mnt/lustre2 type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
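The two tables below are the client's view of the filesystem, first in 1K blocks and then in inodes. On a live mount they correspond to these commands (sketch):

    lfs df /mnt/lustre      # per-target and summary space usage
    lfs df -i /mnt/lustre   # per-target and summary inode usage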
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1326500        1696     1206392   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID       142216        1252      126964   1% /mnt/lustre[OST:0]
filesystem_summary:       142216        1252      126964   1% /mnt/lustre

UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID       960000         272      959728    1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID        50000         268       49732    1% /mnt/lustre[OST:0]
filesystem_summary:        50004         272       49732    1% /mnt/lustre

creating 60000 hardlinks to oleg209-client.virtnet-0
creating 60000 hardlinks to oleg209-client.virtnet-1
waiting for PIDs 27205 27219 to complete
- link 2597 (time 1713388471.08 total 10.00 last 259.63)
- link 5363 (time 1713388481.08 total 20.00 last 276.56)
- link 8036 (time 1713388491.08 total 30.01 last 267.28)
- link 10000 (time 1713388498.04 total 36.97 last 282.14)
- link 12694 (time 1713388508.05 total 46.97 last 269.33)
- link 15312 (time 1713388518.05 total 56.97 last 261.80)
- link 17935 (time 1713388528.05 total 66.97 last 262.23)
- link 20000 (time 1713388535.19 total 74.11 last 289.37)
- link 22562 (time 1713388545.19 total 84.11 last 256.18)
- link 25204 (time 1713388555.19 total 94.11 last 264.16)
- link 27952 (time 1713388565.19 total 104.11 last 274.71)
- link 30000 (time 1713388571.71 total 110.63 last 314.32)
- link 32915 (time 1713388581.71 total 120.63 last 291.40)
- link 35706 (time 1713388591.71 total 130.63 last 279.07)
- link 38325 (time 1713388601.71 total 140.63 last 261.89)
- link 40000 (time 1713388608.22 total 147.14 last 257.34)
- link 42566 (time 1713388618.22 total 157.14 last 256.59)
- link 45112 (time 1713388628.22 total 167.15 last 254.55)
- link 47770 (time 1713388638.23 total 177.15 last 265.73)
- link 50000 (time 1713388646.83 total 185.75 last 259.30)
- link 52543 (time 1713388656.83 total 195.75 last 254.23)
- link 55321 (time 1713388666.83 total 205.75 last 277.70)
- link 57942 (time 1713388676.83 total 215.76 last 262.09)
total: 60000 link in 223.96 seconds: 267.90 ops/second
- link 2594 (time 1713388471.24 total 10.00 last 259.33)
- link 5374 (time 1713388481.24 total 20.00 last 277.97)
- link 8034 (time 1713388491.24 total 30.01 last 265.96)
- link 10000 (time 1713388498.19 total 36.96 last 282.61)
- link 12678 (time 1713388508.20 total 46.96 last 267.75)
- link 15298 (time 1713388518.20 total 56.96 last 261.98)
- link 17912 (time 1713388528.20 total 66.97 last 261.35)
- link 20000 (time 1713388535.46 total 74.22 last 287.67)
- link 22564 (time 1713388545.46 total 84.23 last 256.39)
- link 25215 (time 1713388555.46 total 94.23 last 265.01)
- link 27968 (time 1713388565.46 total 104.23 last 275.25)
- link 30000 (time 1713388571.83 total 110.59 last 319.36)
- link 32889 (time 1713388581.83 total 120.60 last 288.77)
- link 35669 (time 1713388591.83 total 130.60 last 277.92)
- link 38283 (time 1713388601.84 total 140.60 last 261.33)
- link 40000 (time 1713388608.57 total 147.33 last 255.05)
- link 42555 (time 1713388618.57 total 157.34 last 255.47)
- link 45102 (time 1713388628.57 total 167.34 last 254.62)
- link 47799 (time 1713388638.57 total 177.34 last 269.63)
- link 50000 (time 1713388647.15 total 185.92 last 256.53)
- link 52532 (time 1713388657.15 total 195.92 last 253.18)
- link 55316 (time 1713388667.16 total 205.93 last 278.30)
- link 57925 (time 1713388677.16 total 215.93 last 260.86)
total: 60000 link in 224.20 seconds: 267.62 ops/second
estimate 9435s left after 120000 files / 225s
umount lustre on /mnt/lustre2.....
Stopping client oleg209-client.virtnet /mnt/lustre2 (opts:-f)
umount lustre on /mnt/lustre.....
Stopping client oleg209-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg209-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg209-server
stop mds service on oleg209-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg209-server
LNET ready to unload
unloading modules on: 'oleg209-server'
oleg209-server: oleg209-server.virtnet: executing unload_modules_local
oleg209-server: LNET ready to unload
modules unloaded.
ETA 9435s after 120000 files / 225s is too long
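The run stops early once the projected completion time becomes unreasonable, then checks the MDT read-only. The e2fsck invocation below never modifies the device; an annotated restatement (flag meanings per e2fsck(8), with -m coming from the Whamcloud-patched e2fsprogs shown in the version banner):

    e2fsck -d -v -t -t -f -n -m8 /dev/mapper/mds1_flakey
    # -n  open read-only and answer "no" to every repair prompt
    # -f  force a full check even if the filesystem looks clean
    # -v  verbose summary; -t -t adds per-pass timing; -d debug output
    # -m8 request up to 8 pfsck threads (capped to 1 on this device)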
e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8
oleg209-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg209-server: Use max possible thread num: 1 instead
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 30)
[Thread 0] jumping to group 0
[Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 186 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 187 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 188 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 189 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 190 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 191 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 192 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 193 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 194 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 195 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 196 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 197 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 198 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 199 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 200 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 201 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 202 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 203 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 204 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 205 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 206 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 207 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 208 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 209 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 210 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 211 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 212 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 213 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 214 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 215 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 216 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 217 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 218 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 219 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 220 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 221 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 223 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 224 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 225 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 226 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 227 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 228 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 229 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 230 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 231 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 232 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 233 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 234 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 235 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 236 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 237 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 238 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 239 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 240 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 241 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 242 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 243 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 244 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 245 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 246 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 247 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 248 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 249 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 250 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 251 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 252 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 253 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 254 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 255 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 256 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 257 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 258 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 259 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 260 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 261 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 262 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 263 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 264 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 265 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 266 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 267 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 268 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 269 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 270 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 271 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 276 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 277 badness 0 to 2 for 10084
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] group 3 finished
[Thread 0] group 4 finished
[Thread 0] group 5 finished
[Thread 0] group 6 finished
[Thread 0] group 7 finished
[Thread 0] group 8 finished
[Thread 0] group 9 finished
[Thread 0] group 10 finished
[Thread 0] group 11 finished
[Thread 0] group 12 finished
[Thread 0] group 13 finished
[Thread 0] group 14 finished
[Thread 0] group 15 finished
[Thread 0] group 16 finished
[Thread 0] group 17 finished
[Thread 0] group 18 finished
[Thread 0] group 19 finished
[Thread 0] group 20 finished
[Thread 0] group 21 finished
[Thread 0] group 22 finished
[Thread 0] group 23 finished
[Thread 0] group 24 finished
[Thread 0] group 25 finished
[Thread 0] group 26 finished
[Thread 0] group 27 finished
[Thread 0] group 28 finished
[Thread 0] group 29 finished
[Thread 0] group 30 finished
[Thread 0] Pass 1: Memory used: 376k/1216k (121k/256k), time: 0.02/ 0.01/ 0.00
[Thread 0] Pass 1: I/O read: 2MB, write: 0MB, rate: 122.10MB/s
oleg209-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (51662848, 266) != expected (51671040, 266)
oleg209-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (51662848, 266) != expected (51671040, 266)
oleg209-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (51662848, 266) != expected (51671040, 266)
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 4
[Thread 0] Scanned group range [0, 30), inodes 277
Pass 2: Checking directory structure
Pass 2: Memory used: 376k/304k (81k/296k), time: 0.35/ 0.28/ 0.07
Pass 2: I/O read: 49MB, write: 0MB, rate: 141.33MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 376k/304k (81k/296k), time: 0.43/ 0.34/ 0.08
Pass 3: Memory used: 376k/304k (80k/297k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 376k/0k (69k/308k), time: 0.02/ 0.02/ 0.00
Pass 4: I/O read: 1MB, write: 0MB, rate: 51.05MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 376k/0k (68k/309k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 445.63MB/s
Update quota info for quota type 0? no
Update quota info for quota type 1? no
Update quota info for quota type 2? no
lustre-MDT0000: ********** WARNING: Filesystem still has errors **********

   276 inodes used (0.03%, out of 960000)
     3 non-contiguous files (1.1%)
     1 non-contiguous directory (0.4%)
       # of inodes with ind/dind/tind blocks: 1/1/0
278411 blocks used (46.40%, out of 600000)
     0 bad blocks
     1 large file
   148 regular files
   118 directories
     0 character device files
     0 block device files
     0 fifos
120000 links
     0 symbolic links (0 fast symbolic links)
     0 sockets
------------
120264 files
Memory used: 376k/0k (67k/310k), time: 0.45/ 0.36/ 0.08
I/O read: 49MB, write: 0MB, rate: 108.56MB/s
pdsh@oleg209-client: oleg209-client: ssh exited with exit code 2
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 2
pdsh@oleg209-client: oleg209-client: ssh exited with exit code 2
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 2
Stopping clients: oleg209-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg209-client.virtnet /mnt/lustre2 (opts:-f)
pdsh@oleg209-client: oleg209-server: ssh exited with exit code 2
oleg209-server: oleg209-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg209-server'
oleg209-server: oleg209-server.virtnet: executing load_modules_local
oleg209-server: Loading modules from /home/green/git/lustre-release/lustre
oleg209-server: detected 4 online CPUs by sysfs
oleg209-server: Force libcfs to create 2 CPU partitions
oleg209-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg209-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
PASS 111 (327s)
Stopping clients: oleg209-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg209-client.virtnet /mnt/lustre2 (opts:-f)
oleg209-server: oleg209-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg209-server'
oleg209-server: oleg209-server.virtnet: executing load_modules_local
oleg209-server: Loading modules from /home/green/git/lustre-release/lustre
oleg209-server: detected 4 online CPUs by sysfs
oleg209-server: Force libcfs to create 2 CPU partitions
oleg209-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
== conf-sanity test complete, duration 1058 sec ========== 17:19:25 (1713388765)
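To reproduce a subset of this run, the Lustre test framework honors ONLY/EXCEPT-style environment variables (the "excepting tests:" header above reflects such a list); a hedged sketch from the source tree used here:

    cd /home/green/git/lustre-release/lustre/tests
    # Re-run just the three subtests exercised above
    ONLY="45 69 111" sh conf-sanity.sh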