-----============= acceptance-small: conf-sanity ============----- Wed Apr 17 04:44:13 EDT 2024
excepting tests: 32b 32c 32newtarball 110
Stopping clients: oleg237-client.virtnet /mnt/lustre (opts:-f)
Stopping client oleg237-client.virtnet /mnt/lustre opts:-f
Stopping clients: oleg237-client.virtnet /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg237-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg237-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg237-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg237-server
oleg237-server: oleg237-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
loading modules on: 'oleg237-server'
oleg237-server: oleg237-server.virtnet: executing load_modules_local
oleg237-server: Loading modules from /home/green/git/lustre-release/lustre
oleg237-server: detected 4 online CPUs by sysfs
oleg237-server: Force libcfs to create 2 CPU partitions
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
start mds service on oleg237-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on oleg237-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
oleg237-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg237-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg237-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
oleg237-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg237-server: oleg237-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
oleg237-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg237-server: oleg237-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
oleg237-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
stop ost1 service on oleg237-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg237-server
stop mds service on oleg237-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg237-server
stop mds service on oleg237-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg237-server

== conf-sanity test 45: long unlink handling in ptlrpcd == 04:45:38 (1713343538)
start mds service on oleg237-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg237-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg237-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg237-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg237-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Started lustre-OST0000
oleg237-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg237-client.virtnet: -o user_xattr,flock oleg237-server@tcp:/lustre /mnt/lustre
setup single mount lustre success
stop mds service on oleg237-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg237-server
stop mds service on oleg237-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg237-server
sleep 60 sec
fail_loc=0x8000050f
sleep 10 sec
manual umount lustre on /mnt/lustre....
df: '/mnt/lustre': Cannot send after transport endpoint shutdown
fail_loc=0x0
start mds service on oleg237-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg237-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg237-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg237-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg237-client.virtnet: -o user_xattr,flock oleg237-server@tcp:/lustre /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client oleg237-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg237-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg237-server
stop mds service on oleg237-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg237-server
stop mds service on oleg237-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg237-server
LNET ready to unload
unloading modules on: 'oleg237-server'
oleg237-server: oleg237-server.virtnet: executing unload_modules_local
oleg237-server: LNET ready to unload
modules unloaded.
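
Test 45 above stops both MDTs, arms fail_loc=0x8000050f on the client so the in-flight unlink RPCs stall inside ptlrpcd, and then checks that a forced umount still completes; the df "Cannot send after transport endpoint shutdown" error is expected while the MDTs are down. fail_loc masks are set through lctl; a minimal sketch of the arm/disarm sequence (the mask value is taken from this log; 0x80000000 is the usual one-shot bit):

    lctl set_param fail_loc=0x8000050f    # arm: one-shot bit (0x80000000) | test-specific fault 0x50f
    lctl set_param fail_loc=0x0           # disarm once the umount has completed
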
pdsh@oleg237-client: oleg237-client: ssh exited with exit code 2
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 2
pdsh@oleg237-client: oleg237-client: ssh exited with exit code 2
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 2
PASS 45 (160s)

== conf-sanity test 69: replace an OST with the same index ========================================================== 04:48:18 (1713343698)
start mds service on oleg237-server
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg237-server'
oleg237-server: oleg237-server.virtnet: executing load_modules_local
oleg237-server: Loading modules from /home/green/git/lustre-release/lustre
oleg237-server: detected 4 online CPUs by sysfs
oleg237-server: Force libcfs to create 2 CPU partitions
oleg237-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg237-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg237-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg237-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg237-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg237-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Started lustre-OST0000
oleg237-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg237-client.virtnet: -o user_xattr,flock oleg237-server@tcp:/lustre /mnt/lustre
On OST0, 49538 inodes available. Want 99872.
rc=0
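
The open/close and unlink bursts that follow are the object churn for test 69: the suite repeatedly creates and removes batches of 10000 files so that the OST's object sequence advances well past the inode count reported above, before the OST is reformatted with the same index. A sketch of one such iteration using the createmany/unlinkmany helpers shipped in lustre/tests (the directory name here is illustrative):

    createmany -o /mnt/lustre/d69/f 10000    # open/create + close 10000 files
    unlinkmany /mnt/lustre/d69/f 10000       # unlink the same 10000 files
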
- open/close 5461 (time 1713343741.52 total 10.00 last 546.04)
total: 10000 open/close in 18.16 seconds: 550.71 ops/second
- unlinked 0 (time 1713343750 ; total 0 ; last 0)
total: 10000 unlinks in 9 seconds: 1111.111084 unlinks/second
- open/close 5878 (time 1713343770.69 total 10.00 last 587.77)
total: 10000 open/close in 17.05 seconds: 586.45 ops/second
- unlinked 0 (time 1713343778 ; total 0 ; last 0)
total: 10000 unlinks in 9 seconds: 1111.111084 unlinks/second
- open/close 5381 (time 1713343798.82 total 10.00 last 538.07)
total: 10000 open/close in 17.98 seconds: 556.22 ops/second
- unlinked 0 (time 1713343807 ; total 0 ; last 0)
total: 10000 unlinks in 9 seconds: 1111.111084 unlinks/second
- open/close 5784 (time 1713343827.36 total 10.00 last 578.33)
total: 10000 open/close in 17.48 seconds: 572.13 ops/second
- unlinked 0 (time 1713343835 ; total 0 ; last 0)
total: 10000 unlinks in 9 seconds: 1111.111084 unlinks/second
- open/close 6982 (time 1713343854.95 total 10.00 last 698.18)
total: 10000 open/close in 14.42 seconds: 693.34 ops/second
- unlinked 0 (time 1713343860 ; total 0 ; last 0)
total: 10000 unlinks in 8 seconds: 1250.000000 unlinks/second
- open/close 6590 (time 1713343878.98 total 10.00 last 658.98)
total: 10000 open/close in 15.25 seconds: 655.61 ops/second
- unlinked 0 (time 1713343885 ; total 0 ; last 0)
total: 10000 unlinks in 8 seconds: 1250.000000 unlinks/second
- open/close 6111 (time 1713343904.13 total 10.00 last 611.08)
total: 10000 open/close in 15.61 seconds: 640.57 ops/second
- unlinked 0 (time 1713343910 ; total 0 ; last 0)
total: 10000 unlinks in 8 seconds: 1250.000000 unlinks/second
- open/close 6684 (time 1713343929.28 total 10.01 last 667.67)
total: 10000 open/close in 15.47 seconds: 646.34 ops/second
- unlinked 0 (time 1713343935 ; total 0 ; last 0)
total: 10000 unlinks in 8 seconds: 1250.000000 unlinks/second
- open/close 5995 (time 1713343954.82 total 10.00 last 599.43)
total: 10000 open/close in 16.81 seconds: 594.79 ops/second
- unlinked 0 (time 1713343962 ; total 0 ; last 0)
total: 10000 unlinks in 9 seconds: 1111.111084 unlinks/second
- open/close 6857 (time 1713343982.33 total 10.00 last 685.60)
total: 10000 open/close in 15.17 seconds: 659.34 ops/second
- unlinked 0 (time 1713343988 ; total 0 ; last 0)
total: 10000 unlinks in 8 seconds: 1250.000000 unlinks/second
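
The reported rates are simply operations divided by elapsed wall time. Recomputing the first pair from the numbers above (awk used purely as a calculator):

    awk 'BEGIN { printf "%.2f ops/sec\n", 10000/18.16; printf "%.2f unlinks/sec\n", 10000/9 }'

This prints 550.66 and 1111.11, matching the log's 550.71 and 1111.111084; the small open/close difference comes from the tool timing at sub-second precision internally while printing a rounded elapsed time.
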
rm: missing operand
Try 'rm --help' for more information.
umount lustre on /mnt/lustre.....
Stopping client oleg237-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg237-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg237-server

Permanent disk data:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x42 (OST update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.202.137@tcp sys.timeout=20

device size = 4096MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
    target name   lustre-OST0000
    kilobytes     200000
    options       -I 512 -q -O extents,uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre-OST0000 -I 512 -q -O extents,uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
start ost1 service on oleg237-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Started lustre-OST0000
oleg237-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg237-server: oleg237-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 40
oleg237-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 12 sec
oleg237-server: oleg237-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 40
oleg237-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
mount lustre on /mnt/lustre.....
Starting client: oleg237-client.virtnet: -o user_xattr,flock oleg237-server@tcp:/lustre /mnt/lustre
On OST0, 10429 used inodes
rc=0
umount lustre on /mnt/lustre.....
Stopping client oleg237-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg237-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg237-server
stop mds service on oleg237-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg237-server
stop mds service on oleg237-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg237-server
LNET ready to unload
unloading modules on: 'oleg237-server'
oleg237-server: oleg237-server.virtnet: executing unload_modules_local
oleg237-server: LNET ready to unload
modules unloaded.
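
The "Flags: 0x42 (OST update )" line in the format output above is the point of the test: the device is reformatted for the already-registered index 0, so the first_time flag is absent and the target re-registers with the MGS as a replacement rather than as a new OST. A hedged sketch of the corresponding mkfs.lustre call (parameters copied from the log; the exact flag combination is an assumption):

    mkfs.lustre --ost --reformat --replace --index=0 --fsname=lustre \
        --mgsnode=192.168.202.137@tcp /dev/mapper/ost1_flakey
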
pdsh@oleg237-client: oleg237-client: ssh exited with exit code 2
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 2
pdsh@oleg237-client: oleg237-client: ssh exited with exit code 2
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 2
PASS 69 (346s)

== conf-sanity test 111: Adding large_dir with over 2GB directory ========================================================== 04:54:04 (1713344044)
oleg237-server: debugfs 1.46.2.wc5 (26-Mar-2022)
Supported features: dir_prealloc imagic_inodes has_journal ext_attr resize_inode dir_index sparse_super2 fast_commit stable_inodes filetype needs_recovery journal_dev meta_bg extent 64bit mmp flex_bg ea_inode dirdata metadata_csum_seed large_dir inline_data encrypt casefold sparse_super large_file huge_file uninit_bg dir_nlink extra_isize quota bigalloc metadata_csum read-only project shared_blocks verity
oleg237-server: debugfs 1.46.2.wc5 (26-Mar-2022)
Supported features: dir_prealloc imagic_inodes has_journal ext_attr resize_inode dir_index sparse_super2 fast_commit stable_inodes filetype needs_recovery journal_dev meta_bg extent 64bit mmp flex_bg ea_inode dirdata metadata_csum_seed large_dir inline_data encrypt casefold sparse_super large_file huge_file uninit_bg dir_nlink extra_isize quota bigalloc metadata_csum read-only project shared_blocks verity
umount lustre on /mnt/lustre.....
stop ost1 service on oleg237-server
stop mds service on oleg237-server
stop mds service on oleg237-server
unloading modules on: 'oleg237-server'
oleg237-server: oleg237-server.virtnet: executing unload_modules_local
modules unloaded.
MDT params: --mgs --fsname=lustre --mdt --index=0 --param=sys.timeout=20 --param=mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity --backfstype=ldiskfs --device-size=2400000 --mkfsoptions=\"-O ea_inode,large_dir -E lazy_itable_init\" --reformat /dev/mapper/mds1_flakey
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg237-server'
oleg237-server: oleg237-server.virtnet: executing load_modules_local
oleg237-server: Loading modules from /home/green/git/lustre-release/lustre
oleg237-server: detected 4 online CPUs by sysfs
oleg237-server: Force libcfs to create 2 CPU partitions
oleg237-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
oleg237-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg237-server: quota/lquota options: 'hash_lqs_cur_bits=3'

Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x65 (MDT MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

device size = 2500MB
formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey
    target name   lustre:MDT0000
    kilobytes     2400000
    options       -J size=93 -I 1024 -i 2560 -q -O ea_inode,large_dir,dirdata,uninit_bg,^extents,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -J size=93 -I 1024 -i 2560 -q -O ea_inode,large_dir,dirdata,uninit_bg,^extents,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init="0" -F /dev/mapper/mds1_flakey 2400000k
Writing CONFIGS/mountdata
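
The MDT params line shows how mkfs.lustre forwards --mkfsoptions verbatim into the backing mke2fs invocation: the requested -O ea_inode,large_dir ends up merged with the ldiskfs defaults in the options and mkfs_cmd lines above. A minimal sketch of the same format call (sizes and paths from the log):

    mkfs.lustre --mgs --mdt --fsname=lustre --index=0 --backfstype=ldiskfs \
        --device-size=2400000 --mkfsoptions="-O ea_inode,large_dir -E lazy_itable_init" \
        --reformat /dev/mapper/mds1_flakey
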
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
OST params: --mgsnode=oleg237-server@tcp --fsname=lustre --ost --index=0 --param=sys.timeout=20 --backfstype=ldiskfs --device-size=200000 --mkfsoptions=\"-O large_dir -E lazy_itable_init\" --reformat /dev/mapper/ost1_flakey

Permanent disk data:
Target:     lustre:OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62 (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.202.137@tcp sys.timeout=20

device size = 4096MB
formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey
    target name   lustre:OST0000
    kilobytes     200000
    options       -I 512 -q -O large_dir,extents,uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0000 -I 512 -q -O large_dir,extents,uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init="0" -F /dev/mapper/ost1_flakey 200000k
Writing CONFIGS/mountdata
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg237-server: oleg237-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
mount lustre on /mnt/lustre.....
Starting client: oleg237-client.virtnet: -o user_xattr,flock oleg237-server@tcp:/lustre /mnt/lustre
Starting client oleg237-client.virtnet: -o user_xattr,flock oleg237-server@tcp:/lustre /mnt/lustre
Started clients oleg237-client.virtnet: 192.168.202.137@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
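
The feature set actually written to either device can be confirmed from the ldiskfs superblock; a minimal check against the freshly formatted MDT (device name from the log):

    dumpe2fs -h /dev/mapper/mds1_flakey 2>/dev/null | grep -i 'filesystem features'

The large_dir flag in that list is what allows the directory htree to grow past the usual 2-level, ~10M-entry limit that this test exercises.
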
mount lustre on /mnt/lustre2.....
Starting client: oleg237-client.virtnet: -o user_xattr,flock oleg237-server@tcp:/lustre /mnt/lustre2
Starting client oleg237-client.virtnet: -o user_xattr,flock oleg237-server@tcp:/lustre /mnt/lustre2
Started clients oleg237-client.virtnet: 192.168.202.137@tcp:/lustre on /mnt/lustre2 type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1326500        1696     1206392   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID       142216        1252      126964   1% /mnt/lustre[OST:0]
filesystem_summary:       142216        1252      126964   1% /mnt/lustre
UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID       960000         272      959728   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID        50000         268       49732   1% /mnt/lustre[OST:0]
filesystem_summary:        50004         272       49732   1% /mnt/lustre
creating 60000 hardlinks to oleg237-client.virtnet-0
creating 60000 hardlinks to oleg237-client.virtnet-1
waiting for PIDs 27126 27140 to complete
- link 4298 (time 1713344088.77 total 10.00 last 429.74)
- link 7590 (time 1713344098.77 total 20.00 last 329.18)
- link 10000 (time 1713344103.81 total 25.04 last 478.22)
- link 14762 (time 1713344113.81 total 35.04 last 476.20)
- link 19388 (time 1713344123.81 total 45.04 last 462.55)
- link 23645 (time 1713344133.81 total 55.04 last 425.68)
- link 28261 (time 1713344143.81 total 65.05 last 461.50)
- link 30000 (time 1713344147.53 total 68.76 last 467.55)
- link 35055 (time 1713344157.53 total 78.77 last 505.45)
- link 37841 (time 1713344167.55 total 88.78 last 278.13)
- link 39001 (time 1713344177.55 total 98.79 last 115.95)
- link 40000 (time 1713344186.41 total 107.65 last 112.75)
- link 41207 (time 1713344196.42 total 117.65 last 120.64)
- link 43514 (time 1713344206.42 total 127.66 last 230.59)
- link 46008 (time 1713344216.46 total 137.69 last 248.45)
- link 48418 (time 1713344226.46 total 147.70 last 240.91)
- link 50000 (time 1713344235.57 total 156.81 last 173.63)
- link 51799 (time 1713344245.58 total 166.81 last 179.84)
- link 54162 (time 1713344255.58 total 176.81 last 236.30)
- link 57057 (time 1713344265.58 total 186.82 last 289.41)
total: 60000 link in 195.00 seconds: 307.69 ops/second
- link 4294 (time 1713344088.86 total 10.00 last 429.33)
- link 7579 (time 1713344098.86 total 20.00 last 328.43)
- link 10000 (time 1713344103.93 total 25.08 last 477.28)
- link 14726 (time 1713344113.93 total 35.08 last 472.57)
- link 19361 (time 1713344123.94 total 45.08 last 463.47)
- link 23630 (time 1713344133.94 total 55.08 last 426.82)
- link 28237 (time 1713344143.94 total 65.08 last 460.66)
- link 30000 (time 1713344147.66 total 68.81 last 473.29)
- link 35044 (time 1713344157.66 total 78.81 last 504.36)
- link 37782 (time 1713344167.68 total 88.82 last 273.40)
- link 38945 (time 1713344177.68 total 98.82 last 116.29)
- link 40000 (time 1713344186.99 total 108.13 last 113.29)
- link 41257 (time 1713344197.00 total 118.14 last 125.61)
- link 43584 (time 1713344207.00 total 128.14 last 232.69)
- link 46093 (time 1713344217.00 total 138.15 last 250.80)
- link 48490 (time 1713344227.01 total 148.15 last 239.62)
- link 50000 (time 1713344235.75 total 156.89 last 172.68)
- link 51775 (time 1713344245.75 total 166.89 last 177.48)
- link 54158 (time 1713344255.75 total 176.90 last 238.28)
- link 57063 (time 1713344265.75 total 186.90 last 290.44)
total: 60000 link in 195.06 seconds: 307.60 ops/second
estimate 8215s left after 120000 files / 195s
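
The estimate line is a linear extrapolation from the measured link rate: 120000 links in 195 s is roughly 615 links/s, so the projected 8215 s remaining corresponds to on the order of 5 million more links toward the 2GB directory target. Checking the rate (awk as a calculator):

    awk 'BEGIN { printf "%.1f links/sec\n", 120000/195 }'    # ~615.4

Because that projection exceeds the script's time budget, the run abandons the full creation phase (the "ETA ... is too long" line below) and proceeds straight to verifying the filesystem it has so far.
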
umount lustre on /mnt/lustre2.....
Stopping client oleg237-client.virtnet /mnt/lustre2 (opts:-f)
umount lustre on /mnt/lustre.....
Stopping client oleg237-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg237-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg237-server
stop mds service on oleg237-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg237-server
LNET ready to unload
unloading modules on: 'oleg237-server'
oleg237-server: oleg237-server.virtnet: executing unload_modules_local
oleg237-server: LNET ready to unload
modules unloaded.
ETA 8215s after 120000 files / 195s is too long
e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8
oleg237-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg237-server: Use max possible thread num: 1 instead
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 30)
[Thread 0] jumping to group 0
[Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 186 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 187 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 188 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 189 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 190 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 191 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 192 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 193 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 194 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 195 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 196 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 197 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 198 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 199 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 200 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 201 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 202 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 203 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 204 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 205 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 206 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 207 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 208 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 209 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 210 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 211 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 212 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 213 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 214 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 215 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 216 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 217 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 218 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 219 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 220 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 222 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 223 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 224 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 225 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 226 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 227 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 228 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 229 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 230 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 231 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 232 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 233 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 234 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 235 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 236 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 237 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 238 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 239 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 240 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 241 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 242 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 243 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 244 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 245 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 246 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 247 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 248 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 249 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 250 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 251 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 252 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 253 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 254 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 255 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 256 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 257 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 258 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 259 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 260 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 261 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 262 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 263 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 264 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 265 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 266 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 267 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 268 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 269 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 270 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 271 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 276 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 277 badness 0 to 2 for 10084
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] group 3 finished
[Thread 0] group 4 finished
[Thread 0] group 5 finished
[Thread 0] group 6 finished
[Thread 0] group 7 finished
[Thread 0] group 8 finished
[Thread 0] group 9 finished
[Thread 0] group 10 finished
[Thread 0] group 11 finished
[Thread 0] group 12 finished
[Thread 0] group 13 finished
[Thread 0] group 14 finished
[Thread 0] group 15 finished
[Thread 0] group 16 finished
[Thread 0] group 17 finished
[Thread 0] group 18 finished
[Thread 0] group 19 finished
[Thread 0] group 20 finished
[Thread 0] group 21 finished
[Thread 0] group 22 finished
[Thread 0] group 23 finished
[Thread 0] group 24 finished
[Thread 0] group 25 finished
[Thread 0] group 26 finished
[Thread 0] group 27 finished
[Thread 0] group 28 finished
[Thread 0] group 29 finished
[Thread 0] group 30 finished
[Thread 0] Pass 1: Memory used: 376k/1212k (121k/256k), time: 0.01/ 0.01/ 0.01
[Thread 0] Pass 1: I/O read: 2MB, write: 0MB, rate: 167.86MB/s
oleg237-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (51437568, 266) != expected (51445760, 266)
oleg237-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (51437568, 266) != expected (51445760, 266)
oleg237-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (51437568, 266) != expected (51445760, 266)
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 4
[Thread 0] Scanned group range [0, 30), inodes 277
Pass 2: Checking directory structure
Pass 2: Memory used: 376k/304k (81k/296k), time: 0.32/ 0.27/ 0.04
Pass 2: I/O read: 49MB, write: 0MB, rate: 154.37MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 376k/304k (82k/295k), time: 0.36/ 0.31/ 0.05
Pass 3: Memory used: 376k/304k (80k/297k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 376k/0k (69k/308k), time: 0.02/ 0.02/ 0.00
Pass 4: I/O read: 1MB, write: 0MB, rate: 42.35MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 376k/0k (68k/309k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 383.14MB/s
Update quota info for quota type 0? no
Update quota info for quota type 1? no
Update quota info for quota type 2? no

lustre-MDT0000: ********** WARNING: Filesystem still has errors **********

         276 inodes used (0.03%, out of 960000)
           3 non-contiguous files (1.1%)
           1 non-contiguous directory (0.4%)
             # of inodes with ind/dind/tind blocks: 1/1/0
      278356 blocks used (46.39%, out of 600000)
           0 bad blocks
           1 large file

         148 regular files
         118 directories
           0 character device files
           0 block device files
           0 fifos
      120000 links
           0 symbolic links (0 fast symbolic links)
           0 sockets
------------
      120264 files
Memory used: 376k/0k (67k/310k), time: 0.39/ 0.33/ 0.05
I/O read: 49MB, write: 0MB, rate: 125.43MB/s
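
The verification pass above runs e2fsck read-only: -f forces a full check even though the filesystem is marked clean, -n answers "no" to every repair prompt, and -m8 requests 8 threads from the multi-threaded e2fsprogs build (the "Use max possible thread num: 1 instead" line shows it fell back to one). With -n, the quota-usage inconsistencies stay unfixed, which is why the run prints "Filesystem still has errors" and exits with status 4 (errors left uncorrected), surfacing as the ssh exit code 4 above. An equivalent standalone check would be along these lines:

    e2fsck -f -n /dev/mapper/mds1_flakey    # report-only: never modifies the device
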
pdsh@oleg237-client: oleg237-client: ssh exited with exit code 2
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 2
pdsh@oleg237-client: oleg237-client: ssh exited with exit code 2
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 2
Stopping clients: oleg237-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg237-client.virtnet /mnt/lustre2 (opts:-f)
pdsh@oleg237-client: oleg237-server: ssh exited with exit code 2
oleg237-server: oleg237-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg237-server'
oleg237-server: oleg237-server.virtnet: executing load_modules_local
oleg237-server: Loading modules from /home/green/git/lustre-release/lustre
oleg237-server: detected 4 online CPUs by sysfs
oleg237-server: Force libcfs to create 2 CPU partitions
oleg237-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg237-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
PASS 111 (285s)
Stopping clients: oleg237-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg237-client.virtnet /mnt/lustre2 (opts:-f)
oleg237-server: oleg237-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg237-server'
oleg237-server: oleg237-server.virtnet: executing load_modules_local
oleg237-server: Loading modules from /home/green/git/lustre-release/lustre
oleg237-server: detected 4 online CPUs by sysfs
oleg237-server: Force libcfs to create 2 CPU partitions
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey

== conf-sanity test complete, duration 892 sec =========== 04:59:05 (1713344345)