-----============= acceptance-small: conf-sanity ============----- Thu Apr 18 20:16:24 EDT 2024 excepting tests: 102 106 115 32newtarball 110 41c skipping tests SLOW=no: 45 69 106 111 114 Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping client oleg419-client.virtnet /mnt/lustre opts:-f Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 76a: set permanent params with lctl across mounts ========================================================== 20:17:48 (1713485868) start mds service on oleg419-server Starting mds1: -o localrecov 
/dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Change MGS params max_dirty_mb: 467 new_max_dirty_mb: 457 Waiting 90s for '457' Updated after 2s: want '457' got '457' 457 Check the value is stored after remount Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 mount lustre on /mnt/lustre..... 
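
The parameter change logged above (max_dirty_mb dropped from 467 to 457 via the MGS, then re-checked once everything is stopped and remounted) is the pattern test 76a exercises: a value set with "lctl set_param -P" is stored in the MGS configuration log and reapplied when targets and clients mount again. A minimal sketch of that pattern, assuming a client of the "lustre" filesystem used in this run; the exact wildcards and the conf-sanity.sh helper wrappers are not reproduced here:

    # on the MGS node: set the client-side parameter persistently
    lctl set_param -P osc.lustre-OST*.max_dirty_mb=457

    # on the client: confirm the running value
    lctl get_param -n osc.lustre-OST*.max_dirty_mb

    # unmount the client, restart the servers, remount the client, then read
    # the parameter again; with "-P" the value should still be 457
    umount /mnt/lustre
    mount -t lustre oleg419-server@tcp:/lustre /mnt/lustre
    lctl get_param -n osc.lustre-OST*.max_dirty_mb
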
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800b47b2800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800b47b2800.idle_timeout=debug disable quota as required Change OST params client_cache_count: 128 new_client_cache_count: 256 Waiting 90s for '256' 256 Check the value is stored after remount Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0001 mount lustre on /mnt/lustre..... 
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a9db5800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a9db5800.idle_timeout=debug disable quota as required 256 Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server PASS 76a (157s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 76b: verify params log setup correctly ========================================================== 20:20:27 (1713486027) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0001 mount lustre on /mnt/lustre..... 
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a9d19800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a9d19800.idle_timeout=debug disable quota as required mgs.MGS.live.params= fsname: params flags: 0x20 gen: 2 Secure RPC Config Rules: imperative_recovery_state: state: startup nonir_clients: 0 nidtbl_version: 2 notify_duration_total: 0.000000000 notify_duation_max: 0.000000000 notify_count: 0 Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server PASS 76b (65s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 76c: verify changelog_mask is applied with lctl set_param -P ========================================================== 20:21:33 (1713486093) Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0001 mount lustre on /mnt/lustre..... 
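
Test 76b above verifies the "params" configuration log by dumping it directly from the MGS, and test 76c (whose output follows) checks that a changelog_mask set with "lctl set_param -P" is still in effect after the MDTs are stopped and restarted. A rough sketch of both checks, run against the server node as in this log; the mask value the test actually applies is not shown in the output, so "+hsm" below is only an illustrative choice:

    # 76b: dump the live "params" llog from the MGS
    lctl get_param mgs.MGS.live.params

    # 76c: persistently extend the changelog mask, restart the MDTs, re-check
    lctl set_param -P mdd.lustre-MDT*.changelog_mask=+hsm
    umount /mnt/lustre-mds1; umount /mnt/lustre-mds2
    mount -t lustre -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
    mount -t lustre -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
    lctl get_param -n mdd.lustre-MDT*.changelog_mask
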
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8801373ab800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8801373ab800.idle_timeout=debug disable quota as required Change changelog_mask pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Check the value is stored after mds remount stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 17 sec oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server PASS 76c (106s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 76d: verify llite.*.xattr_cache can be set by 'lctl set_param -P' correctly ========================================================== 20:23:20 (1713486200) Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: 
oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0001 mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800aa852800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800aa852800.idle_timeout=debug disable quota as required lctl set_param -P llite.*.xattr_cache=0 Waiting 90s for '0' Updated after 2s: want '0' got '0' Check llite.*.xattr_cache on client /mnt/lustre umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Check llite.*.xattr_cache on the new client /mnt/lustre2 mount lustre on /mnt/lustre2..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre2 umount lustre on /mnt/lustre2..... Stopping client oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server PASS 76d (49s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 77: comma-separated MGS NIDs and failover node NIDs ========================================================== 20:24:11 (1713486251) SKIP: conf-sanity test_77 mixed loopback and real device not working SKIP 77 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 78: run resize2fs on MDT and OST filesystems ========================================================== 20:24:13 (1713486253) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format ost1: /dev/mapper/ost1_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh 
exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=131072 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre create test files UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 83240 1616 73832 3% /mnt/lustre[MDT:0] lustre-OST0000_UUID 124712 1388 110724 2% /mnt/lustre[OST:0] filesystem_summary: 124712 1388 110724 2% /mnt/lustre UUID Inodes IUsed IFree IUse% Mounted on lustre-MDT0000_UUID 72000 272 71728 1% /mnt/lustre[MDT:0] lustre-OST0000_UUID 45008 302 44706 1% /mnt/lustre[OST:0] filesystem_summary: 44978 272 44706 1% /mnt/lustre 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0454052 s, 23.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.045158 s, 23.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0442897 s, 23.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0457308 s, 22.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0471091 s, 22.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0437665 s, 24.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.046493 s, 22.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.065699 s, 16.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0437027 s, 24.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0468186 s, 22.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0444502 s, 23.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0463805 s, 22.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0444835 s, 23.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0445474 s, 23.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0422088 s, 24.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0477895 s, 21.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0423506 s, 24.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0498787 s, 21.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0453006 s, 23.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0463516 s, 22.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0463233 s, 22.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0473775 s, 22.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0434083 s, 24.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0475193 s, 22.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0440396 s, 23.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0479573 s, 21.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0470015 s, 22.3 
MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0415175 s, 25.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0467971 s, 22.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0443711 s, 23.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0472368 s, 22.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0469079 s, 22.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0450766 s, 23.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0502009 s, 20.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0484815 s, 21.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0505669 s, 20.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0464589 s, 22.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0445701 s, 23.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0439938 s, 23.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0452496 s, 23.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0473688 s, 22.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0452512 s, 23.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0477263 s, 22.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0470466 s, 22.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.044005 s, 23.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0466265 s, 22.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0440465 s, 23.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0733129 s, 14.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0437792 s, 24.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0443845 s, 23.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0444717 s, 23.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0438545 s, 23.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0456521 s, 23.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0463724 s, 22.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0464212 s, 22.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0457548 s, 22.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0436638 s, 24.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.046634 s, 22.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0436413 s, 24.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0435513 s, 24.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0424197 s, 24.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0472962 s, 22.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0428782 s, 24.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0460942 s, 22.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0417146 s, 25.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0423419 s, 24.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0431826 s, 24.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0401356 s, 26.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 
0.0457694 s, 22.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0444513 s, 23.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0458179 s, 22.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0463476 s, 22.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0753691 s, 13.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.046258 s, 22.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0484822 s, 21.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0466463 s, 22.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0474188 s, 22.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0468031 s, 22.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0532014 s, 19.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0502775 s, 20.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0469635 s, 22.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0719221 s, 14.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0510437 s, 20.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0504814 s, 20.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0456013 s, 23.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0741799 s, 14.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.481522 s, 2.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0438609 s, 23.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0396529 s, 26.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0646759 s, 16.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0467809 s, 22.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0429774 s, 24.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.045992 s, 22.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0454723 s, 23.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0485392 s, 21.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0585376 s, 17.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0469641 s, 22.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.051496 s, 20.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0670892 s, 15.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0861586 s, 12.2 MB/s umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
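
With the client unmounted, the targets stopped, and the modules unloaded, test 78 first runs a forced filesystem check on both backing devices; the literal commands and their output follow. A condensed sketch of that step, assuming the same device-mapper names and that the checks are driven from the test node over ssh as in this run:

    # forced, non-interactive check of the unmounted MDT and OST images
    for dev in /dev/mapper/mds1_flakey /dev/mapper/ost1_flakey; do
        # -f force, -y answer yes, -v verbose; the test additionally passes
        # -d -t -t -m8 (debug, timing, multi-threaded) as seen below
        ssh oleg419-server e2fsck -f -y -v "$dev"
        rc=$?
        # e2fsck exit status 0 = no errors, 1 = errors corrected; treat
        # anything else as a failure for the purposes of this sketch
        [ "$rc" -le 1 ] || { echo "e2fsck on $dev failed: rc=$rc"; exit 1; }
    done
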
e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8 oleg419-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg419-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 3) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: 
increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 24033 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 48044 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48045 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48046 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48047 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48048 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48050 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48051 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48052 badness 0 to 2 for 10084 
[Thread 0] e2fsck_pass1_run:2564: increase inode 48053 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48054 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] Pass 1: Memory used: 272k/0k (141k/132k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 269.91MB/s [Thread 0] Scanned group range [0, 3), inodes 373 Pass 2: Checking directory structure Pass 2: Memory used: 272k/0k (95k/178k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 358.68MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 272k/0k (95k/178k), time: 0.01/ 0.00/ 0.01 Pass 3A: Memory used: 272k/0k (95k/178k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 272k/0k (93k/180k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 6024.10MB/s Pass 4: Checking reference counts Pass 4: Memory used: 272k/0k (67k/206k), time: 0.00/ 0.00/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 272k/0k (67k/206k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 348.55MB/s 372 inodes used (0.52%, out of 72000) 4 non-contiguous files (1.1%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 22546 blocks used (50.10%, out of 45000) 0 bad blocks 1 large file 244 regular files 118 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 362 files Memory used: 272k/0k (66k/207k), time: 0.02/ 0.01/ 0.01 I/O read: 1MB, write: 1MB, rate: 49.00MB/s e2fsck -d -v -t -t -f -y /dev/mapper/ost1_flakey -m8 oleg419-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg419-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 2) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 
[Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] group 2 finished [Thread 0] Pass 1: Memory used: 264k/0k (132k/133k), time: 0.01/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 5MB, write: 0MB, rate: 531.29MB/s [Thread 0] Scanned group range [0, 2), inodes 398 Pass 2: Checking directory structure Pass 2: Memory used: 264k/0k (87k/178k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 252.84MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 264k/0k (92k/173k), time: 0.02/ 0.01/ 0.01 Pass 3A: Memory used: 264k/0k (92k/173k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 264k/0k (84k/181k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 1782.53MB/s Pass 4: Checking reference counts Pass 4: Memory used: 264k/0k (65k/200k), time: 0.00/ 0.00/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 264k/0k (65k/200k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 431.78MB/s 398 inodes used (0.88%, out of 45008) 2 non-contiguous files (0.5%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 Extent depth histogram: 392 37721 blocks used (83.82%, out of 45000) 0 bad blocks 1 large file 216 regular files 172 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 388 files Memory used: 264k/0k (64k/201k), time: 0.03/ 0.01/ 0.01 I/O read: 2MB, write: 1MB, rate: 73.77MB/s oleg419-server: resize2fs 1.46.2.wc5 (26-Mar-2022) Resizing the filesystem on /dev/mapper/mds1_flakey to 640000 (4k) blocks. The filesystem on /dev/mapper/mds1_flakey is now 640000 (4k) blocks long. oleg419-server: resize2fs 1.46.2.wc5 (26-Mar-2022) Resizing the filesystem on /dev/mapper/ost1_flakey to 1048576 (4k) blocks. The filesystem on /dev/mapper/ost1_flakey is now 1048576 (4k) blocks long. 
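
Both checks came back clean, so the test grows the two images offline: the MDT to 640000 4k blocks and the OST to 1048576 4k blocks, as resize2fs reports above. A minimal sketch of that step, assuming the underlying virtual devices were already enlarged before the resize:

    # grow the ldiskfs filesystems to their new sizes (counts are 4k blocks)
    ssh oleg419-server resize2fs /dev/mapper/mds1_flakey 640000
    ssh oleg419-server resize2fs /dev/mapper/ost1_flakey 1048576

    # a second forced e2fsck of both devices follows (shown below) before the
    # targets are mounted again and the test files are re-verified
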
e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8 oleg419-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg419-server: Use max possible thread num: 2 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 16) [Thread 1] Scan group range [16, 33) [Thread 0] jumping to group 0 [Thread 1] jumping to group 16 [Thread 1] group 17 finished [Thread 1] group 18 finished [Thread 1] group 19 finished [Thread 1] group 20 finished [Thread 1] group 21 finished [Thread 1] group 22 finished [Thread 1] group 23 finished [Thread 1] group 24 finished [Thread 1] group 25 finished [Thread 1] group 26 finished [Thread 1] group 27 finished [Thread 1] group 28 finished [Thread 1] group 29 finished [Thread 1] group 30 finished [Thread 1] group 31 finished [Thread 1] group 32 finished [Thread 1] group 33 finished [Thread 1] Pass 1: Memory used: 632k/0k (380k/253k), time: 0.00/ 0.00/ 0.00 [Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 1531.39MB/s [Thread 1] Scanned group range [16, 33), inodes 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 
0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] group 1 finished 
[Thread 0] e2fsck_pass1_run:2564: increase inode 24033 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 48044 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48045 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48046 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48047 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48048 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48050 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48051 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48052 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48053 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48054 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] Pass 1: Memory used: 688k/0k (355k/334k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 268.24MB/s [Thread 0] Scanned group range [0, 16), inodes 373 Pass 2: Checking directory structure Pass 2: Memory used: 632k/0k (200k/433k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 383.00MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 632k/0k (200k/433k), time: 0.03/ 0.03/ 0.00 Pass 3A: Memory used: 632k/0k (200k/433k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 632k/0k (198k/435k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 7299.27MB/s Pass 4: Checking reference counts Pass 4: Memory used: 632k/0k (72k/561k), time: 0.02/ 0.02/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 632k/0k (70k/563k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 277.47MB/s 372 inodes used (0.05%, out of 792000) 4 non-contiguous files (1.1%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 202726 blocks used (31.68%, out of 640000) 0 bad blocks 1 large file 244 regular files 118 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 362 files Memory used: 632k/0k (69k/564k), time: 0.06/ 0.06/ 0.00 I/O read: 1MB, write: 1MB, rate: 16.20MB/s e2fsck -d -v -t -t -f -y /dev/mapper/ost1_flakey -m8 oleg419-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg419-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 32) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: 
increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] group 17 finished [Thread 0] group 18 finished [Thread 0] group 19 finished [Thread 0] group 20 finished [Thread 0] group 21 finished [Thread 0] group 22 finished [Thread 0] group 23 finished [Thread 0] group 24 finished [Thread 0] group 25 finished [Thread 0] group 26 finished [Thread 0] group 27 finished [Thread 0] group 28 finished [Thread 0] group 29 finished [Thread 0] group 30 finished [Thread 0] group 31 finished [Thread 0] group 32 finished [Thread 0] Pass 1: Memory used: 468k/0k (344k/125k), time: 0.01/ 0.00/ 0.01 [Thread 0] Pass 1: I/O read: 5MB, write: 0MB, rate: 604.81MB/s [Thread 0] Scanned group range [0, 32), inodes 398 Pass 2: Checking directory structure Pass 2: Memory used: 680k/0k (299k/382k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 375.09MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 680k/0k (299k/382k), time: 0.03/ 0.02/ 0.01 Pass 3A: Memory used: 680k/0k (299k/382k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 680k/0k (296k/385k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 7692.31MB/s Pass 4: Checking reference counts Pass 4: Memory used: 564k/0k (66k/499k), time: 0.02/ 0.02/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 564k/0k (65k/500k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 224.97MB/s 398 inodes used (0.06%, out of 720128) 2 non-contiguous files (0.5%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 Extent depth histogram: 392 128315 blocks used (12.24%, out of 1048576) 0 bad blocks 1 large file 216 regular files 172 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic 
links (0 fast symbolic links) 0 sockets ------------ 388 files Memory used: 564k/0k (64k/501k), time: 0.05/ 0.04/ 0.01 I/O read: 2MB, write: 1MB, rate: 39.45MB/s start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=131072 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre check files after expanding the MDT and OST filesystems /mnt/lustre/d78.conf-sanity/f78.conf-sanity-1 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-1 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-2 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-2 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-3 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-3 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-4 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-4 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-5 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-5 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-6 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-6 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-7 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-7 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-8 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-8 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-9 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-9 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-10 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-10 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-11 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-11 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-12 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-12 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-13 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-13 has size 1048576 OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-14 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-14 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-15 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-15 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-16 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-16 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-17 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-17 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-18 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-18 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-19 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-19 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-20 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-20 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-21 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-21 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-22 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-22 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-23 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-23 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-24 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-24 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-25 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-25 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-26 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-26 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-27 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-27 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-28 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-28 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-29 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-29 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-30 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-30 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-31 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-31 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-32 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-32 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-33 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-33 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-34 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-34 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-35 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-35 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-36 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-36 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-37 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-37 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-38 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-38 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-39 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-39 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-40 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-40 has size 1048576 OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-41 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-41 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-42 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-42 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-43 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-43 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-44 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-44 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-45 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-45 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-46 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-46 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-47 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-47 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-48 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-48 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-49 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-49 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-50 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-50 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-51 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-51 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-52 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-52 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-53 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-53 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-54 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-54 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-55 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-55 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-56 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-56 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-57 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-57 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-58 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-58 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-59 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-59 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-60 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-60 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-61 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-61 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-62 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-62 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-63 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-63 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-64 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-64 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-65 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-65 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-66 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-66 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-67 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-67 has size 1048576 OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-68 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-68 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-69 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-69 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-70 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-70 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-71 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-71 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-72 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-72 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-73 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-73 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-74 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-74 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-75 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-75 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-76 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-76 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-77 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-77 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-78 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-78 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-79 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-79 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-80 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-80 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-81 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-81 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-82 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-82 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-83 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-83 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-84 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-84 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-85 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-85 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-86 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-86 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-87 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-87 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-88 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-88 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-89 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-89 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-90 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-90 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-91 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-91 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-92 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-92 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-93 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-93 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-94 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-94 has size 1048576 OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-95 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-95 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-96 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-96 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-97 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-97 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-98 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-98 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-99 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-99 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-100 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-100 has size 1048576 OK create more files after expanding the MDT and OST filesystems 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0405518 s, 25.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0374524 s, 28.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0356862 s, 29.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.035701 s, 29.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0411843 s, 25.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0362037 s, 29.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0402443 s, 26.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0629294 s, 16.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.037546 s, 27.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0357791 s, 29.3 MB/s umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
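For reference, the "create more files after expanding" step above reduces to a loop of 1 MiB dd writes plus the type/size verification reported as "has type file OK" / "has size 1048576 OK". A minimal sketch of that step, assuming a plain dd/stat loop; the actual conf-sanity.sh helpers are not shown in this log, and the file range 101-110 is inferred from the later check:

    # sketch only: create ten more 1 MiB files after the expansion
    for i in $(seq 101 110); do
        dd if=/dev/zero of=/mnt/lustre/d78.conf-sanity/f78.conf-sanity-$i \
           bs=1M count=1
    done
    # sketch only: re-verify type and size for every file, mirroring the
    # "has type file OK" / "has size 1048576 OK" lines in the log
    for i in $(seq 1 110); do
        f=/mnt/lustre/d78.conf-sanity/f78.conf-sanity-$i
        [ -f "$f" ] || echo "$f: not a regular file"
        [ "$(stat -c %s "$f")" -eq 1048576 ] || echo "$f: size mismatch"
    done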
e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8 oleg419-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg419-server: Use max possible thread num: 2 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 16) [Thread 1] Scan group range [16, 33) [Thread 0] jumping to group 0 [Thread 1] jumping to group 16 [Thread 1] group 17 finished [Thread 1] group 18 finished [Thread 1] group 19 finished [Thread 1] group 20 finished [Thread 1] group 21 finished [Thread 1] group 22 finished [Thread 1] group 23 finished [Thread 1] group 24 finished [Thread 1] group 25 finished [Thread 1] group 26 finished [Thread 1] group 27 finished [Thread 1] group 28 finished [Thread 1] group 29 finished [Thread 1] group 30 finished [Thread 1] group 31 finished [Thread 1] group 32 finished [Thread 1] group 33 finished [Thread 1] Pass 1: Memory used: 632k/0k (380k/253k), time: 0.00/ 0.00/ 0.00 [Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 1552.79MB/s [Thread 1] Scanned group range [16, 33), inodes 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 
0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] group 1 finished 
[Thread 0] e2fsck_pass1_run:2564: increase inode 24033 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 48044 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48045 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48046 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48047 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48048 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48050 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48051 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48052 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48053 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48054 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] Pass 1: Memory used: 688k/0k (355k/334k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 205.68MB/s [Thread 0] Scanned group range [0, 16), inodes 383 Pass 2: Checking directory structure Pass 2: Memory used: 632k/0k (200k/433k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 242.72MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 632k/0k (200k/433k), time: 0.04/ 0.03/ 0.00 Pass 3A: Memory used: 632k/0k (200k/433k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 632k/0k (198k/435k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 5555.56MB/s Pass 4: Checking reference counts Pass 4: Memory used: 632k/0k (72k/561k), time: 0.02/ 0.02/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 632k/0k (70k/563k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 258.93MB/s 382 inodes used (0.05%, out of 792000) 4 non-contiguous files (1.0%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 202726 blocks used (31.68%, out of 640000) 0 bad blocks 1 large file 254 regular files 118 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 372 files Memory used: 632k/0k (69k/564k), time: 0.07/ 0.06/ 0.01 I/O read: 1MB, write: 1MB, rate: 14.58MB/s e2fsck -d -v -t -t -f -y /dev/mapper/ost1_flakey -m8 oleg419-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg419-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 32) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: 
increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] group 17 finished [Thread 0] group 18 finished [Thread 0] group 19 finished [Thread 0] group 20 finished [Thread 0] group 21 finished [Thread 0] group 22 finished [Thread 0] group 23 finished [Thread 0] group 24 finished [Thread 0] group 25 finished [Thread 0] group 26 finished [Thread 0] group 27 finished [Thread 0] group 28 finished [Thread 0] group 29 finished [Thread 0] group 30 finished [Thread 0] group 31 finished [Thread 0] group 32 finished [Thread 0] Pass 1: Memory used: 472k/0k (345k/128k), time: 0.01/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 5MB, write: 0MB, rate: 644.08MB/s [Thread 0] Scanned group range [0, 32), inodes 402 Pass 2: Checking directory structure Pass 2: Memory used: 684k/0k (299k/386k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 310.17MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 684k/0k (299k/386k), time: 0.03/ 0.02/ 0.01 Pass 3A: Memory used: 684k/0k (299k/386k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 684k/0k (296k/389k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 6944.44MB/s Pass 4: Checking reference counts Pass 4: Memory used: 568k/0k (67k/502k), time: 0.02/ 0.02/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 568k/0k (65k/504k), time: 0.01/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 197.67MB/s 402 inodes used (0.06%, out of 720128) 4 non-contiguous files (1.0%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 Extent depth histogram: 396 130875 blocks used (12.48%, out of 1048576) 0 bad blocks 1 large file 220 regular files 172 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic 
links (0 fast symbolic links) 0 sockets ------------ 392 files Memory used: 568k/0k (64k/505k), time: 0.05/ 0.04/ 0.01 I/O read: 2MB, write: 1MB, rate: 39.62MB/s oleg419-server: resize2fs 1.46.2.wc5 (26-Mar-2022) Resizing the filesystem on /dev/mapper/mds1_flakey to 377837 (4k) blocks. The filesystem on /dev/mapper/mds1_flakey is now 377837 (4k) blocks long. oleg419-server: resize2fs 1.46.2.wc5 (26-Mar-2022) Resizing the filesystem on /dev/mapper/ost1_flakey to 591846 (4k) blocks. The filesystem on /dev/mapper/ost1_flakey is now 589824 (4k) blocks long. e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8 oleg419-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg419-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 20) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 24033 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 48044 badness 0 to 2 for 
10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48045 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48046 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48047 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48048 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48050 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48051 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48052 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48053 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48054 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] group 17 finished [Thread 0] group 18 finished [Thread 0] group 19 finished [Thread 0] group 20 finished [Thread 0] Pass 1: Memory used: 400k/0k (270k/131k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 308.83MB/s [Thread 0] Scanned group range [0, 20), inodes 383 Pass 2: Checking directory structure Pass 2: Memory used: 576k/0k (224k/353k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 411.52MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 576k/0k (224k/353k), time: 0.02/ 0.01/ 0.00 Pass 3A: Memory used: 576k/0k (224k/353k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 576k/0k (222k/355k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 13333.33MB/s Pass 4: Checking reference counts Pass 4: Memory used: 500k/0k (68k/433k), time: 0.01/ 0.01/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 500k/0k (67k/434k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 350.14MB/s 382 inodes used (0.08%, out of 480000) 4 non-contiguous files (1.0%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 124660 blocks used (32.99%, out of 377837) 0 bad blocks 1 large file 254 regular files 118 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 372 files Memory used: 500k/0k (66k/435k), time: 0.04/ 0.03/ 0.00 I/O read: 1MB, write: 1MB, rate: 27.70MB/s e2fsck -d -v -t -t -f -y /dev/mapper/ost1_flakey -m8 oleg419-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg419-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 18) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] group 17 finished [Thread 0] group 18 finished [Thread 0] Pass 1: Memory used: 372k/0k (246k/127k), time: 0.01/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 5MB, write: 0MB, rate: 659.54MB/s [Thread 0] Scanned group range [0, 18), inodes 402 Pass 2: Checking directory structure Pass 2: Memory used: 532k/0k (200k/333k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 448.43MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 532k/0k (200k/333k), time: 0.02/ 0.01/ 0.01 Pass 3A: Memory used: 532k/0k (200k/333k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 532k/0k (198k/335k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 7692.31MB/s Pass 4: Checking reference counts Pass 4: Memory used: 468k/0k (66k/403k), time: 0.01/ 0.01/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 468k/0k (65k/404k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 232.07MB/s 402 inodes used (0.10%, out of 405072) 4 non-contiguous files (1.0%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 Extent depth histogram: 396 89417 blocks used (15.16%, out of 589824) 0 bad blocks 1 large file 220 regular files 172 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 392 files Memory used: 468k/0k (64k/405k), time: 0.04/ 0.02/ 0.01 I/O read: 2MB, write: 1MB, rate: 55.26MB/s start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 
'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=131072 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre check files after shrinking the MDT and OST filesystems /mnt/lustre/d78.conf-sanity/f78.conf-sanity-1 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-1 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-2 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-2 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-3 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-3 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-4 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-4 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-5 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-5 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-6 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-6 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-7 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-7 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-8 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-8 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-9 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-9 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-10 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-10 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-11 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-11 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-12 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-12 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-13 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-13 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-14 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-14 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-15 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-15 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-16 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-16 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-17 has type file OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-17 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-18 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-18 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-19 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-19 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-20 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-20 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-21 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-21 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-22 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-22 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-23 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-23 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-24 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-24 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-25 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-25 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-26 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-26 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-27 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-27 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-28 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-28 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-29 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-29 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-30 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-30 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-31 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-31 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-32 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-32 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-33 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-33 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-34 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-34 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-35 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-35 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-36 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-36 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-37 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-37 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-38 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-38 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-39 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-39 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-40 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-40 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-41 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-41 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-42 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-42 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-43 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-43 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-44 has type file OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-44 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-45 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-45 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-46 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-46 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-47 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-47 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-48 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-48 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-49 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-49 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-50 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-50 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-51 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-51 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-52 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-52 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-53 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-53 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-54 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-54 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-55 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-55 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-56 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-56 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-57 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-57 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-58 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-58 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-59 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-59 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-60 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-60 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-61 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-61 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-62 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-62 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-63 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-63 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-64 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-64 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-65 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-65 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-66 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-66 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-67 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-67 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-68 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-68 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-69 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-69 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-70 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-70 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-71 has type file OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-71 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-72 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-72 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-73 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-73 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-74 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-74 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-75 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-75 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-76 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-76 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-77 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-77 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-78 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-78 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-79 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-79 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-80 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-80 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-81 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-81 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-82 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-82 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-83 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-83 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-84 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-84 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-85 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-85 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-86 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-86 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-87 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-87 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-88 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-88 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-89 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-89 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-90 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-90 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-91 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-91 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-92 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-92 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-93 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-93 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-94 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-94 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-95 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-95 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-96 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-96 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-97 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-97 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-98 has type file OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-98 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-99 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-99 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-100 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-100 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-101 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-101 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-102 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-102 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-103 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-103 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-104 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-104 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-105 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-105 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-106 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-106 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-107 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-107 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-108 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-108 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-109 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-109 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-110 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-110 has size 1048576 OK umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
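The long run of per-file checks above is test 78 verifying that every file it wrote earlier is still a regular file of exactly 1048576 bytes. A minimal sketch of an equivalent check loop, assuming the same directory and file naming as in this log (not the framework's own check helpers):

  for f in /mnt/lustre/d78.conf-sanity/f78.conf-sanity-*; do
          [ -f "$f" ] || echo "$f: not a regular file"
          [ "$(stat -c %s "$f")" -eq 1048576 ] || echo "$f: unexpected size"
  done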
Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 78 (165s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 79: format MDT/OST without mgs option (should return errors) ========================================================== 20:27:00 (1713486420) Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: 
kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: oleg419-server: mkfs.lustre FATAL: Must specify --mgs or --mgsnode oleg419-server: mkfs.lustre: exiting with 22 (Invalid argument) pdsh@oleg419-client: oleg419-server: ssh exited with exit code 22 oleg419-server: oleg419-server: mkfs.lustre FATAL: Must specify --mgs or --mgsnode oleg419-server: mkfs.lustre: exiting with 22 (Invalid argument) pdsh@oleg419-client: oleg419-server: ssh exited with exit code 22 oleg419-server: oleg419-server: mkfs.lustre FATAL: Must specify --mgsnode oleg419-server: mkfs.lustre: exiting with 22 (Invalid argument) pdsh@oleg419-client: oleg419-server: ssh exited with exit code 22 Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL 
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 79 (52s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 80: mgc import reconnect race ======== 20:27:53 (1713486473) start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid fail_val=10 fail_loc=0x906 fail_val=10 fail_loc=0x906 start ost2 service on oleg419-server Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid fail_loc=0 stop ost2 service on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 80 (65s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 81: sparse OST indexing ============== 20:28:59 (1713486539) SKIP: conf-sanity test_81 needs >= 3 OSTs SKIP 81 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 82a: specify OSTs for file (succeed) or directory (succeed) ========================================================== 20:29:01 (1713486541) SKIP: conf-sanity test_82a needs >= 3 OSTs SKIP 82a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 82b: specify OSTs for file with --pool and --ost-list options ========================================================== 20:29:03 (1713486543) SKIP: conf-sanity test_82b needs >= 4 OSTs SKIP 82b (1s) debug_raw_pointers=0 
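Test 79 above confirms that mkfs.lustre refuses to format an MDT or OST that neither hosts the MGS nor points at one, exiting with 22 (Invalid argument). For contrast, a minimal sketch of invocations that satisfy that check, using the device names and MGS NID that appear in this log rather than the exact command lines the framework builds:

  # MDT co-located with the MGS
  mkfs.lustre --fsname=lustre --mgs --mdt --index=0 --reformat /dev/mapper/mds1_flakey
  # second MDT and an OST pointing at that MGS
  mkfs.lustre --fsname=lustre --mgsnode=192.168.204.119@tcp --mdt --index=1 --reformat /dev/mapper/mds2_flakey
  mkfs.lustre --fsname=lustre --mgsnode=192.168.204.119@tcp --ost --index=0 --reformat /dev/mapper/ost1_flakey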
debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 83: ENOSPACE on OST doesn't cause message VFS: Busy inodes after unmount ... ========================================================== 20:29:06 (1713486546) mount the OST /dev/mapper/ost1_flakey as a ldiskfs filesystem mnt_opts -o loop run llverfs in partial mode on the OST ldiskfs /mnt/lustre-ost1 oleg419-server: oleg419-server.virtnet: executing run_llverfs /mnt/lustre-ost1 -vpl no oleg419-server: oleg419-server: llverfs: write /mnt/lustre-ost1/llverfs_dir00142/file000@0+1048576 short: 368640 written oleg419-server: Timestamp: 1713486548 oleg419-server: dirs: 147, fs blocks: 37602 oleg419-server: write_done: /mnt/lustre-ost1/llverfs_dir00142/file000, current: 320.942 MB/s, overall: 320.942 MB/s, ETA: 0:00:00 oleg419-server: oleg419-server: read_done: /mnt/lustre-ost1/llverfs_dir00141/file000, current: 3651.98 MB/s, overall: 3651.98 MB/s, ETA: 0:00:00 oleg419-server: unmount the OST /dev/mapper/ost1_flakey Stopping /mnt/lustre-ost1 (opts:) on oleg419-server checking for existing Lustre data: found Read previous values: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre=MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x105 (MDT MGS writeconf ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity checking for existing Lustre data: found Read previous values: Target: lustre-MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre=MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x101 (MDT writeconf ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity checking for existing Lustre data: found Read previous values: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x62 (OST first_time update ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x162 (OST first_time update writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 checking for existing Lustre data: found Read previous values: Target: lustre-OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x102 (OST writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 oleg419-server: mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: No space left on device pdsh@oleg419-client: oleg419-server: ssh exited with exit code 28 
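The 'checking for existing Lustre data' / 'Read previous values' / 'Permanent disk data' blocks above are tunefs.lustre reporting each target's on-disk configuration while the writeconf flag is being set for the restart. The same data can be read back without changing anything; a small sketch, assuming the devices from this log:

  tunefs.lustre --dryrun /dev/mapper/mds1_flakey     # print Read previous values / Permanent disk data, change nothing
  tunefs.lustre --writeconf /dev/mapper/mds1_flakey  # mark the target so its config log is regenerated on next mount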
oleg419-server: error: set_param: param_path 'seq/cli-lustre': No such file or directory oleg419-server: error: set_param: setting 'seq/cli-lustre'='OST0000-super.width=65536': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 Start of /dev/mapper/ost1_flakey on ost1 failed 28 string err Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 4 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 83 (59s) debug_raw_pointers=0 debug_raw_pointers=0 
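The llverfs pass at the start of test 83 above runs against ost1's backing device mounted directly as ldiskfs rather than through Lustre. Roughly what run_llverfs did, assuming the same device, mount point, and the '-vpl' flags shown in the log (-p is the partial mode mentioned there, -v verbose):

  mount -t ldiskfs -o loop /dev/mapper/ost1_flakey /mnt/lustre-ost1   # '-o loop' matches the mnt_opts logged above
  llverfs -v -p -l /mnt/lustre-ost1
  umount /mnt/lustre-ost1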
debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 84: check recovery_hard_time ========= 20:30:06 (1713486606) start mds service on oleg419-server start mds service on oleg419-server Starting mds1: -o localrecov -o recovery_time_hard=60,recovery_time_soft=60 /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov -o recovery_time_hard=60,recovery_time_soft=60 /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 2 sec start ost2 service on oleg419-server Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec recovery_time=60, timeout=20, wrap_up=5 mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre mount lustre on /mnt/lustre2..... 
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre2 UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 95248 1668 84924 2% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 95248 1532 85060 2% /mnt/lustre[MDT:1] lustre-OST0000_UUID 142216 1524 126692 2% /mnt/lustre[OST:0] lustre-OST0001_UUID 142216 1524 126692 2% /mnt/lustre[OST:1] filesystem_summary: 284432 3048 253384 2% /mnt/lustre total: 1000 open/close in 2.33 seconds: 429.48 ops/second fail_loc=0x20000709 fail_val=5 Failing mds1 on oleg419-server Stopping /mnt/lustre-mds1 (opts:) on oleg419-server 20:30:43 (1713486643) shut down Failover mds1 to oleg419-server e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8 oleg419-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg419-server: Use max possible thread num: 1 instead Warning: skipping journal recovery because doing a read-only filesystem check. Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 3) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 
badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 159 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 160 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 161 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 162 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 163 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 164 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 26697 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 53372 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53373 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53374 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53375 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53376 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53377 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53378 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53379 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53380 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53381 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53382 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] Pass 1: Memory used: 264k/0k (140k/125k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 335.80MB/s [Thread 0] Scanned group range [0, 3), inodes 277 Pass 2: Checking directory structure Pass 2: Memory used: 264k/0k (97k/168k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 338.87MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 264k/0k (97k/168k), time: 0.01/ 0.00/ 0.00 Pass 3: Memory used: 264k/0k (96k/169k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 4: Checking reference counts Pass 4: Memory used: 264k/0k (67k/198k), time: 0.00/ 0.00/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Free blocks count wrong (25455, counted=25443). Fix? no Free inodes count wrong (79719, counted=79715). Fix? 
no Pass 5: Memory used: 264k/0k (67k/198k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 350.51MB/s 273 inodes used (0.34%, out of 79992) 5 non-contiguous files (1.8%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 24545 blocks used (49.09%, out of 50000) 0 bad blocks 1 large file 150 regular files 117 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 267 files Memory used: 264k/0k (66k/199k), time: 0.02/ 0.01/ 0.01 I/O read: 1MB, write: 0MB, rate: 58.74MB/s mount facets: mds1 Starting mds1: -o localrecov -o recovery_time_hard=60,recovery_time_soft=60 /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 20:30:57 (1713486657) targets are mounted 20:30:57 (1713486657) facet_failover done oleg419-client: error: invalid path '/mnt/lustre': Input/output error pdsh@oleg419-client: oleg419-client: ssh exited with exit code 5 recovery status status: COMPLETE recovery_start: 1713486656 recovery_duration: 60 completed_clients: 2/3 replayed_requests: 146 last_transno: 8589934738 VBR: DISABLED IR: DISABLED fail_loc=0 umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) umount lustre on /mnt/lustre2..... Stopping client oleg419-client.virtnet /mnt/lustre2 (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop ost2 service on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 84 (133s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 85: osd_ost init: fail ea_fid_set ==== 20:32:20 (1713486740) fail_loc=0x197 start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server PASS 85 (69s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 86: Replacing mkfs.lustre -G option == 20:33:31 (1713486811) oleg419-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg419-server: /dev/mapper/ost1_flakey: catastrophic mode - not reading inode or group bitmaps params: --mgsnode=oleg419-server@tcp --fsname=lustre --ost --index=0 --param=sys.timeout=20 --backfstype=ldiskfs --device-size=200000 --mkfsoptions=\"-G 1024 -b 4096 -O flex_bg -E lazy_itable_init\" --reformat /dev/mapper/ost1_flakey Failing mds1 on oleg419-server 20:33:32 (1713486812) shut down Failover mds1 to oleg419-server mount facets: mds1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 20:33:45 (1713486825) targets are mounted 20:33:45 (1713486825) 
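Test 84 above fails mds1 over and then reads the recovery state (recovery_duration: 60, in line with the recovery_time_hard=60 mount option); the read-only e2fsck shown in between, with its 'Fix? no' prompts, is the framework's consistency check on the stopped MDT. The two checks reduce to something like the following, assuming the device and target names from this log:

  e2fsck -f -n /dev/mapper/mds1_flakey                  # read-only consistency check while the MDT is stopped
  lctl get_param mdt.lustre-MDT0000.recovery_status     # on the MDS: status, recovery_start, completed_clients, ...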
facet_failover done pdsh@oleg419-client: oleg419-client: ssh exited with exit code 95 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid pdsh@oleg419-client: oleg419-client: ssh exited with exit code 95 Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Permanent disk data: Target: lustre:OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x62 (OST first_time update ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 device size = 4096MB formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey target name lustre:OST0000 kilobytes 200000 options -G 1024 -b 4096 -I 512 -q -O flex_bg,uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0000 -G 1024 -b 4096 -I 512 -q -O flex_bg,uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F /dev/mapper/ost1_flakey 200000k Writing CONFIGS/mountdata oleg419-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg419-server: /dev/mapper/ost1_flakey: catastrophic mode - not reading inode or group bitmaps Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 
oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 86 (67s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 87: check if MDT inode can hold EAs with N stripes properly ========================================================== 20:34:40 (1713486880) Estimate: at most 353-byte space left in inode. unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey Permanent disk data: Target: lustre:MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x65 (MDT MGS first_time update ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity device size = 2500MB formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey target name lustre:MDT0000 kilobytes 200000 options -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F /dev/mapper/mds1_flakey 200000k Writing CONFIGS/mountdata oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 
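Test 86 above rebuilds ost1 with '--mkfsoptions="-G 1024 ..."' and reads the superblock back through debugfs in catastrophic mode to confirm the flex_bg geometry was applied. A comparable spot check with dumpe2fs, an assumption on my part rather than the framework's exact command:

  dumpe2fs -h /dev/mapper/ost1_flakey | grep -i 'flex block group'   # expect the flex group size (1024) requested via -G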
oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 setup single mount lustre success 4 -rw-r--r-- 1 root root 67108865 Apr 18 20:35 /mnt/lustre-mds1/ROOT/f87.conf-sanity Verified: at most 353-byte space left in inode. Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server PASS 87 (50s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 88: check the default mount options can be overridden ========================================================== 20:35:32 (1713486932) Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Permanent disk data: Target: lustre:MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x65 (MDT MGS first_time update ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity device size = 2500MB formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey target name lustre:MDT0000 kilobytes 200000 options -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F /dev/mapper/mds1_flakey 200000k Writing CONFIGS/mountdata Persistent mount opts: user_xattr,errors=remount-ro Persistent mount opts: user_xattr,errors=remount-ro Permanent disk data: Target: lustre:MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x65 (MDT MGS first_time update ) Persistent mount opts: user_xattr,errors=panic Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity device size = 2500MB formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey target name lustre:MDT0000 kilobytes 200000 options -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F /dev/mapper/mds1_flakey 200000k Writing CONFIGS/mountdata Persistent mount opts: user_xattr,errors=panic Persistent mount opts: user_xattr,errors=panic PASS 88 (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 89: check tunefs --param and --erase-param{s} options ========================================================== 20:35:40 (1713486940) Stopping clients: oleg419-client.virtnet /mnt/lustre 
(opts:) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) tunefs --param failover.node=192.0.2.254@tcp0 tunefs --param failover.node=192.0.2.255@tcp0 tunefs --erase-param failover.node tunefs --erase-params tunefs --param failover.node=192.0.2.254@tcp0 --erase-params Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL failover.node=192.0.2.254@tcp0,mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL failover.node=192.0.2.254@tcp0,mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) failover.node=192.0.2.254@tcp0,osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 89 (52s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 90a: check max_mod_rpcs_in_flight is enforced ========================================================== 20:36:33 
(1713486993) start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre max_mod_rpcs_in_flight is 7 creating 8 files ... fail_loc=0x159 launch 6 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90a.conf-sanity/file-7 has perms 0600 OK fail_loc=0x159 launch 7 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90a.conf-sanity/file-8 has perms 0644 OK umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
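Test 90a above checks that the client enforces its cap on modify metadata RPCs in flight (7 here, one below the default max_rpcs_in_flight of 8) by holding batches of chmods in flight with the fail_loc hook and then probing with one more. The per-MDC tunables involved can be read directly; a minimal sketch using the target naming from this log:

  lctl get_param mdc.lustre-MDT0000-mdc-*.max_rpcs_in_flight
  lctl get_param mdc.lustre-MDT0000-mdc-*.max_mod_rpcs_in_flight
  lctl set_param fail_loc=0x159   # fault-injection hook the test toggles around each chmod batch
  lctl set_param fail_loc=0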
pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 90a (69s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 90b: check max_mod_rpcs_in_flight is enforced after update ========================================================== 20:37:44 (1713487064) start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre mdc.lustre-MDT0000-mdc-ffff8800ab280000.max_mod_rpcs_in_flight=1 max_mod_rpcs_in_flight set to 1 creating 2 files ... fail_loc=0x159 launch 0 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... 
/mnt/lustre/d90b.conf-sanity1/file-1 has perms 0600 OK fail_loc=0x159 launch 1 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90b.conf-sanity1/file-2 has perms 0644 OK mdc.lustre-MDT0001-mdc-ffff8800ab280000.max_mod_rpcs_in_flight=5 max_mod_rpcs_in_flight set to 5 creating 6 files ... fail_loc=0x159 launch 4 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90b.conf-sanity2/file-5 has perms 0600 OK fail_loc=0x159 launch 5 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90b.conf-sanity2/file-6 has perms 0644 OK mdt_max_mod_rpcs_in_flight is 8 umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) error: get_param: param_path 'mdt/*/max_mod_rpcs_in_flight': No such file or directory the deprecated max_mod_rpcs_per_client parameter was involved mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre mdc.lustre-MDT0000-mdc-ffff8800b6fae000.max_rpcs_in_flight=17 mdc.lustre-MDT0000-mdc-ffff8800b6fae000.max_mod_rpcs_in_flight=16 max_mod_rpcs_in_flight set to 16 creating 17 files ... fail_loc=0x159 launch 15 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90b.conf-sanity3/file-16 has perms 0600 OK fail_loc=0x159 launch 16 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90b.conf-sanity3/file-17 has perms 0644 OK error: get_param: param_path 'mdt/*/max_mod_rpcs_in_flight': No such file or directory the deprecated max_mod_rpcs_per_client parameter was involved umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
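In test 90b above the client-side limit is changed per MDC with lctl set_param, and the expected ceiling is read back from the server; because this server exposes no mdt.*.max_mod_rpcs_in_flight parameter, the framework falls back to the older module parameter (the 'deprecated max_mod_rpcs_per_client' messages). A hedged sketch of that lookup, assuming the module parameter lives under /sys/module/mdt/parameters:

  # client side: raise the cap on one MDC (values taken from the log)
  lctl set_param mdc.lustre-MDT0000-mdc-*.max_rpcs_in_flight=17
  lctl set_param mdc.lustre-MDT0000-mdc-*.max_mod_rpcs_in_flight=16
  # server side: prefer the per-target parameter, fall back to the module parameter
  lctl get_param -n mdt.*.max_mod_rpcs_in_flight 2>/dev/null ||
          cat /sys/module/mdt/parameters/max_mod_rpcs_per_client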
pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 90b (134s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 90c: check max_mod_rpcs_in_flight update limits ========================================================== 20:39:59 (1713487199) start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre max_rpcs_in_flight is 8 MDC max_mod_rpcs_in_flight is 7 mdt_max_mod_rpcs_in_flight is 8 error: get_param: param_path 'mdt/*/max_mod_rpcs_in_flight': No such file or directory the deprecated max_mod_rpcs_per_client parameter was involved mdc.lustre-MDT0000-mdc-ffff8800a8ca1800.max_mod_rpcs_in_flight=8 umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) error: get_param: param_path 'mdt/*/max_mod_rpcs_in_flight': No such file or directory the deprecated max_mod_rpcs_per_client parameter was involved mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre mdc.lustre-MDT0000-mdc-ffff8800aa843000.max_rpcs_in_flight=10 error: set_param: setting /sys/fs/lustre/mdc/lustre-MDT0000-mdc-ffff8800aa843000/max_mod_rpcs_in_flight=9: Numerical result out of range error: set_param: setting 'mdc/lustre-MDT0000-mdc-*/max_mod_rpcs_in_flight'='9': Numerical result out of range Stopping client oleg419-client.virtnet /mnt/lustre (opts:) Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre error: get_param: param_path 'mdt/*/max_mod_rpcs_in_flight': No such file or directory the deprecated max_mod_rpcs_per_client parameter was involved umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
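Test 90c probes the bounds on that tuning: even with max_rpcs_in_flight raised to 10, the client rejects max_mod_rpcs_in_flight=9 with "Numerical result out of range", because the MDT-side cap (mdt_max_mod_rpcs_in_flight, 8 on this setup) also limits what a client may request. A sketch of the two constraints, assuming the same lustre-MDT0000 target; on builds where the per-MDT parameter is not exposed (as in this log) the server side falls back to the older max_mod_rpcs_per_client module parameter, so treat the paths as release-dependent:

  # client side: value must be < max_rpcs_in_flight and <= the server cap
  lctl set_param mdc.lustre-MDT0000-mdc-*.max_rpcs_in_flight=10
  lctl set_param mdc.lustre-MDT0000-mdc-*.max_mod_rpcs_in_flight=9   # rejected: exceeds the MDT cap of 8
  lctl set_param mdc.lustre-MDT0000-mdc-*.max_mod_rpcs_in_flight=8   # within both limits
  # server side (MDS): the cap is exposed one of two ways, depending on release
  lctl get_param mdt.*.max_mod_rpcs_in_flight 2>/dev/null ||
      cat /sys/module/mdt/parameters/max_mod_rpcs_per_client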
pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 90c (50s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 90d: check one close RPC is allowed above max_mod_rpcs_in_flight ========================================================== 20:40:51 (1713487251) start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre max_mod_rcps_in_flight is 7 creating 7 files ... multiop /mnt/lustre/d90d.conf-sanity/file-close vO_c TMPPIPE=/tmp/multiop_open_wait_pipe.7504 fail_loc=0x159 launch 7 chmod in parallel ... fail_loc=0 launch 1 additional close in parallel ... 
umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 90d (64s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 91: evict-by-nid support ============= 20:41:56 (1713487316) start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: 
executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre setup single mount lustre success list nids on mdt: mdt.lustre-MDT0000.exports.0@lo mdt.lustre-MDT0000.exports.192.168.204.19@tcp mdt.lustre-MDT0000.exports.clear mdt.lustre-MDT0001.exports.0@lo mdt.lustre-MDT0001.exports.192.168.204.19@tcp mdt.lustre-MDT0001.exports.clear uuid from 192\.168\.204\.19@tcp: mdt.lustre-MDT0000.exports.192.168.204.19@tcp.uuid=bcb32681-0b11-494a-ab64-c9768eea17d1 mdt.lustre-MDT0001.exports.192.168.204.19@tcp.uuid=bcb32681-0b11-494a-ab64-c9768eea17d1 manual umount lustre on /mnt/lustre.... evict 192\.168\.204\.19@tcp oleg419-server: error: read_param: '/proc/fs/lustre/mdt/lustre-MDT0000/exports/192.168.204.19@tcp/uuid': No such device pdsh@oleg419-client: oleg419-server: ssh exited with exit code 19 oleg419-server: error: read_param: '/proc/fs/lustre/obdfilter/lustre-OST0000/exports/192.168.204.19@tcp/uuid': No such device pdsh@oleg419-client: oleg419-server: ssh exited with exit code 19 oleg419-server: error: read_param: '/proc/fs/lustre/mdt/lustre-MDT0000/exports/192.168.204.19@tcp/uuid': No such device pdsh@oleg419-client: oleg419-server: ssh exited with exit code 19 oleg419-server: error: read_param: '/proc/fs/lustre/obdfilter/lustre-OST0000/exports/192.168.204.19@tcp/uuid': No such device pdsh@oleg419-client: oleg419-server: ssh exited with exit code 19 umount lustre on /mnt/lustre..... stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
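Test 91 exercises manual eviction by NID: it lists the MDT export entries, reads the client uuid behind 192.168.204.19@tcp, unmounts the client uncleanly, and evicts that NID on the server; the later read_param failures ("No such device") simply confirm the per-NID export entries are gone afterwards. A sketch of the server-side interface being tested (parameter names can differ slightly between releases, so this is illustrative rather than the test script itself):

  # on the MDS: list exports, read the uuid for one NID, then evict that NID
  lctl list_param mdt.lustre-MDT0000.exports.*
  lctl get_param mdt.lustre-MDT0000.exports.192.168.204.19@tcp.uuid
  lctl set_param mdt.lustre-MDT0000.evict_client=nid:192.168.204.19@tcp
  # the corresponding OST-side parameter lives under obdfilter.*.evict_client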
pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 91 (82s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 92: ldev returns MGS NID correctly in command substitution ========================================================== 20:43:19 (1713487399) Host is oleg419-client.virtnet ----- /tmp/ldev.conf ----- oleg419-server - lustre-MGS0000 /dev/mapper/mds1_flakey oleg419-server - lustre-OST0000 /dev/mapper/ost1_flakey oleg419-server - lustre-OST0001 /dev/mapper/ost2_flakey oleg419-server - lustre-MDT0000 /dev/mapper/mds1_flakey oleg419-server - lustre-MDT0001 /dev/mapper/mds2_flakey --- END /tmp/ldev.conf --- ----- /tmp/nids ----- oleg419-server oleg419-server@tcp --- END /tmp/nids --- -- START OF LDEV OUTPUT -- lustre-OST0001: oleg419-server@tcp lustre-MGS0000: oleg419-server@tcp lustre-MDT0000: oleg419-server@tcp lustre-OST0000: oleg419-server@tcp lustre-MDT0001: oleg419-server@tcp --- END OF LDEV OUTPUT --- -- START OF LDEV OUTPUT -- lustre-OST0000: oleg419-server@tcp lustre-MGS0000: oleg419-server@tcp lustre-MDT0000: oleg419-server@tcp lustre-OST0001: oleg419-server@tcp lustre-MDT0001: oleg419-server@tcp --- END OF LDEV OUTPUT --- -- START OF LDEV OUTPUT -- lustre-MGS0000: oleg419-server@tcp lustre-MDT0000: oleg419-server@tcp lustre-OST0000: oleg419-server@tcp lustre-OST0001: oleg419-server@tcp lustre-MDT0001: oleg419-server@tcp --- END OF LDEV OUTPUT --- -- START OF LDEV OUTPUT -- lustre-MGS0000: oleg419-server@tcp lustre-OST0001: oleg419-server@tcp lustre-MDT0000: oleg419-server@tcp lustre-OST0000: oleg419-server@tcp lustre-MDT0001: oleg419-server@tcp --- END OF LDEV OUTPUT --- -- START OF LDEV OUTPUT -- lustre-OST0000: oleg419-server@tcp lustre-MGS0000: oleg419-server@tcp lustre-OST0001: oleg419-server@tcp lustre-MDT0000: oleg419-server@tcp lustre-MDT0001: oleg419-server@tcp --- END OF LDEV OUTPUT --- pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 92 (2s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': 
No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 93: register mulitple MDT at the same time ========================================================== 20:43:23 (1713487403) SKIP: conf-sanity test_93 needs >= 3 MDTs SKIP 93 (1s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 94: ldev outputs correct labels for file system name query ========================================================== 20:43:25 (1713487405) ----- /tmp/ldev.conf ----- oleg419-server - lustre-MGS0000 /dev/mapper/mds1_flakey oleg419-server - lustre-OST0000 /dev/mapper/ost1_flakey oleg419-server - lustre-OST0001 /dev/mapper/ost2_flakey oleg419-server - lustre-MDT0000 /dev/mapper/mds1_flakey oleg419-server - lustre-MDT0001 /dev/mapper/mds2_flakey --- END /tmp/ldev.conf --- ----- /tmp/nids ----- oleg419-server oleg419-server@tcp --- END /tmp/nids --- -- START OF LDEV OUTPUT -- lustre-MDT0000 lustre-MDT0001 lustre-MGS0000 lustre-OST0000 lustre-OST0001 --- END OF LDEV OUTPUT --- -- START OF EXPECTED OUTPUT -- lustre-MDT0000 lustre-MDT0001 lustre-MGS0000 lustre-OST0000 lustre-OST0001 --- END OF EXPECTED OUTPUT --- pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 94 (2s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 95: ldev should only allow one label filter 
========================================================== 20:43:29 (1713487409) ----- /tmp/ldev.conf ----- oleg419-server - lustre-MGS0000 /dev/mapper/mds1_flakey oleg419-server - lustre-OST0000 /dev/mapper/ost1_flakey oleg419-server - lustre-OST0001 /dev/mapper/ost2_flakey oleg419-server - lustre-MDT0000 /dev/mapper/mds1_flakey oleg419-server - lustre-MDT0001 /dev/mapper/mds2_flakey --- END /tmp/ldev.conf --- ----- /tmp/nids ----- oleg419-server oleg419-server@tcp --- END /tmp/nids --- pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 95 (2s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 96: ldev returns hostname and backend fs correctly in command sub ========================================================== 20:43:33 (1713487413) ----- /tmp/ldev.conf ----- oleg419-server - lustre-MGS0000 /dev/mapper/mds1_flakey oleg419-server - lustre-OST0000 /dev/mapper/ost1_flakey oleg419-server - lustre-OST0001 /dev/mapper/ost2_flakey oleg419-server - lustre-MDT0000 /dev/mapper/mds1_flakey oleg419-server - lustre-MDT0001 /dev/mapper/mds2_flakey --- END /tmp/ldev.conf --- ----- /tmp/nids ----- oleg419-server oleg419-server@tcp --- END /tmp/nids --- -- START OF LDEV OUTPUT -- oleg419-server-ldiskfs oleg419-server-ldiskfs oleg419-server-ldiskfs oleg419-server-ldiskfs oleg419-server-ldiskfs --- END OF LDEV OUTPUT --- -- START OF EXPECTED OUTPUT -- oleg419-server-ldiskfs oleg419-server-ldiskfs oleg419-server-ldiskfs oleg419-server-ldiskfs oleg419-server-ldiskfs --- END OF EXPECTED OUTPUT --- pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 96 (2s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == 
conf-sanity test 97: ldev returns correct ouput when querying based on role ========================================================== 20:43:36 (1713487416) ----- /tmp/ldev.conf ----- oleg419-server - lustre-MGS0000 /dev/mapper/mds1_flakey oleg419-server - lustre-OST0000 /dev/mapper/ost1_flakey oleg419-server - lustre-OST0001 /dev/mapper/ost2_flakey oleg419-server - lustre-MDT0000 /dev/mapper/mds1_flakey oleg419-server - lustre-MDT0001 /dev/mapper/mds2_flakey --- END /tmp/ldev.conf --- ----- /tmp/nids ----- oleg419-server oleg419-server@tcp --- END /tmp/nids --- MDT role -- START OF LDEV OUTPUT -- lustre-MDT0000 lustre-MDT0001 --- END OF LDEV OUTPUT --- -- START OF EXPECTED OUTPUT -- lustre-MDT0000 lustre-MDT0001 --- END OF EXPECTED OUTPUT --- OST role -- START OF LDEV OUTPUT -- lustre-OST0000 lustre-OST0001 --- END OF LDEV OUTPUT --- -- START OF EXPECTED OUTPUT -- lustre-OST0000 lustre-OST0001 --- END OF EXPECTED OUTPUT --- MGS role -- START OF LDEV OUTPUT -- lustre-MGS0000 --- END OF LDEV OUTPUT --- -- START OF EXPECTED OUTPUT -- lustre-MGS0000 --- END OF EXPECTED OUTPUT --- pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 97 (2s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 98: Buffer-overflow check while parsing mount_opts ========================================================== 20:43:40 (1713487420) start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: 
oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre setup single mount lustre success error: mount options too long umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 98 (43s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 99: Adding meta_bg option ============ 20:44:24 (1713487464) oleg419-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg419-server: /dev/mapper/ost1_flakey: catastrophic mode - not reading inode or group bitmaps params: --mgsnode=oleg419-server@tcp --fsname=lustre --ost --index=0 --param=sys.timeout=20 --backfstype=ldiskfs --device-size=200000 --mkfsoptions=\"-O ^resize_inode,meta_bg -b 4096 -E lazy_itable_init\" --reformat /dev/mapper/ost1_flakey Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from 
/home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Permanent disk data: Target: lustre:OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x62 (OST first_time update ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 device size = 4096MB formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey target name lustre:OST0000 kilobytes 200000 options -b 4096 -I 512 -q -O ^resize_inode,meta_bg,uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0000 -b 4096 -I 512 -q -O ^resize_inode,meta_bg,uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F /dev/mapper/ost1_flakey 200000k Writing CONFIGS/mountdata oleg419-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg419-server: /dev/mapper/ost1_flakey: catastrophic mode - not reading inode or group bitmaps Filesystem features: has_journal ext_attr dir_index filetype meta_bg extent flex_bg large_dir sparse_super large_file huge_file uninit_bg dir_nlink quota project PASS 99 (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 100: check lshowmount lists MGS, MDT, OST and 0@lo ========================================================== 20:44:36 (1713487476) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 
seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre setup single mount lustre success lustre-MDT0000: lustre-OST0000: umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 100 (52s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 101a: Race MDT->OST reconnection with create ========================================================== 20:45:29 (1713487529) start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: 
oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre seq.cli-lustre-OST0000-super.width=0x1ffffff - open/close 553 (time 1713487568.12 total 10.23 last 54.06) - open/close 1194 (time 1713487578.33 total 20.44 last 62.79) - open/close 1934 (time 1713487588.68 total 30.79 last 71.47) - open/close 2650 (time 1713487598.98 total 41.09 last 69.53) - open/close 3446 (time 1713487609.47 total 51.58 last 75.88) - open/close 4264 (time 1713487620.00 total 62.11 last 77.67) - open/close 5047 (time 1713487630.43 total 72.55 last 75.04) - open/close 5680 (time 1713487640.63 total 82.74 last 62.11) - open/close 6426 (time 1713487651.02 total 93.13 last 71.80) - open/close 7161 (time 1713487661.43 total 103.54 last 70.58) - open/close 7983 (time 1713487671.92 total 114.03 last 78.39) - open/close 8633 (time 1713487682.11 total 124.22 last 63.76) - open/close 9374 (time 1713487692.53 total 134.65 last 71.08) - open/close 10000 (time 1713487700.71 total 142.82 last 76.60) - open/close 10768 (time 1713487711.10 total 153.22 last 73.87) - open/close 11380 (time 1713487721.29 total 163.41 last 60.05) - open/close 12133 (time 1713487731.69 total 173.81 last 72.42) - open/close 12836 (time 1713487742.03 total 184.14 last 68.00) - open/close 13777 (time 1713487752.78 total 194.89 last 87.55) - open/close 14596 (time 1713487763.24 total 205.35 last 78.30) - open/close 15440 (time 1713487773.77 total 215.89 last 80.10) - open/close 16380 (time 1713487784.51 total 226.62 last 87.55) - open/close 18627 (time 1713487794.51 total 236.63 last 224.67) - open/close 20000 (time 1713487796.94 total 239.05 last 565.83) - open/close 25012 (time 1713487806.94 total 249.05 last 501.11) - open/close 30000 (time 1713487816.12 total 258.23 last 543.53) - open/close 35451 (time 1713487826.12 total 268.23 last 545.04) - open/close 40000 (time 1713487834.37 total 276.48 last 551.53) - open/close 45015 (time 1713487844.37 total 286.48 last 501.46) open(/mnt/lustre/d101a.conf-sanity/f101a.conf-sanity-49632) error: No space left on device total: 49632 open/close in 295.09 seconds: 168.19 ops/second - unlinked 0 (time 1713487853 ; total 0 ; last 0) - unlinked 10000 (time 1713487864 ; total 11 ; last 11) - unlinked 20000 (time 1713487874 ; total 21 ; last 10) - unlinked 30000 (time 1713487884 ; total 31 ; last 10) - unlinked 40000 (time 1713487894 ; total 41 ; last 10) unlink(/mnt/lustre/d101a.conf-sanity/f101a.conf-sanity-49632) error: No such file or directory total: 49632 unlinks in 51 seconds: 973.176453 unlinks/second umount lustre on /mnt/lustre..... 
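The open/close and unlink progress lines above are the output format of the createmany and unlinkmany helpers shipped in lustre/tests: the directory is filled with open/close creates until the deliberately small OST returns "No space left on device", then emptied again. A rough sketch of the same workload with an illustrative count (the test derives the real count from the OST size, and the helper paths depend on how the test utilities were installed):

  mkdir -p /mnt/lustre/d101a
  createmany -o /mnt/lustre/d101a/f101a- 50000   # open/close creates; stops early on ENOSPC
  unlinkmany /mnt/lustre/d101a/f101a- 50000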
Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 101a (395s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 101b: Race events DISCONNECT and ACTIVE in osp ========================================================== 20:52:06 (1713487926) start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: 
ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre fail_loc=0x80002107 fail_val=20 stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec oleg419-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0000-osc-ffff8800a9d8a800.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff8800a9d8a800.ost_server_uuid in FULL state after 0 sec umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 101b (80s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory SKIP: conf-sanity test_102 skipping excluded test 102 error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 103: rename filesystem name ========== 20:53:28 (1713488008) Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions 
oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a8b76800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a8b76800.idle_timeout=debug setting jobstats to procname_uid Setting lustre.sys.jobid_var from disable to procname_uid Waiting 90s for 'procname_uid' Updated after 2s: want 'procname_uid' got 'procname_uid' disable quota as required oleg419-server: Pool lustre.pool1 created oleg419-server: Pool lustre.lustre created oleg419-server: OST lustre-OST0000_UUID added to pool lustre.lustre Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server rename lustre to mylustre checking for existing Lustre data: found Read previous values: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: mylustre-MDT0000 Index: 0 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity cmd: tune2fs -f -L 'mylustre-MDT0000' '/dev/mapper/mds1_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: lustre-MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 
mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: mylustre-MDT0001 Index: 1 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity cmd: tune2fs -f -L 'mylustre-MDT0001' '/dev/mapper/mds2_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: mylustre-OST0000 Index: 0 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 cmd: tune2fs -f -L 'mylustre-OST0000' '/dev/mapper/ost1_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: lustre-OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: mylustre-OST0001 Index: 1 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 cmd: tune2fs -f -L 'mylustre-OST0001' '/dev/mapper/ost2_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started mylustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started mylustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-mylustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started mylustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-mylustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started mylustre-OST0001 mount mylustre on /mnt/lustre..... 
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/mylustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/mylustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/mylustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.mylustre-OST0000-osc-ffff88012aa9d800.idle_timeout=debug osc.mylustre-OST0001-osc-ffff88012aa9d800.idle_timeout=debug disable quota as required File: '/mnt/lustre/d103.conf-sanity/test-framework.sh' Size: 291280 Blocks: 576 IO Block: 4194304 regular file Device: c3aa56ceh/3282720462d Inode: 144115305952575491 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 20:54:04.000000000 -0400 Modify: 2024-04-18 20:54:04.000000000 -0400 Change: 2024-04-18 20:54:04.000000000 -0400 Birth: - Pool: mylustre.pool1 Pool: mylustre.lustre mylustre-OST0000_UUID mylustre-OST0000_UUID oleg419-server: OST mylustre-OST0001_UUID added to pool mylustre.lustre Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server rename mylustre to tfs checking for existing Lustre data: found Read previous values: Target: mylustre-MDT0000 Index: 0 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: tfs-MDT0000 Index: 0 Lustre FS: tfs Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity cmd: tune2fs -f -L 'tfs-MDT0000' '/dev/mapper/mds1_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: mylustre-MDT0001 Index: 1 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: tfs-MDT0001 Index: 1 Lustre FS: tfs Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity cmd: tune2fs -f -L 'tfs-MDT0001' '/dev/mapper/mds2_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: mylustre-OST0000 Index: 0 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: tfs-OST0000 Index: 0 Lustre FS: tfs Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 cmd: tune2fs -f -L 'tfs-OST0000' '/dev/mapper/ost1_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: 
mylustre-OST0001 Index: 1 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: tfs-OST0001 Index: 1 Lustre FS: tfs Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 cmd: tune2fs -f -L 'tfs-OST0001' '/dev/mapper/ost2_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started tfs-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started tfs-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-tfs-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started tfs-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-tfs-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started tfs-OST0001 mount tfs on /mnt/lustre..... 
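The pool membership printed in the listings above (mylustre.pool1, mylustre.lustre) and re-checked after each rename is managed with lctl on the MGS node; a minimal sketch using the pool and OST names taken from this run (the command sequence itself is an assumption about how the script builds the pools):
  lctl pool_new mylustre.pool1
  lctl pool_add mylustre.pool1 mylustre-OST0000
  lctl pool_new mylustre.lustre
  lctl pool_add mylustre.lustre mylustre-OST0001
  lctl pool_list mylustre    # produces the "Pool: ..." lines seen above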
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/tfs /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/tfs /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/tfs on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.tfs-OST0000-osc-ffff88012aa99000.idle_timeout=debug osc.tfs-OST0001-osc-ffff88012aa99000.idle_timeout=debug disable quota as required File: '/mnt/lustre/d103.conf-sanity/test-framework.sh' Size: 291280 Blocks: 576 IO Block: 4194304 regular file Device: 32e2fa5ah/853736026d Inode: 144115305952575491 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 20:54:04.000000000 -0400 Modify: 2024-04-18 20:54:04.000000000 -0400 Change: 2024-04-18 20:54:04.000000000 -0400 Birth: - Pool: tfs.pool1 Pool: tfs.lustre tfs-OST0000_UUID tfs-OST0001_UUID tfs-OST0000_UUID Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server rename tfs to lustre checking for existing Lustre data: found Read previous values: Target: tfs-MDT0000 Index: 0 Lustre FS: tfs Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity cmd: tune2fs -f -L 'lustre-MDT0000' '/dev/mapper/mds1_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: tfs-MDT0001 Index: 1 Lustre FS: tfs Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre-MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity cmd: tune2fs -f -L 'lustre-MDT0001' '/dev/mapper/mds2_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: tfs-OST0000 Index: 0 Lustre FS: tfs Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 cmd: tune2fs -f -L 'lustre-OST0000' '/dev/mapper/ost1_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: tfs-OST0001 Index: 1 Lustre FS: tfs Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: 
,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: lustre-OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 cmd: tune2fs -f -L 'lustre-OST0001' '/dev/mapper/ost2_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0001 mount lustre on /mnt/lustre..... 
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800a9d8c000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800a9d8c000.idle_timeout=debug disable quota as required PASS 103 (225s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 104a: Make sure user defined options are reflected in mount ========================================================== 20:57:15 (1713488235) mountfsopt: acl,user_xattr Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping client oleg419-client.virtnet /mnt/lustre opts:-f Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey Starting mds1: -o localrecov,noacl /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 Starting mds2: -o localrecov,noacl /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 mount lustre on /mnt/lustre..... 
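Test 104a mounts the MDTs with user-supplied options (here noacl, visible in the "Starting mds1: -o localrecov,noacl" line above) and then verifies on the client that the option really took effect; the setfacl failure on the next lines is the expected result. A rough sketch of the check (the ACL entry is a placeholder, not the script's exact one):
  # with the MDS mounted -o noacl, ACL operations on the client must fail
  setfacl -m u:nobody:rw /mnt/lustre    # expected: "Operation not supported"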
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre setfacl: /mnt/lustre: Operation not supported PASS 104a (67s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 104b: Mount uses last flock argument ========================================================== 20:58:23 (1713488303) mount lustre with opts flock,localflock on /mnt/lustre3..... Starting client: oleg419-client.virtnet: -o flock,localflock oleg419-server@tcp:/lustre /mnt/lustre3 192.168.204.119@tcp:/lustre on /mnt/lustre3 type lustre (rw,checksum,localflock,nouser_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) umount lustre on /mnt/lustre3..... Stopping client oleg419-client.virtnet /mnt/lustre3 (opts:) mount lustre with opts localflock,flock on /mnt/lustre3..... Starting client: oleg419-client.virtnet: -o localflock,flock oleg419-server@tcp:/lustre /mnt/lustre3 192.168.204.119@tcp:/lustre on /mnt/lustre3 type lustre (rw,checksum,flock,nouser_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) umount lustre on /mnt/lustre3..... Stopping client oleg419-client.virtnet /mnt/lustre3 (opts:) mount lustre with opts localflock,flock,noflock on /mnt/lustre3..... Starting client: oleg419-client.virtnet: -o localflock,flock,noflock oleg419-server@tcp:/lustre /mnt/lustre3 umount lustre on /mnt/lustre3..... Stopping client oleg419-client.virtnet /mnt/lustre3 (opts:) PASS 104b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 105: check file creation for ro and rw bind mnt pt ========================================================== 20:58:27 (1713488307) umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:-f) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local oleg419-server: rmmod: ERROR: Module lustre is in use pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 modules unloaded. 
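For the flock checks above (test 104b) the rule is that the last of the mutually exclusive flock/localflock/noflock options wins; a minimal sketch of the same checks, with the mount source and mount point taken from this run:
  mount -t lustre -o flock,localflock oleg419-server@tcp:/lustre /mnt/lustre3
  grep /mnt/lustre3 /proc/mounts    # expect "localflock" among the options
  umount /mnt/lustre3
  mount -t lustre -o localflock,flock oleg419-server@tcp:/lustre /mnt/lustre3
  grep /mnt/lustre3 /proc/mounts    # expect "flock"
  umount /mnt/lustre3
  mount -t lustre -o localflock,flock,noflock oleg419-server@tcp:/lustre /mnt/lustre3
  grep /mnt/lustre3 /proc/mounts    # expect neither flock variant
  umount /mnt/lustre3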
Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre touch: cannot touch '/tmp/d105.conf-sanity/f105.conf-sanity': Read-only file system umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
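Test 105, whose run ends just above, bind-mounts a client directory read-only and then read-write and checks whether file creation is allowed; the "Read-only file system" error from touch is the expected outcome for the ro case. A rough sketch of the idea (the directory layout and remount steps are assumptions about how the script does it):
  mkdir -p /mnt/lustre/d105.conf-sanity /tmp/d105.conf-sanity
  mount --bind /mnt/lustre/d105.conf-sanity /tmp/d105.conf-sanity
  mount -o remount,ro,bind /tmp/d105.conf-sanity
  touch /tmp/d105.conf-sanity/f105.conf-sanity   # expected to fail: Read-only file system
  mount -o remount,rw,bind /tmp/d105.conf-sanity
  touch /tmp/d105.conf-sanity/f105.conf-sanity   # expected to succeed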
pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 105 (91s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory SKIP: conf-sanity test_106 skipping SLOW test 106 error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 107: Unknown config param should not fail target mounting ========================================================== 21:00:00 (1713488400) start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: 
oleg419-server.virtnet: executing unload_modules_local modules unloaded. Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid umount lustre on /mnt/lustre..... stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server stop mds service on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 107 (158s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 108a: migrate from ldiskfs to ZFS ==== 21:02:39 (1713488559) SKIP: conf-sanity test_108a zfs only test SKIP 108a (1s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: 
error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 108b: migrate from ZFS to ldiskfs ==== 21:02:41 (1713488561) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' oleg419-server: 1+0 records in oleg419-server: 1+0 records out oleg419-server: 1048576 bytes (1.0 MB) copied, 0.00260964 s, 402 MB/s oleg419-server: 1+0 records in oleg419-server: 1+0 records out oleg419-server: 1048576 bytes (1.0 MB) copied, 0.00222288 s, 472 MB/s oleg419-server: 1+0 records in oleg419-server: 1+0 records out oleg419-server: 1048576 bytes (1.0 MB) copied, 0.00218191 s, 481 MB/s oleg419-server: 1+0 records in oleg419-server: 1+0 records out oleg419-server: 1048576 bytes (1.0 MB) copied, 0.00264822 s, 396 MB/s Permanent disk data: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x45 (MDT MGS update ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: formatting backing filesystem ldiskfs on /dev/loop0 target name lustre-MDT0000 kilobytes 200000 options -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre-MDT0000 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F /dev/loop0 200000k Writing CONFIGS/mountdata Permanent disk data: Target: lustre-MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x41 (MDT update ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp formatting backing filesystem ldiskfs on /dev/loop0 target name lustre-MDT0001 kilobytes 200000 options -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre-MDT0001 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F /dev/loop0 200000k Writing CONFIGS/mountdata Permanent disk data: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x42 (OST update ) Persistent mount opts: ,errors=remount-ro Parameters: 
mgsnode=192.168.204.119@tcp formatting backing filesystem ldiskfs on /dev/loop0 target name lustre-OST0000 kilobytes 200000 options -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E resize="4290772992",lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre-OST0000 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E resize="4290772992",lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F /dev/loop0 200000k Writing CONFIGS/mountdata Permanent disk data: Target: lustre-OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x42 (OST update ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp formatting backing filesystem ldiskfs on /dev/loop0 target name lustre-OST0001 kilobytes 200000 options -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E resize="4290772992",lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre-OST0001 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E resize="4290772992",lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F /dev/loop0 200000k Writing CONFIGS/mountdata changing server nid... mounting mdt1 from backup... mounting mdt2 from backup... mounting ost1 from backup... mounting ost2 from backup... Started LFSCK on the device lustre-MDT0000: scrub Started LFSCK on the device lustre-MDT0001: scrub Started LFSCK on the device lustre-OST0000: scrub Started LFSCK on the device lustre-OST0001: scrub mounting client... check list total 12 drwxr-xr-x 2 root root 4096 Jan 20 2018 d1 -rw-r--r-- 1 root root 0 Jan 20 2018 f0 -rw-r--r-- 1 root root 4067 Jan 20 2018 README -rw-r--r-- 1 root root 331 Jan 20 2018 regression check truncate && write check create check read && write && append verify data done. cleanup... 
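The "changing server nid..." step above updates the MGS configuration so the restored targets point at this node's NID; the framework's exact mechanism is not shown in the log, but one supported way to do it is lctl replace_nids against an MGS started without services (a sketch under that assumption, with the NID taken from this run and the device path as a placeholder for the restored MDT):
  mount -t lustre -o nosvc /dev/mapper/mds1_flakey /mnt/lustre-mds1
  lctl replace_nids lustre-MDT0000 192.168.204.119@tcp
  lctl replace_nids lustre-MDT0001 192.168.204.119@tcp
  lctl replace_nids lustre-OST0000 192.168.204.119@tcp
  lctl replace_nids lustre-OST0001 192.168.204.119@tcp
  umount /mnt/lustre-mds1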
PASS 108b (70s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 109a: test lctl clear_conf fsname ==== 21:03:53 (1713488633) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
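The "Setting lustre-MDT0000.mdd.atime_diff ..." and "Setting lustre.llite.max_read_ahead_mb ..." messages that follow come from permanent parameters written into the MGS configuration logs, which is exactly the state that clear_conf later compacts; a sketch of the equivalent commands, with the parameter names taken from the output (whether the script uses conf_param or set_param -P is an assumption):
  # run on the MGS node
  lctl conf_param lustre-MDT0000.mdd.atime_diff=62
  lctl conf_param lustre-MDT0000.mdd.atime_diff=63
  lctl conf_param lustre.llite.max_read_ahead_mb=32
  lctl conf_param lustre.llite.max_read_ahead_mb=64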
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Setting lustre-MDT0000.mdd.atime_diff from 60 to 62 Waiting 90s for '62' Setting lustre-MDT0000.mdd.atime_diff from 62 to 63 Waiting 90s for '63' Updated after 6s: want '63' got '63' Setting lustre.llite.max_read_ahead_mb from 256 to 32 Waiting 90s for '32' Updated after 9s: want '32' got '32' Setting lustre.llite.max_read_ahead_mb from 32 to 64 Waiting 90s for '64' Updated after 9s: want '64' got '64' oleg419-server: Pool lustre.pool1 created Waiting 90s for '' oleg419-server: OST lustre-OST0000_UUID added to pool lustre.pool1 oleg419-server: OST lustre-OST0000_UUID removed from pool lustre.pool1 oleg419-server: OST lustre-OST0000_UUID added to pool lustre.pool1 umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server start mds service on oleg419-server Starting mds1: -o localrecov -o nosvc /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all Start /dev/mapper/mds1_flakey without service Started lustre-MDT0000 oleg419-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg419-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server oleg419-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg419-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Destroy the created pools: pool1 lustre.pool1 oleg419-server: OST lustre-OST0000_UUID removed from pool lustre.pool1 oleg419-server: Pool lustre.pool1 destroyed Waiting 90s for 'foo' umount lustre on /mnt/lustre..... 
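The heart of test 109a sits in the output above: mds1 is remounted with -o nosvc so only the MGS is running, lctl clear_conf compacts the configuration llogs for the whole fsname, and debugfs (the "catastrophic mode" lines) is used to read CONFIGS/* from disk before and after for comparison. A condensed sketch (the clear_conf arguments follow the test names; the exact debugfs request is an assumption):
  mount -t lustre -o nosvc /dev/mapper/mds1_flakey /mnt/lustre-mds1
  lctl clear_conf lustre          # test 109b instead clears a single log, e.g. lctl clear_conf lustre-MDT0000
  debugfs -c -R 'ls -l CONFIGS' /dev/mapper/mds1_flakey   # inspect the on-disk config llogs
  umount /mnt/lustre-mds1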
Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 109a (162s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 109b: test lctl clear_conf one config ========================================================== 21:06:37 (1713488797) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the 
device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Setting lustre-MDT0000.mdd.atime_diff from 60 to 62 Waiting 90s for '62' Updated after 8s: want '62' got '62' Setting lustre-MDT0000.mdd.atime_diff from 62 to 63 Waiting 90s for '63' Updated after 5s: want '63' got '63' Setting lustre.llite.max_read_ahead_mb from 256 to 32 Waiting 90s for '32' Updated after 3s: want '32' got '32' Setting lustre.llite.max_read_ahead_mb from 32 to 64 Waiting 90s for '64' Updated after 9s: want '64' got '64' oleg419-server: Pool lustre.pool1 created Waiting 90s for '' oleg419-server: OST lustre-OST0000_UUID added to pool lustre.pool1 oleg419-server: OST lustre-OST0000_UUID removed from pool lustre.pool1 oleg419-server: OST lustre-OST0000_UUID added to pool lustre.pool1 umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server start mds service on oleg419-server Starting mds1: -o localrecov -o nosvc /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all Start /dev/mapper/mds1_flakey without service Started lustre-MDT0000 oleg419-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg419-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server oleg419-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg419-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing 
set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Destroy the created pools: pool1 lustre.pool1 oleg419-server: OST lustre-OST0000_UUID removed from pool lustre.pool1 oleg419-server: Pool lustre.pool1 destroyed umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 109b (165s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory SKIP: conf-sanity test_110 skipping ALWAYS excluded test 110 SKIP: conf-sanity test_111 skipping SLOW test 111 error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 112a: mount OST with no_create option ========================================================== 21:09:25 (1713488965) start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all 
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid start ost2 service on oleg419-server Starting ost2: -o localrecov,no_create /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg419-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0000-osc-ffff88012d0d2800.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012d0d2800.ost_server_uuid in FULL state after 0 sec oleg419-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0001-osc-ffff88012d0d2800.ost_server_uuid 50 osc.lustre-OST0001-osc-ffff88012d0d2800.ost_server_uuid in FULL state after 0 sec /mnt/lustre/f112a.conf-sanity.1 lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 obdidx objid objid group 0 67 0x43 0x280000401 UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 95248 1704 84888 2% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 95248 1540 85052 2% /mnt/lustre[MDT:1] lustre-OST0000_UUID 142216 1528 126688 2% /mnt/lustre[OST:0] lustre-OST0001_UUID 142216 1396 126820 2% /mnt/lustre[OST:1] N filesystem_summary: 284432 2924 253508 2% /mnt/lustre obdfilter.lustre-OST0001.no_create=0 stop ost2 service on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
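Test 112a, which finishes above, mounts ost2 with -o no_create (visible in its "Starting ost2" line) so the MDS will not allocate new objects on that OST: the lfs df output marks lustre-OST0001 with an "N" flag, the new file's single stripe lands on OST index 0, and the flag is cleared at the end with the obdfilter set_param shown above. A minimal sketch of the moving parts (the lfs commands are assumptions about how the check is performed):
  mount -t lustre -o localrecov,no_create /dev/mapper/ost2_flakey /mnt/lustre-ost2
  lfs df /mnt/lustre                               # the no_create OST carries the N flag
  lfs setstripe -c 1 /mnt/lustre/f112a.conf-sanity.1
  lfs getstripe /mnt/lustre/f112a.conf-sanity.1    # stripe should avoid OST0001
  lctl set_param obdfilter.lustre-OST0001.no_create=0   # re-enable object creation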
pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 112a (70s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 112b: mount MDT with no_create option ========================================================== 21:10:36 (1713489036) start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid start mds service on oleg419-server Starting mds2: -o localrecov -o no_create /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid start ost2 service on oleg419-server Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0001 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
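Test 112b, being set up here, is the MDT counterpart: mds2 is mounted with -o no_create so new files and directories are not placed on lustre-MDT0001, and the flag is cleared afterwards with the mdt.lustre-MDT0001.no_create=0 setting shown below. A short sketch of those two steps:
  mount -t lustre -o localrecov -o no_create /dev/mapper/mds2_flakey /mnt/lustre-mds2
  lctl set_param mdt.lustre-MDT0001.no_create=0    # re-enable creates on that MDT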
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre oleg419-server: oleg419-server.virtnet: executing wait_import_state (FULL|IDLE) os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 95248 1704 84888 2% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 95248 1544 85048 2% /mnt/lustre[MDT:1] N lustre-OST0000_UUID 142216 1532 126684 2% /mnt/lustre[OST:0] lustre-OST0001_UUID 142216 1532 126684 2% /mnt/lustre[OST:1] filesystem_summary: 284432 3064 253368 2% /mnt/lustre 100 0 mdt.lustre-MDT0001.no_create=0 1 0 99 1 stop ost2 service on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 112b (141s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 113: Shadow mountpoint correctly report ro/rw for mounts ========================================================== 21:12:59 (1713489179) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Setup mgs, mdt, osts Starting mds1: -o localrecov 
/dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0001 mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800ab66a000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800ab66a000.idle_timeout=debug setting jobstats to procname_uid Setting lustre.sys.jobid_var from disable to procname_uid Waiting 90s for 'procname_uid' Updated after 2s: want 'procname_uid' got 'procname_uid' disable quota as required /dev/mapper/mds1_flakey on /mnt/lustre-mds1 type lustre (rw,svname=lustre-MDT0000,mgs,osd=osd-ldiskfs,user_xattr,errors=remount-ro) /dev/mapper/mds2_flakey on /mnt/lustre-mds2 type lustre (rw,svname=lustre-MDT0001,mgsnode=192.168.204.119@tcp,osd=osd-ldiskfs) /dev/mapper/ost1_flakey on /mnt/lustre-ost1 type lustre (rw,svname=lustre-OST0000,mgsnode=192.168.204.119@tcp,osd=osd-ldiskfs) /dev/mapper/ost2_flakey on /mnt/lustre-ost2 type lustre (rw,svname=lustre-OST0001,mgsnode=192.168.204.119@tcp,osd=osd-ldiskfs) /dev/mapper/ost1_flakey on /mnt/lustre-ost1 type lustre (rw,svname=lustre-OST0000,mgsnode=192.168.204.119@tcp,osd=osd-ldiskfs) /dev/mapper/ost2_flakey on /mnt/lustre-ost2 type lustre (rw,svname=lustre-OST0001,mgsnode=192.168.204.119@tcp,osd=osd-ldiskfs) Shadow Mountpoint correctly reports rw for ldiskfs Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey 
/mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0001 mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff880136987000.idle_timeout=debug osc.lustre-OST0001-osc-ffff880136987000.idle_timeout=debug disable quota as required Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server PASS 113 (150s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: conf-sanity test_114 skipping SLOW test 114 SKIP: conf-sanity test_115 skipping excluded test 115 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 116: big size MDT support ============ 21:15:32 (1713489332) /usr/sbin/mkfs.xfs Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions meta-data=/tmp/f116.conf-sanity-mdt0 isize=512 agcount=4, agsize=67108864 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=0, sparse=0 data = bsize=4096 blocks=268435456, imaxpct=5 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 ftype=1 log =internal log bsize=4096 blocks=131072, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 Permanent disk data: Target: lustre:MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x61 (MDT first_time update ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 
mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity checking for existing Lustre data: not found formatting backing filesystem ldiskfs on /dev/loop1 target name lustre:MDT0000 kilobytes 18253611008 options -i 16777216 -b 4096 -J size=4096 -I 1024 -q -O uninit_bg,extents,dirdata,dir_nlink,quota,project,huge_file,64bit,^resize_inode,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,lazy_itable_init,packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -i 16777216 -b 4096 -J size=4096 -I 1024 -q -O uninit_bg,extents,dirdata,dir_nlink,quota,project,huge_file,64bit,^resize_inode,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,lazy_itable_init,packed_meta_blocks -F /dev/loop1 18253611008k Writing CONFIGS/mountdata Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 
(opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 116 (84s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 117: lctl get_param return errors properly ========================================================== 21:16:58 (1713489418) start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre ost.OSS.ost_io.nrs_policies=fifo oleg419-server: error: read_param: '/sys/kernel/debug/lustre/ost/OSS/ost_io/nrs_tbf_rule': No such device pdsh@oleg419-client: oleg419-server: ssh exited with exit code 19 umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2
pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2
PASS 117 (39s)
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
error: get_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2
oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
== conf-sanity test 119: writeconf on slave mdt shouldn't duplicate mdc/osp and crash ========================================================== 21:17:38 (1713489458)
oleg419-server: error: get_param: param_path 'debug': No such file or directory
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
oleg419-server: error: set_param: param_path 'debug': No such file or directory
oleg419-server: error: set_param: setting 'debug'='+config': No such file or directory
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2
oleg419-server: opening /dev/lnet failed: No such file or directory
oleg419-server: hint: the kernel modules may not be loaded
oleg419-server: IOC_LIBCFS_CLEAR_DEBUG failed: No such file or directory
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
start mds service on oleg419-server
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg419-server'
oleg419-server: oleg419-server.virtnet: executing load_modules_local
oleg419-server: Loading modules from /home/green/git/lustre-release/lustre
oleg419-server: detected 4 online CPUs by sysfs
oleg419-server: Force libcfs to create 2 CPU partitions
oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg419-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg419-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Started lustre-OST0000
oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre
stop mds service on oleg419-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server
debug_mb=84
start mds service on oleg419-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Started lustre-MDT0001
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Waiting 300s for '1'
[repeated "pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1" lines and the intermediate "Waiting ...s for '1'" countdown elided]
Waiting 0s for '1'
Update not seen after 300s: want '1' got '0'
stop mds service on oleg419-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server
debug_mb=84
start mds service on oleg419-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Started lustre-MDT0001
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Waiting 300s for '1'
[repeated "pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1" lines and the intermediate "Waiting ...s for '1'" countdown elided]
Waiting 0s for '1'
Update not seen after 300s: want '1' got '0'
stop mds service on oleg419-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server
debug_mb=84
start mds service on oleg419-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Started lustre-MDT0001
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Waiting 300s for '1'
[repeated "pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1" lines and the intermediate "Waiting ...s for '1'" countdown elided]
Waiting 0s for '1'
Update not seen after 300s: want '1' got '0'
debug_mb=21
debug_mb=21
debug=-config
Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f)
Stopping client oleg419-client.virtnet /mnt/lustre opts:-f
Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server
oleg419-server: oleg419-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg419-server'
oleg419-server: oleg419-server.virtnet: executing load_modules_local
oleg419-server: Loading modules from /home/green/git/lustre-release/lustre
oleg419-server: detected 4 online CPUs by sysfs
oleg419-server: Force libcfs to create 2 CPU partitions
oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
PASS 119 (1011s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 120: cross-target rename should not create bad symlinks ========================================================== 21:34:31 (1713490471)
start mds service on oleg419-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on oleg419-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Commit
the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8 oleg419-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg419-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 3) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 
107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 159 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 160 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 162 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 163 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 164 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 26697 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 53372 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53373 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53374 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53375 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53376 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53378 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53379 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53380 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53381 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53382 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] Pass 1: Memory used: 268k/0k (140k/129k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 359.71MB/s [Thread 0] Scanned group range [0, 3), inodes 280 Pass 2: Checking directory structure Pass 2: Memory used: 268k/0k (97k/172k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 278.47MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 268k/0k (98k/171k), time: 0.01/ 0.00/ 0.01 Pass 3: Memory used: 268k/0k (96k/173k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 4: Checking reference counts Pass 4: Memory used: 268k/0k (67k/202k), time: 0.00/ 0.00/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 268k/0k (67k/202k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 372.72MB/s 279 inodes used (0.35%, out of 79992) 5 non-contiguous files (1.8%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 24583 blocks used (49.17%, out of 50000) 0 bad blocks 1 large file 149 regular files 119 directories 0 character device files 0 block device files 0 fifos 0 links 1 symbolic link (1 fast symbolic link) 0 sockets ------------ 269 files Memory used: 268k/0k (66k/203k), time: 0.02/ 0.01/ 0.01 I/O read: 1MB, write: 0MB, rate: 52.48MB/s PASS 120 (50s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 121: failover MGS ==================== 21:35:23 (1713490523) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) start mds service on oleg419-server Starting mds1: -o localrecov 
/dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid Failing mgs on oleg419-server Stopping /mnt/lustre-mds1 (opts:) on oleg419-server 21:35:35 (1713490535) shut down Failover mgs to oleg419-server mount facets: mgs Starting mgs: /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 21:35:49 (1713490549) targets are mounted 21:35:49 (1713490549) facet_failover done pdsh@oleg419-client: oleg419-client: ssh exited with exit code 95 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mgc.*.mgs_server_uuid pdsh@oleg419-client: oleg419-client: ssh exited with exit code 95 stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid Failing mgs on oleg419-server Stopping /mnt/lustre-mds1 (opts:) on oleg419-server 21:36:13 (1713490573) shut down Failover mgs to oleg419-server mount facets: mgs Starting mgs: /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 21:36:26 (1713490586) targets are mounted 21:36:26 (1713490586) facet_failover done pdsh@oleg419-client: oleg419-client: ssh exited with exit code 95 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mgc.*.mgs_server_uuid pdsh@oleg419-client: oleg419-client: ssh exited with exit code 95 stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 121 (77s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 122a: Check OST sequence update ====== 21:36:42 (1713490602) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from 
/home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions fail_loc=0x00001e0 start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre fail_loc=0 total: 1000 open/close in 3.58 seconds: 279.52 ops/second umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
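For reference, the fail_loc lines in test 122a above come from Lustre's OBD fault-injection hook: the harness arms a fault value on the server before starting the MDS, runs the open/close workload, and clears it afterwards. A minimal sketch of that pattern, assuming lctl is on PATH; the value 0x00001e0 is simply the one this log shows, and its exact meaning is not asserted here.

    # Arm the fault-injection point on the server (value taken from the log above).
    lctl set_param fail_loc=0x00001e0

    # ... start the target and run the workload under test ...

    # Clear the fault-injection point so later tests are unaffected.
    lctl set_param fail_loc=0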
pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 122a (80s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 123aa: llog_print works with FIDs and simple names ========================================================== 21:38:04 (1713490684) start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre 1 UP mgs MGS MGS 7 - { index: 2, event: attach, device: lustre-clilov, type: lov, UUID: lustre-clilov_UUID } - { index: 3, event: setup, device: lustre-clilov, UUID: } - { index: 6, event: attach, device: lustre-clilmv, type: lmv, UUID: lustre-clilmv_UUID } - { index: 7, event: setup, device: lustre-clilmv, UUID: } - { index: 10, event: new_profile, name: lustre-client, lov: lustre-clilov, lmv: lustre-clilmv } - { index: 2, event: attach, device: lustre-clilov, type: lov, UUID: lustre-clilov_UUID } - { index: 3, event: setup, device: lustre-clilov, UUID: } - { index: 6, event: attach, device: lustre-clilmv, type: lmv, UUID: lustre-clilmv_UUID } - { index: 7, event: setup, device: lustre-clilmv, UUID: } - { index: 10, event: new_profile, name: lustre-client, lov: lustre-clilov, lmv: lustre-clilmv } - { index: 13, event: add_uuid, nid: 192.168.204.119@tcp(0x20000c0a8cc77), node: 192.168.204.119@tcp } - { index: 14, event: attach, device: lustre-MDT0000-mdc, type: mdc, UUID: lustre-clilmv_UUID } - { index: 15, event: setup, device: lustre-MDT0000-mdc, UUID: lustre-MDT0000_UUID, node: 192.168.204.119@tcp } - { index: 16, event: add_mdc, device: lustre-clilmv, mdt: lustre-MDT0000_UUID, index: 0, gen: 1, UUID: lustre-MDT0000-mdc_UUID } - { index: 22, event: add_uuid, nid: 192.168.204.119@tcp(0x20000c0a8cc77), node: 192.168.204.119@tcp } - { index: 23, event: attach, device: lustre-MDT0001-mdc, type: mdc, UUID: lustre-clilmv_UUID } - { index: 24, event: setup, device: lustre-MDT0001-mdc, UUID: lustre-MDT0001_UUID, node: 192.168.204.119@tcp } - { index: 25, event: add_mdc, device: lustre-clilmv, mdt: lustre-MDT0001_UUID, index: 1, gen: 1, UUID: lustre-MDT0001-mdc_UUID } - { index: 31, event: add_uuid, nid: 192.168.204.119@tcp(0x20000c0a8cc77), node: 192.168.204.119@tcp } - { index: 32, event: attach, device: lustre-OST0000-osc, type: osc, UUID: lustre-clilov_UUID } - { index: 33, event: setup, device: lustre-OST0000-osc, UUID: lustre-OST0000_UUID, node: 192.168.204.119@tcp } - { index: 34, event: add_osc, device: lustre-clilov, ost: lustre-OST0000_UUID, index: 0, gen: 1 } - { index: 37, event: set_timeout, num: 0x000014, parameter: sys.timeout=20 } PASS 123aa (34s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ab: llog_print params output values from set_param -P ========================================================== 21:38:41 (1713490721) PASS 123ab (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ac: llog_print with --start and --end ========================================================== 21:38:46 (1713490726) PASS 123ac (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ad: llog_print shows all records == 21:38:51 (1713490731) PASS 123ad (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ae: llog_cancel can cancel requested record ========================================================== 21:38:57 (1713490737) - { index: 11, event: set_param, device: general, parameter: osc.*.max_dirty_mb, value: 467 } - { index: 46, event: conf_param, device: lustre-OST0000-osc, parameter: osc.max_dirty_mb=467 } - { index: 2, event: attach, device: lustre-clilov, type: lov, UUID: lustre-clilov_UUID } - { index: 3, event: setup, device: lustre-clilov, UUID: } - { 
index: 6, event: attach, device: lustre-clilmv, type: lmv, UUID: lustre-clilmv_UUID } - { index: 7, event: setup, device: lustre-clilmv, UUID: } - { index: 10, event: new_profile, name: lustre-client, lov: lustre-clilov, lmv: lustre-clilmv } - { index: 13, event: add_uuid, nid: 192.168.204.119@tcp(0x20000c0a8cc77), node: 192.168.204.119@tcp } - { index: 14, event: attach, device: lustre-MDT0000-mdc, type: mdc, UUID: lustre-clilmv_UUID } - { index: 15, event: setup, device: lustre-MDT0000-mdc, UUID: lustre-MDT0000_UUID, node: 192.168.204.119@tcp } - { index: 16, event: add_mdc, device: lustre-clilmv, mdt: lustre-MDT0000_UUID, index: 0, gen: 1, UUID: lustre-MDT0000-mdc_UUID } - { index: 22, event: add_uuid, nid: 192.168.204.119@tcp(0x20000c0a8cc77), node: 192.168.204.119@tcp } - { index: 23, event: attach, device: lustre-MDT0001-mdc, type: mdc, UUID: lustre-clilmv_UUID } - { index: 24, event: setup, device: lustre-MDT0001-mdc, UUID: lustre-MDT0001_UUID, node: 192.168.204.119@tcp } - { index: 25, event: add_mdc, device: lustre-clilmv, mdt: lustre-MDT0001_UUID, index: 1, gen: 1, UUID: lustre-MDT0001-mdc_UUID } - { index: 31, event: add_uuid, nid: 192.168.204.119@tcp(0x20000c0a8cc77), node: 192.168.204.119@tcp } - { index: 32, event: attach, device: lustre-OST0000-osc, type: osc, UUID: lustre-clilov_UUID } - { index: 33, event: setup, device: lustre-OST0000-osc, UUID: lustre-OST0000_UUID, node: 192.168.204.119@tcp } - { index: 34, event: add_osc, device: lustre-clilov, ost: lustre-OST0000_UUID, index: 0, gen: 1 } - { index: 37, event: set_timeout, num: 0x000014, parameter: sys.timeout=20 } - { index: 43, event: conf_param, device: lustre-OST0000-osc, parameter: osc.max_pages_per_rpc=1024 } - { index: 46, event: conf_param, device: lustre-OST0000-osc, parameter: osc.max_dirty_mb=467 } - { index: 2, event: attach, device: lustre-clilov, type: lov, UUID: lustre-clilov_UUID } - { index: 3, event: setup, device: lustre-clilov, UUID: } - { index: 6, event: attach, device: lustre-clilmv, type: lmv, UUID: lustre-clilmv_UUID } - { index: 7, event: setup, device: lustre-clilmv, UUID: } - { index: 10, event: new_profile, name: lustre-client, lov: lustre-clilov, lmv: lustre-clilmv } - { index: 13, event: add_uuid, nid: 192.168.204.119@tcp(0x20000c0a8cc77), node: 192.168.204.119@tcp } - { index: 14, event: attach, device: lustre-MDT0000-mdc, type: mdc, UUID: lustre-clilmv_UUID } - { index: 15, event: setup, device: lustre-MDT0000-mdc, UUID: lustre-MDT0000_UUID, node: 192.168.204.119@tcp } - { index: 16, event: add_mdc, device: lustre-clilmv, mdt: lustre-MDT0000_UUID, index: 0, gen: 1, UUID: lustre-MDT0000-mdc_UUID } - { index: 22, event: add_uuid, nid: 192.168.204.119@tcp(0x20000c0a8cc77), node: 192.168.204.119@tcp } - { index: 23, event: attach, device: lustre-MDT0001-mdc, type: mdc, UUID: lustre-clilmv_UUID } - { index: 24, event: setup, device: lustre-MDT0001-mdc, UUID: lustre-MDT0001_UUID, node: 192.168.204.119@tcp } - { index: 25, event: add_mdc, device: lustre-clilmv, mdt: lustre-MDT0001_UUID, index: 1, gen: 1, UUID: lustre-MDT0001-mdc_UUID } - { index: 31, event: add_uuid, nid: 192.168.204.119@tcp(0x20000c0a8cc77), node: 192.168.204.119@tcp } - { index: 32, event: attach, device: lustre-OST0000-osc, type: osc, UUID: lustre-clilov_UUID } - { index: 33, event: setup, device: lustre-OST0000-osc, UUID: lustre-OST0000_UUID, node: 192.168.204.119@tcp } - { index: 34, event: add_osc, device: lustre-clilov, ost: lustre-OST0000_UUID, index: 0, gen: 1 } - { index: 37, event: set_timeout, num: 0x000014, 
parameter: sys.timeout=20 } - { index: 43, event: conf_param, device: lustre-OST0000-osc, parameter: osc.max_pages_per_rpc=1024 } PASS 123ae (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123af: llog_catlist can show all config files correctly ========================================================== 21:39:08 (1713490748) lctl --device MGS llog_catlist ... orig_clist: lustre-OST0000 lustre-MDT0001 lustre-client lustre-MDT0000 fail_loc=0x131b fail_val=2 new_clist: lustre-MDT0001 lustre-client lustre-MDT0000 fail_loc=0 done lctl --device lustre-MDT0000 llog_catlist ... orig_clist: [0x1:0x2:0x0] fail_loc=0x131b fail_val=2 new_clist: fail_loc=0 done fail_loc=0 PASS 123af (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ag: llog_print skips values deleted by set_param -P -d ========================================================== 21:39:16 (1713490756) PASS 123ag (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ah: del_ost cancels config log entries correctly ========================================================== 21:39:22 (1713490762) del_ost: dry run for target lustre-OST0000 config_log: lustre-MDT0001 [DRY RUN] cancel catalog 'lustre-MDT0001:38':"- { index: 38, event: conf_param, device: lustre-OST0000-osc-MDT0001, parameter: osc.max_dirty_mb=467 }" [DRY RUN] cancel catalog 'lustre-MDT0001:26':"- { index: 26, event: add_osc, device: lustre-MDT0001-mdtlov, ost: lustre-OST0000_UUID, index: 0, gen: 1 }" [DRY RUN] cancel catalog 'lustre-MDT0001:25':"- { index: 25, event: setup, device: lustre-OST0000-osc-MDT0001, UUID: lustre-OST0000_UUID, node: 192.168.204.119@tcp }" [DRY RUN] cancel catalog 'lustre-MDT0001:24':"- { index: 24, event: attach, device: lustre-OST0000-osc-MDT0001, type: osc, UUID: lustre-MDT0001-mdtlov_UUID }" del_ost: no catalog entry deleted config_log: lustre-client [DRY RUN] cancel catalog 'lustre-client:34':"- { index: 34, event: add_osc, device: lustre-clilov, ost: lustre-OST0000_UUID, index: 0, gen: 1 }" [DRY RUN] cancel catalog 'lustre-client:33':"- { index: 33, event: setup, device: lustre-OST0000-osc, UUID: lustre-OST0000_UUID, node: 192.168.204.119@tcp }" [DRY RUN] cancel catalog 'lustre-client:32':"- { index: 32, event: attach, device: lustre-OST0000-osc, type: osc, UUID: lustre-clilov_UUID }" del_ost: no catalog entry deleted config_log: lustre-MDT0000 [DRY RUN] cancel catalog 'lustre-MDT0000:41':"- { index: 41, event: conf_param, device: lustre-OST0000-osc-MDT0000, parameter: osc.max_dirty_mb=467 }" [DRY RUN] cancel catalog 'lustre-MDT0000:29':"- { index: 29, event: add_osc, device: lustre-MDT0000-mdtlov, ost: lustre-OST0000_UUID, index: 0, gen: 1 }" [DRY RUN] cancel catalog 'lustre-MDT0000:28':"- { index: 28, event: setup, device: lustre-OST0000-osc-MDT0000, UUID: lustre-OST0000_UUID, node: 192.168.204.119@tcp }" [DRY RUN] cancel catalog 'lustre-MDT0000:27':"- { index: 27, event: attach, device: lustre-OST0000-osc-MDT0000, type: osc, UUID: lustre-MDT0000-mdtlov_UUID }" del_ost: no catalog entry deleted config_log: lustre-MDT0001 cancel catalog lustre-MDT0001 log_idx 38: done cancel catalog lustre-MDT0001 log_idx 26: done cancel catalog lustre-MDT0001 log_idx 25: done cancel catalog lustre-MDT0001 log_idx 24: done del_ost: cancelled 4 catalog entries config_log: lustre-client cancel catalog lustre-client log_idx 34: done cancel catalog lustre-client log_idx 33: done cancel 
catalog lustre-client log_idx 32: done del_ost: cancelled 3 catalog entries config_log: lustre-MDT0000 cancel catalog lustre-MDT0000 log_idx 41: done cancel catalog lustre-MDT0000 log_idx 29: done cancel catalog lustre-MDT0000 log_idx 28: done cancel catalog lustre-MDT0000 log_idx 27: done del_ost: cancelled 4 catalog entries umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) 
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server fail_loc=0 PASS 123ah (92s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ai: llog_print display all non skipped records ========================================================== 21:40:56 (1713490856) start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
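The llog tests above (123aa through 123ah) drive the MGS configuration logs through lctl; the del_ost dry run in 123ah ultimately issues the same per-record cancellations it prints. A rough sketch of the commands behind that output, assuming the MGS device name and the 'lustre-client' log from this run; the record index is only an example and the exact flag spelling should be checked against the installed lctl help.

    # List the configuration logs held by the MGS (cf. test 123af).
    lctl --device MGS llog_catlist

    # Print the records of the client configuration log (cf. test 123aa).
    lctl --device MGS llog_print lustre-client

    # Cancel a single record by index (cf. test 123ae); index 34 is illustrative.
    lctl --device MGS llog_cancel lustre-client --log_idx=34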
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre oleg419-server: params: OBD_IOC_LLOG_PRINT failed: No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 - { index: 394, event: set_param, device: general, parameter: timeout, value: 129 } cleanup test 123ai timeout=20 timeout=20 PASS 123ai (67s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123F: clear and reset all parameters using set_param -F ========================================================== 21:42:05 (1713490925) oleg419-server: rm: cannot remove '/tmp/f123F.conf-sanity.yaml': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Unmounting FS Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Writeconf checking for existing Lustre data: found Read previous values: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre=MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x105 (MDT MGS writeconf ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity checking for existing Lustre data: found Read previous values: Target: lustre-MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre=MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x101 (MDT writeconf ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity checking for existing Lustre data: found Read previous values: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x102 (OST writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 checking for existing Lustre data: found Read previous values: Target: lustre-OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x62 (OST first_time update ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x162 (OST first_time update writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Remounting start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all 
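The "Writeconf" block above is the harness regenerating the configuration logs before remounting: each target is marked so its config log is rewritten on the next mount, which is what sets the "writeconf" flag in the "Permanent disk data" sections. Done by hand, that is essentially a tunefs.lustre pass over every target while the filesystem is stopped; a minimal sketch using the device paths from this run.

    # With all targets unmounted, flag each one for config-log regeneration.
    tunefs.lustre --writeconf /dev/mapper/mds1_flakey
    tunefs.lustre --writeconf /dev/mapper/mds2_flakey
    tunefs.lustre --writeconf /dev/mapper/ost1_flakey
    tunefs.lustre --writeconf /dev/mapper/ost2_flakey
    # Then remount in the usual order (MGS/MDT0000 first, then MDTs, then OSTs).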
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Setting configuration parameters This option left for backward compatibility, please use 'lctl apply_yaml' instead set_param: mdt.lustre-MDT0000.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity set_param: mdt.lustre-MDT0001.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity set_param: jobid_var=TESTNAME umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
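Test 123F above replays saved parameters with "lctl set_param -F <yaml>", which the tool itself flags as a backward-compatibility path. The permanent-parameter commands exercised earlier in this run (tests 123ab and 123ag) look roughly like the sketch below; the osc.*.max_dirty_mb name is borrowed from the values visible in this log and is used purely as an example.

    # Set a parameter permanently; the MGS records it and pushes it to matching targets.
    lctl set_param -P osc.*.max_dirty_mb=457

    # Verify the live value on a client.
    lctl get_param osc.*.max_dirty_mb

    # Delete the permanent setting again (cf. test 123ag).
    lctl set_param -P -d osc.*.max_dirty_mb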
pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 123F (83s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 123G: clear and reset all parameters using apply_yaml ========================================================== 21:43:30 (1713491010) start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre oleg419-server: rm: cannot remove '/tmp/f123G.conf-sanity.yaml': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Unmounting FS Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Writeconf checking for existing Lustre data: found Read previous values: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre=MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x105 (MDT MGS writeconf ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity checking for existing Lustre data: found Read previous values: Target: lustre-MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre=MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x101 (MDT writeconf ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity checking for existing Lustre data: found Read previous values: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x102 (OST writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 checking for existing Lustre data: found Read previous values: Target: lustre-OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x142 (OST update writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x142 (OST update writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Remounting start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL 
mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Setting configuration parameters conf_param: lustre-MDT0000.mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity conf_param: lustre-MDT0001.mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity set_param: mdt.lustre-MDT0000.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity set_param: mdt.lustre-MDT0001.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity set_param: jobid_var=TESTNAME umount lustre on /mnt/lustre..... Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 123G (105s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 124: check failover after replace_nids ========================================================== 21:45:17 (1713491117) SKIP: conf-sanity test_124 needs MDT failover setup SKIP 124 (1s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory 
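Test 123G above performs the same clear-and-reset through "lctl apply_yaml", the replacement that the 123F output points at. A minimal sketch, assuming a YAML file of saved parameter entries like the one the harness writes under /tmp in these tests; the path here is illustrative only.

    # Replay a saved configuration; each entry in the file is applied as the
    # corresponding lctl operation (set_param, conf_param, ...).
    lctl apply_yaml /tmp/params.yaml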
oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 125: check l_tunedisk only tunes OSTs and their slave devices ========================================================== 21:45:20 (1713491120) Before: mgs /dev/mapper/mds1_flakey 511 2147483647 After: mgs /dev/mapper/mds1_flakey 511 2147483647 Before: ost1 /dev/mapper/ost1_flakey 16383 2147483647 oleg419-server: l_tunedisk: increased '/sys/devices/virtual/block/dm-2/queue/max_sectors_kb' from 16383 to 16384 After: ost1 /dev/mapper/ost1_flakey 16384 2147483647 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 PASS 125 (13s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 126: mount in parallel shouldn't cause a crash ========================================================== 21:45:34 (1713491134) umount lustre on /mnt/lustre..... stop ost1 service on oleg419-server stop mds service on oleg419-server stop mds service on oleg419-server LNET unconfigure error 22: (null) unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local oleg419-server: LNET unconfigure error 22: (null) modules unloaded. 
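Test 125 above checks that l_tunedisk raises max_sectors_kb only for OST devices and their slaves, leaving the MGS/MDT device untouched. The before/after values it prints can be reproduced by hand roughly as follows; the sysfs path is the one this log reports for ost1 and is environment-specific, and the single-device invocation of l_tunedisk is assumed from its usage in this run.

    # Inspect the current request-size limit for the OST's backing device.
    cat /sys/devices/virtual/block/dm-2/queue/max_sectors_kb

    # Run the tuner against the OST device; non-OST devices are left alone.
    l_tunedisk /dev/mapper/ost1_flakey

    # Re-read the limit to confirm the increase (16383 -> 16384 in this run).
    cat /sys/devices/virtual/block/dm-2/queue/max_sectors_kb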
oleg419-server: oleg419-server.virtnet: executing load_module ../libcfs/libcfs/libcfs fail_loc=0x60d oleg419-server: oleg419-server.virtnet: executing load_modules oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 clearing fail_loc on mds1 fail_loc=0 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 PASS 126 (35s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 127: direct io overwrite on full ost ========================================================== 21:46:11 (1713491171) umount lustre on /mnt/lustre..... stop ost1 service on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. 
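Test 127 below fills an OST with buffered writes and then overwrites the same file using direct I/O. The overwrite step seen later in the log could be reproduced by hand with something along these lines (file name taken from the log, block size and count are illustrative):

    # overwrite an existing Lustre file in place with O_DIRECT
    dd if=/dev/urandom of=/mnt/lustre/f127.conf-sanity bs=1M count=123 \
        oflag=direct conv=notrunc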
start mds service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Stopping clients: /mnt/lustre (opts:) pdsh@oleg419-client: no remote hosts specified check osc.lustre-OST0000-osc-MDT0000.active target updated after 0 sec (got 1) check osc.lustre-OST0000-osc-MDT0001.active target updated after 0 sec (got 1) dd: error writing '/mnt/lustre/f127.conf-sanity': No space left on device 124+0 records in 123+0 records out 128974848 bytes (129 MB) copied, 4.48263 s, 28.8 MB/s 123+0 records in 123+0 records out 128974848 bytes (129 MB) copied, 4.22596 s, 30.5 MB/s PASS 127 (70s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 128: Force using remote logs with --nolocallogs ========================================================== 21:47:24 (1713491244) SKIP: conf-sanity test_128 need separate mgs device SKIP 128 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 129: attempt to connect an OST with the same index should fail ========================================================== 21:47:27 (1713491247) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid Format ost1: /dev/mapper/ost1_flakey Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 oleg419-server: mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: Address already in use oleg419-server: The target service's index is already in use. (/dev/mapper/ost1_flakey) pdsh@oleg419-client: oleg419-server: ssh exited with exit code 98 oleg419-server: error: set_param: param_path 'seq/cli-lustre:OST0000-super/width': No such file or directory oleg419-server: error: set_param: setting 'seq/cli-lustre:OST0000-super/width'='65536': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 Start of /dev/mapper/ost1_flakey on ost1 failed 98 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 oleg419-server: mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: Address already in use oleg419-server: The target service's index is already in use. 
(/dev/mapper/ost1_flakey) pdsh@oleg419-client: oleg419-server: ssh exited with exit code 98 oleg419-server: error: set_param: param_path 'seq/cli-lustre:OST0000-super/width': No such file or directory oleg419-server: error: set_param: setting 'seq/cli-lustre:OST0000-super/width'='65536': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 Start of /dev/mapper/ost1_flakey on ost1 failed 98 checking for existing Lustre data: found Read previous values: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x22 (OST first_time ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x122 (OST first_time writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 Writing CONFIGS/mountdata Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Stopping /mnt/lustre-ost1 (opts:) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 129 (59s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 130: re-register an MDT after writeconf ========================================================== 21:48:28 (1713491308) Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0001 mount lustre on /mnt/lustre..... 
Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800b6fa9000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800b6fa9000.idle_timeout=debug setting jobstats to procname_uid Setting lustre.sys.jobid_var from disable to procname_uid Waiting 90s for 'procname_uid' Updated after 5s: want 'procname_uid' got 'procname_uid' disable quota as required stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server checking for existing Lustre data: found Read previous values: Target: lustre-MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre=MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x101 (MDT writeconf ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Writing CONFIGS/mountdata start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 PASS 130 (53s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 131: MDT backup restore with project ID ========================================================== 21:49:23 (1713491363) oleg419-server: debugfs 1.46.2.wc5 (26-Mar-2022) Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: mount.lustre: according to /etc/mtab /dev/mapper/mds1_flakey is already mounted on /mnt/lustre-mds1 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 17 Start of /dev/mapper/mds1_flakey on mds1 failed 17 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: mount.lustre: according to /etc/mtab /dev/mapper/mds2_flakey is already mounted on /mnt/lustre-mds2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 17 Start of /dev/mapper/mds2_flakey on mds2 failed 17 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 oleg419-server: mount.lustre: according to /etc/mtab /dev/mapper/ost1_flakey is already mounted on /mnt/lustre-ost1 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 17 
seq.cli-lustre-OST0000-super.width=65536 Start of /dev/mapper/ost1_flakey on ost1 failed 17 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 oleg419-server: mount.lustre: according to /etc/mtab /dev/mapper/ost2_flakey is already mounted on /mnt/lustre-ost2 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 17 seq.cli-lustre-OST0001-super.width=65536 Start of /dev/mapper/ost2_flakey on ost2 failed 17 mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre mount.lustre: according to /etc/mtab oleg419-server@tcp:/lustre is already mounted on /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800b6fa9000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800b6fa9000.idle_timeout=debug disable quota as required striped dir -i1 -c2 -H crush2 /mnt/lustre/d131.conf-sanity total: 512 open/close in 2.10 seconds: 243.73 ops/second striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d131.conf-sanity.inherit total: 128 open/close in 0.52 seconds: 247.53 ops/second Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server file-level backup/restore on mds1:/dev/mapper/mds1_flakey backup data reformat new device Format mds1: /dev/mapper/mds1_flakey restore data remove recovery logs removed '/mnt/lustre-brpt/CATALOGS' file-level backup/restore on mds2:/dev/mapper/mds2_flakey backup data reformat new device Format mds2: /dev/mapper/mds2_flakey restore data remove recovery logs removed '/mnt/lustre-brpt/CATALOGS' Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: 
oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0001 mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800af577800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800af577800.idle_timeout=debug setting jobstats to procname_uid Setting lustre.sys.jobid_var from disable to procname_uid Waiting 90s for 'procname_uid' Updated after 2s: want 'procname_uid' got 'procname_uid' disable quota as required PASS 131 (133s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 132: hsm_actions processed after failover ========================================================== 21:51:39 (1713491499) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping client oleg419-client.virtnet /mnt/lustre opts:-f Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server checking for existing Lustre data: found Read previous values: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x45 (MDT MGS update ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity mdt.hsm_control=enabled Writing CONFIGS/mountdata start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing 
set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server PASS 132 (71s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 133: stripe QOS: free space balance in a pool ========================================================== 21:52:51 (1713491571) SKIP: conf-sanity test_133 needs >= 4 OSTs SKIP 133 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 134: check_iam works without faults == 21:52:54 (1713491574) count 1 NO ERRORS dd if=/dev/urandom of=/tmp/d134.conf-sanity/oi.16.61 bs=2 conv=notrunc count=1 seek=4 Filesize 8192, blocks count 2 Root format LFIX,Idle blocks block number 0 keysize 10431, recsize 8, ptrsize 4, indirect_levels 0 Too large record + key or too small block, 10443, 4096 Root node is insane FINISHED WITH ERRORS 255 debugfs 1.46.2.wc5 (26-Mar-2022) /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps Filesize 8192, blocks count 2 Root format LFIX,Idle blocks block number 0 keysize 16, recsize 8, ptrsize 4, indirect_levels 0 count 2, limit 203 key:00000000000000000000000000000000, ptr: 1 Block 1,FIX leaf,Leaf block, count 1, limit 170 count 1 NO ERRORS dd if=/dev/urandom of=/tmp/d134.conf-sanity/oi.16.62 bs=2 conv=notrunc count=1 seek=17 Filesize 8192, blocks count 2 Root format LFIX,Idle blocks block number 0 keysize 16, recsize 8, ptrsize 4, indirect_levels 0 count 2, limit 203 key:00000000000000000000000000000000, ptr: 1 Block 1,FIX leaf,Leaf block, count 1, limit 170 count 1 NO ERRORS 0 debugfs 1.46.2.wc5 (26-Mar-2022) /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps Filesize 8192, blocks count 2 Root format LFIX,Idle blocks block number 0 keysize 16, recsize 8, ptrsize 4, indirect_levels 0 count 2, limit 203 key:00000000000000000000000000000000, ptr: 1 Block 1,FIX leaf,Leaf block, count 1, limit 170 count 1 NO ERRORS dd if=/dev/urandom of=/tmp/d134.conf-sanity/oi.16.63 bs=2 conv=notrunc count=1 seek=35 Filesize 8192, blocks count 2 Root format LFIX,Idle blocks block number 0 keysize 16, recsize 8, ptrsize 4, indirect_levels 0 count 2, limit 203 key:00000000000000000000000000000000, ptr: 1 Block 1,FIX leaf,Leaf block, count 1, limit 170 count 1 NO ERRORS 0 PASS 134 (25s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 135: check the behavior when changelog is wrapped around ========================================================== 21:53:21 (1713491601) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: 
/dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre oleg419-client: fail_loc=0x1312 oleg419-client: fail_val=5 oleg419-server: fail_loc=0x1312 oleg419-server: fail_val=5 striped dir -i0 -c1 -H crush2 /mnt/lustre/d135.conf-sanity mdd.lustre-MDT0000.changelog_mask=ALL mdd.lustre-MDT0001.changelog_mask=ALL mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl1 cl1' Wrap arround changelog catalog - open/close 2399 (time 1713491655.32 total 10.00 last 239.89) total: 4500 open/close in 18.80 seconds: 239.32 ops/second lustre-MDT0000: clear the changelog for cl1 to record #12998 - /unlink 4478 (time 1713491675.33 total 10.00 last 447.70) total: 4500 /unlink in 10.05 seconds: 447.70 ops/second lustre-MDT0000: clear the changelog for cl1 to record #25998 - open/close 2360 (time 1713491686.69 total 10.00 last 235.96) total: 4500 open/close in 18.82 seconds: 239.08 ops/second total: 4500 /unlink in 9.99 seconds: 450.63 ops/second lustre-MDT0000: clear the changelog for cl1 to record #38998 - open/close 2400 (time 1713491718.10 total 10.00 last 239.94) total: 4500 open/close in 18.78 seconds: 239.64 ops/second lustre-MDT0000: clear the changelog for cl1 to record #51998 total: 4500 /unlink in 9.84 seconds: 457.30 ops/second kill changelog reader /home/green/git/lustre-release/lustre/tests/test-framework.sh: line 10632: 7492 Terminated coproc COPROC $LFS changelog --follow $service (wd: ~) lustre-MDT0001: clear the changelog for cl1 of all records lustre-MDT0001: Deregistered changelog user #1 lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0000: Deregistered changelog user #1 Cleanup test_135 umount lustre on /mnt/lustre..... 
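The changelog-wraparound run above drives the same interfaces an administrator would use. A condensed sketch of that lifecycle, assuming MDT0000 and the cl1 user id returned by registration, is:

    # register a changelog consumer on the MDT and note the returned id (e.g. cl1)
    lctl --device lustre-MDT0000 changelog_register
    # dump records, then acknowledge everything consumed so far
    lfs changelog lustre-MDT0000
    lfs changelog_clear lustre-MDT0000 cl1 0
    # deregister when done so old records can be purged
    lctl --device lustre-MDT0000 changelog_deregister cl1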
Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. PASS 135 (173s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 136: don't panic with bad obdecho setup ========================================================== 21:56:16 (1713491776) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 
seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre oleg419-server: error: setup: Invalid argument pdsh@oleg419-client: oleg419-server: ssh exited with exit code 22 oleg419-server: error: test_mkdir: No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg419-server: 
oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 136 (127s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 140: remove_updatelog script actions ========================================================== 21:58:25 (1713491905) pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre striped dir -i0 -c2 -H crush2 /mnt/lustre/d140.conf-sanity stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Dry run was requested, no changes will be applied Scan update_log at '/mnt/lustre-mds2': Selected MDTS: 0 1 Processing MDT0 llog catalog [0x240000401:0x1:0x0] ... rm -f /mnt/lustre-mds2/update_log_dir/[0x240000bd0:0x2:0x0] rm -f /mnt/lustre-mds2/update_log_dir/[0x240000bd0:0x3:0x0] > /mnt/lustre-mds2/update_log_dir/[0x240000401:0x1:0x0] Processing MDT1 llog catalog [0x240000400:0x1:0x0] ... remove_updatelog: /mnt/lustre-mds2/update_log_dir/[0x240000400:0x1:0x0] is too small. > /mnt/lustre-mds2/update_log_dir/[0x240000400:0x1:0x0] Dry run was requested, no changes will be applied Scan update_log at '/mnt/lustre-mds2': Selected MDTS: 1 0 Processing MDT1 llog catalog [0x240000400:0x1:0x0] ... remove_updatelog: /mnt/lustre-mds2/update_log_dir/[0x240000400:0x1:0x0] is too small. > /mnt/lustre-mds2/update_log_dir/[0x240000400:0x1:0x0] Processing MDT0 llog catalog [0x240000401:0x1:0x0] ... rm -f /mnt/lustre-mds2/update_log_dir/[0x240000bd0:0x2:0x0] rm -f /mnt/lustre-mds2/update_log_dir/[0x240000bd0:0x3:0x0] > /mnt/lustre-mds2/update_log_dir/[0x240000401:0x1:0x0] Scan update_log at '/mnt/lustre-mds2': Selected MDTS: 0 Processing MDT0 llog catalog [0x240000401:0x1:0x0] ... 
rm -f /mnt/lustre-mds2/update_log_dir/[0x240000bd0:0x2:0x0] rm -f /mnt/lustre-mds2/update_log_dir/[0x240000bd0:0x3:0x0] > /mnt/lustre-mds2/update_log_dir/[0x240000401:0x1:0x0] start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 1 sec oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 4 sec Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:) Stopping client oleg419-client.virtnet /mnt/lustre opts: Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: 
executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 140 (235s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 150: test setting max_cached_mb to a % ========================================================== 22:02:22 (1713492142) start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre llite.lustre-ffff8800abd6e000.max_cached_mb=100% llite.lustre-ffff8800abd6e000.max_cached_mb= users: 5 max_cached_mb: 3730 used_mb: 0 unused_mb: 3730 reclaim_count: 0 max_read_ahead_mb: 256 used_read_ahead_mb: 0 total ram mb: 3730 llite.lustre-ffff8800abd6e000.max_cached_mb=50% llite.lustre-ffff8800abd6e000.max_cached_mb= users: 5 max_cached_mb: 1865 used_mb: 0 unused_mb: 1865 reclaim_count: 0 max_read_ahead_mb: 256 used_read_ahead_mb: 0 error: set_param: setting /sys/kernel/debug/lustre/llite/lustre-ffff8800abd6e000/max_cached_mb=105%: Numerical result out of range error: set_param: setting 'llite/*/max_cached_mb'='105%': Numerical result out of range llite.lustre-ffff8800abd6e000.max_cached_mb=0% llite.lustre-ffff8800abd6e000.max_cached_mb= users: 5 max_cached_mb: 64 used_mb: 0 unused_mb: 64 reclaim_count: 0 max_read_ahead_mb: 256 used_read_ahead_mb: 0 PASS 150 (23s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 151: damaged local config doesn't prevent mounting ========================================================== 22:02:47 (1713492167) umount lustre on /mnt/lustre..... 
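Test 150 above exercises the percentage form of the client cache limit; the same checks can be repeated interactively on any client:

    # cap the client page cache at half of RAM, then read the result back
    lctl set_param llite.*.max_cached_mb=50%
    lctl get_param llite.*.max_cached_mb
    # values above 100% are rejected with ERANGE, as seen in the log
    lctl set_param llite.*.max_cached_mb=105%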
Stopping client oleg419-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server unloading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing unload_modules_local modules unloaded. Damage ost1 local config log oleg419-server: debugfs 1.46.2.wc5 (26-Mar-2022) start ost1 service on oleg419-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 oleg419-server: mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: No such file or directory oleg419-server: Is the MGS specification correct? oleg419-server: Is the filesystem name correct? oleg419-server: If upgrading, is the copied client log valid? (see upgrade docs) pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 oleg419-server: error: set_param: param_path 'seq/cli-lustre-OST0000-super/width': No such file or directory oleg419-server: error: set_param: setting 'seq/cli-lustre-OST0000-super/width'='65536': No such file or directory pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2 Start of /dev/mapper/ost1_flakey on ost1 failed 2 start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server oleg419-server: oleg419-server.virtnet: executing set_hostid 
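Test 151 above damages the OST's locally cached configuration log and verifies that the target still mounts once the MGS is reachable. The local copies live under CONFIGS/ on the ldiskfs target; assuming the device name from the log and an unmounted target, they can be listed offline with debugfs:

    # list the locally cached config llogs on the OST device (read-only inspection)
    debugfs -c -R 'ls -l CONFIGS' /dev/mapper/ost1_flakey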
Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server PASS 151 (186s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 152: seq allocation error in OSP ===== 22:05:55 (1713492355) Checking servers environments Checking clients oleg419-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started 
lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 mount lustre on /mnt/lustre..... Starting client: oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Starting client oleg419-client.virtnet: -o user_xattr,flock oleg419-server@tcp:/lustre /mnt/lustre Started clients oleg419-client.virtnet: 192.168.204.119@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800b6011800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800b6011800.idle_timeout=debug setting jobstats to procname_uid Setting lustre.sys.jobid_var from disable to procname_uid Waiting 90s for 'procname_uid' Updated after 5s: want 'procname_uid' got 'procname_uid' disable quota as required striped dir -i1 -c1 -H fnv_1a_64 /mnt/lustre/d152.conf-sanity ADD OST3 Permanent disk data: Target: lustre:OST0003 Index: 3 Lustre FS: lustre Mount type: ldiskfs Flags: 0x62 (OST first_time update ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.204.119@tcp sys.timeout=20 formatting backing filesystem ldiskfs on /dev/loop0 target name lustre:OST0003 kilobytes 200000 options -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0003 -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F /dev/loop0 200000k Writing CONFIGS/mountdata fail_loc=0x80002109 fail_val=2 START OST3 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Starting ost3: -o localrecov /dev/mapper/ost3_flakey /mnt/lustre-ost3 STOP OST3 seq.cli-lustre-OST0003-super.width=65536 Stopping /mnt/lustre-ost3 (opts:) on oleg419-server oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all 4086 pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /tmp/lustre-ost3 4086 Started lustre-OST0003 fail_loc=0 START OST3 again Starting ost3: -o localrecov /dev/mapper/ost3_flakey /mnt/lustre-ost3 seq.cli-lustre-OST0003-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Started lustre-OST0003 /mnt/lustre/d152.conf-sanity/f152.conf-sanity-2 lmm_magic: 0x0BD10BD0 lmm_seq: 0x240000bd0 lmm_object_id: 0x3 lmm_fid: [0x240000bd0:0x3:0x0] lmm_stripe_count: 3 lmm_stripe_size: 4194304 lmm_pattern: 
raid0 lmm_layout_gen: 0 lmm_stripe_offset: 3 obdidx objid objid group 3 2 0x2 0x300000bd0 0 35 0x23 0x280000400 1 3 0x3 0x2c0000400 Stopping /mnt/lustre-ost3 (opts:) on oleg419-server PASS 152 (72s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 153a: bypass invalid NIDs quickly ==== 22:07:09 (1713492429) Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f) Stopping client oleg419-client.virtnet /mnt/lustre opts:-f Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg419-server oleg419-server: oleg419-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg419-server' oleg419-server: oleg419-server.virtnet: executing load_modules_local oleg419-server: Loading modules from /home/green/git/lustre-release/lustre oleg419-server: detected 4 online CPUs by sysfs oleg419-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg419-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg419-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg419-server: oleg419-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg419-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg419-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server stop mds service on oleg419-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server start mds service on oleg419-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey 
oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg419-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg419-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg419-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg419-server: oleg419-server.virtnet: executing set_default_debug -1 all
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 1
Started lustre-OST0000
oleg419-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
waiting for mount ...
"192.168.204.119@tcp": { connects: 1, replied: 1, uptodate: true, sec_ago: 5 }
"192.168.252.112@tcp": { connects: 0, replied: 0, uptodate: false, sec_ago: never }
"10.252.252.113@tcp": { connects: 0, replied: 0, uptodate: false, sec_ago: never }
"192.168.204.119@tcp": { connects: 0, replied: 0, uptodate: false, sec_ago: never }
setup single mount lustre success
umount lustre on /mnt/lustre.....
stop ost1 service on oleg419-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg419-server
stop mds service on oleg419-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg419-server
stop mds service on oleg419-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg419-server
unloading modules on: 'oleg419-server'
oleg419-server: oleg419-server.virtnet: executing unload_modules_local
modules unloaded.
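The per-NID dump above (connects/replied/uptodate) shows the client reaching the one live MGS NID while the other listed NIDs never answer; test 153a checks that the mount does not stall on them. A rough sketch of the idea follows; the exact mount syntax and the parameter path are assumptions, not the framework's own helper calls (192.168.252.112@tcp and 10.252.252.113@tcp are the unreachable NIDs from this run).

	# list the dead NIDs ahead of the live MGS NID; the client should skip them quickly
	mount -t lustre \
		192.168.252.112@tcp:10.252.252.113@tcp:192.168.204.119@tcp:/lustre \
		/mnt/lustre
	# inspect per-connection state on the client (parameter path is a guess;
	# mdc.*.import and osc.*.import expose the same kind of import state)
	lctl get_param mgc.*.import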
pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2
pdsh@oleg419-client: oleg419-client: ssh exited with exit code 2
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2
PASS 153a (230s)
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
error: get_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
oleg419-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2
oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg419-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
== conf-sanity test 802a: simulate readonly device ======= 22:11:01 (1713492661)
SKIP: conf-sanity test_802a ZFS specific test
SKIP 802a (1s)
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
oleg419-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg419-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
Stopping clients: oleg419-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg419-client.virtnet /mnt/lustre2 (opts:-f)
pdsh@oleg419-client: oleg419-server: ssh exited with exit code 2
oleg419-server: oleg419-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg419-server'
oleg419-server: oleg419-server.virtnet: executing load_modules_local
oleg419-server: Loading modules from /home/green/git/lustre-release/lustre
oleg419-server: detected 4 online CPUs by sysfs
oleg419-server: Force libcfs to create 2 CPU partitions
oleg419-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg419-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
== conf-sanity test complete, duration 6903 sec ========== 22:11:27 (1713492687)
=== conf-sanity: start cleanup 22:11:28 (1713492688) ===
=== conf-sanity: finish cleanup 22:11:28 (1713492688) ===
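The repeated debug_raw_pointers errors above come from the harness toggling a parameter that this build does not expose; the surrounding tests still report PASS/SKIP, so they do not fail the run. A guarded form would silence them (sketch only; the parameter name is taken from the log, the guard itself is an assumption about how one might suppress the noise):

	# only toggle the parameter if this build actually provides it
	if lctl get_param -n debug_raw_pointers >/dev/null 2>&1; then
		lctl set_param debug_raw_pointers=Y
	else
		echo "debug_raw_pointers not supported by this build, skipping"
	fi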