-----============= acceptance-small: conf-sanity ============----- Thu Apr 18 03:23:30 EDT 2024
excepting tests: 102 106 115 32newtarball 110 41c
skipping tests SLOW=no: 45 69 106 111 114
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f)
Stopping client oleg228-client.virtnet /mnt/lustre opts:-f
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server
oleg228-server: oleg228-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg228-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50
oleg228-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
stop ost1 service on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 76a: set permanent params with lctl across mounts ========================================================== 03:25:02 (1713425102)
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Change MGS params
max_dirty_mb: 467
new_max_dirty_mb: 457
Waiting 90s for '457'
Updated after 3s: want '457' got '457'
457
Check the value is stored after remount
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping client oleg228-client.virtnet /mnt/lustre opts:
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
Checking servers environments
Checking clients oleg228-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost2_flakey
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012a5f1000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012a5f1000.idle_timeout=debug
disable quota as required
Change OST params
client_cache_count: 128
new_client_cache_count: 256
Waiting 90s for '256'
256
Check the value is stored after remount
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping client oleg228-client.virtnet /mnt/lustre opts:
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server
Checking servers environments
Checking clients oleg228-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012e714000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012e714000.idle_timeout=debug
disable quota as required
256
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping client oleg228-client.virtnet /mnt/lustre opts:
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server
PASS 76a (164s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 76b: verify params log setup correctly ========================================================== 03:27:48 (1713425268)
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Checking servers environments
Checking clients oleg228-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012e711800.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012e711800.idle_timeout=debug
disable quota as required
mgs.MGS.live.params=
fsname: params
flags: 0x20  gen: 2
Secure RPC Config Rules:
imperative_recovery_state:
    state: startup
    nonir_clients: 0
    nidtbl_version: 2
    notify_duration_total: 0.000000000
    notify_duation_max: 0.000000000
    notify_count: 0
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping client oleg228-client.virtnet /mnt/lustre opts:
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server
PASS 76b (64s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 76c: verify changelog_mask is applied with lctl set_param -P ========================================================== 03:28:54 (1713425334)
Checking servers environments
Checking clients oleg228-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800a941d800.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800a941d800.idle_timeout=debug
disable quota as required
Change changelog_mask
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Check the value is stored after mds remount
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 20 sec
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping client oleg228-client.virtnet /mnt/lustre opts:
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server
PASS 76c (105s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 76d: verify llite.*.xattr_cache can be set by 'lctl set_param -P' correctly ========================================================== 03:30:41 (1713425441)
Checking servers environments
Checking clients oleg228-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b608a000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b608a000.idle_timeout=debug
disable quota as required
lctl set_param -P llite.*.xattr_cache=0
Waiting 90s for '0'
Updated after 2s: want '0' got '0'
Check llite.*.xattr_cache on client /mnt/lustre
umount lustre on /mnt/lustre.....
Stopping client oleg228-client.virtnet /mnt/lustre (opts:)
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Check llite.*.xattr_cache on the new client /mnt/lustre2
mount lustre on /mnt/lustre2.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre2
umount lustre on /mnt/lustre2.....
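The 76d check above follows the standard persistent-parameter pattern: record the value in the MGS params log with `lctl set_param -P`, then confirm that both the current client and a freshly mounted one report it. A minimal admin sketch of that pattern, assuming a running Lustre filesystem (the MGS NID and mount points below are illustrative, not taken from this run):

```shell
# Record the setting in the MGS params log so every client,
# present and future, applies it (assumes MGS is reachable).
lctl set_param -P llite.*.xattr_cache=0

# The already-mounted client should converge to the new value.
lctl get_param -n llite.*.xattr_cache

# A client mounted afterwards picks the value up from the params log too.
mount -t lustre mgsnode@tcp:/lustre /mnt/lustre2
lctl get_param -n llite.*.xattr_cache
umount /mnt/lustre2
```

Unlike a plain `lctl set_param`, the `-P` form survives client and server remounts, which is exactly what tests 76a-76d verify.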
Stopping client oleg228-client.virtnet /mnt/lustre2 (opts:)
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping client oleg228-client.virtnet /mnt/lustre opts:
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server
PASS 76d (60s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 77: comma-separated MGS NIDs and failover node NIDs ========================================================== 03:31:43 (1713425503)
SKIP: conf-sanity test_77 mixed loopback and real device not working
SKIP 77 (0s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 78: run resize2fs on MDT and OST filesystems ========================================================== 03:31:45 (1713425505)
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f)
oleg228-server: oleg228-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format ost1: /dev/mapper/ost1_flakey
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=131072
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
create test files
UUID                 1K-blocks     Used  Available Use% Mounted on
lustre-MDT0000_UUID      83240     1616      73832   3% /mnt/lustre[MDT:0]
lustre-OST0000_UUID     124712     1388     110724   2% /mnt/lustre[OST:0]
filesystem_summary:     124712     1388     110724   2% /mnt/lustre
UUID                    Inodes    IUsed      IFree IUse% Mounted on
lustre-MDT0000_UUID      72000      272      71728    1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      45008      302      44706    1% /mnt/lustre[OST:0]
filesystem_summary:      44978      272      44706    1% /mnt/lustre
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0409613 s, 25.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0409726 s, 25.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0449209 s, 23.3 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0412142 s, 25.4 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0522391 s, 20.1 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0466135 s, 22.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0417624 s, 25.1 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0626793 s, 16.7 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0433001 s, 24.2 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0468774 s, 22.4 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0361915 s, 29.0 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0414186 s, 25.3 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0421619 s, 24.9 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0400909 s, 26.2 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0429159 s, 24.4 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0432439 s, 24.2 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0475719 s, 22.0 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0411891 s, 25.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0453935 s, 23.1 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0461819 s, 22.7 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0411977 s, 25.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0382204 s, 27.4 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0438569 s, 23.9 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0551267 s, 19.0 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0506343 s, 20.7 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0424737 s, 24.7 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0450485 s, 23.3 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0388237 s, 27.0 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0428078 s, 24.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0368797 s, 28.4 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0404225 s, 25.9 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0495397 s, 21.2 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0371127 s, 28.3 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0425558 s, 24.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0445136 s, 23.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0458253 s, 22.9 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0391677 s, 26.8 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0354527 s, 29.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0362133 s, 29.0 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0357359 s, 29.3 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0409773 s, 25.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0530568 s, 19.8 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0381339 s, 27.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0407801 s, 25.7 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0395454 s, 26.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0448858 s, 23.4 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0428414 s, 24.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0429771 s, 24.4 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0376823 s, 27.8 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0494715 s, 21.2 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0562856 s, 18.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0410928 s, 25.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0421298 s, 24.9 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0443637 s, 23.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0640568 s, 16.4 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0416667 s, 25.2 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.044977 s, 23.3 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0385606 s, 27.2 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0405512 s, 25.9 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0571032 s, 18.4 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0450489 s, 23.3 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0394075 s, 26.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0450586 s, 23.3 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0358733 s, 29.2 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0453837 s, 23.1 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0354705 s, 29.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0394327 s, 26.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0515767 s, 20.3 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0450056 s, 23.3 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.036327 s, 28.9 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0447118 s, 23.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0396217 s, 26.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0398659 s, 26.3 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.053488 s, 19.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.048554 s, 21.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0589507 s, 17.8 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0471647 s, 22.2 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0396727 s, 26.4 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0381624 s, 27.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0390685 s, 26.8 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0362472 s, 28.9 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0438654 s, 23.9 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0421373 s, 24.9 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0407377 s, 25.7 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0638485 s, 16.4 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0472476 s, 22.2 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.40206 s, 2.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0464061 s, 22.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0453475 s, 23.1 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0592898 s, 17.7 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.040848 s, 25.7 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0412693 s, 25.4 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.039524 s, 26.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0418766 s, 25.0 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0459631 s, 22.8 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.053523 s, 19.6 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0428633 s, 24.5 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0455416 s, 23.0 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0626151 s, 16.7 MB/s
1+0 records in  1+0 records out  1048576 bytes (1.0 MB) copied, 0.0865074 s, 12.1 MB/s
umount lustre on /mnt/lustre.....
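Each transfer above is a single 1 MiB `dd` write into the mounted filesystem. The same write can be reproduced locally to see the reported record counts and byte total; the temp-file path here is illustrative, the actual test writes into the Lustre mount:

```shell
# Reproduce one of the 1 MiB writes above against a local temp file.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=1 2>/dev/null

# bs=1M count=1 always yields "1+0 records in / 1+0 records out"
# and a file of exactly 1048576 bytes.
stat -c %s "$tmpfile"
rm -f "$tmpfile"
```

The throughput figure dd prints (e.g. 25.6 MB/s above) is just bytes copied divided by wall-clock time, so outliers like the 2.6 MB/s transfer usually reflect a momentary stall rather than a different write size.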
Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8 oleg228-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg228-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 3) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 
0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: 
increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 
[Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 24033 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 48044 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48045 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48046 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48047 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48048 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48050 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48051 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48052 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48053 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48054 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] Pass 1: Memory used: 272k/0k (141k/132k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 280.27MB/s [Thread 0] Scanned group range [0, 3), inodes 373 Pass 2: Checking directory structure Pass 2: Memory used: 272k/0k (95k/178k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 
303.95MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 272k/0k (95k/178k), time: 0.01/ 0.01/ 0.00 Pass 3A: Memory used: 272k/0k (95k/178k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 272k/0k (93k/180k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 4132.23MB/s Pass 4: Checking reference counts Pass 4: Memory used: 272k/0k (67k/206k), time: 0.00/ 0.00/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 272k/0k (67k/206k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 282.49MB/s 372 inodes used (0.52%, out of 72000) 4 non-contiguous files (1.1%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 22546 blocks used (50.10%, out of 45000) 0 bad blocks 1 large file 244 regular files 118 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 362 files Memory used: 272k/0k (66k/207k), time: 0.02/ 0.01/ 0.00 I/O read: 1MB, write: 1MB, rate: 45.81MB/s e2fsck -d -v -t -t -f -y /dev/mapper/ost1_flakey -m8 oleg228-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg228-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 2) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 
badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] group 2 finished [Thread 0] Pass 1: Memory used: 264k/0k (132k/133k), time: 0.01/ 0.00/ 0.01 [Thread 0] Pass 1: I/O read: 5MB, write: 0MB, rate: 590.53MB/s [Thread 0] Scanned group range [0, 2), inodes 398 Pass 2: Checking directory structure Pass 2: Memory used: 264k/0k (87k/178k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 319.69MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 264k/0k (92k/173k), time: 0.02/ 0.00/ 0.01 Pass 3A: Memory used: 264k/0k (92k/173k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 264k/0k (84k/181k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 5747.13MB/s Pass 4: Checking reference counts Pass 4: Memory used: 264k/0k 
(65k/200k), time: 0.00/ 0.00/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 264k/0k (65k/200k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 355.11MB/s 398 inodes used (0.88%, out of 45008) 2 non-contiguous files (0.5%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 Extent depth histogram: 392 37721 blocks used (83.82%, out of 45000) 0 bad blocks 1 large file 216 regular files 172 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 388 files Memory used: 264k/0k (64k/201k), time: 0.02/ 0.01/ 0.01 I/O read: 2MB, write: 1MB, rate: 85.68MB/s oleg228-server: resize2fs 1.46.2.wc5 (26-Mar-2022) Resizing the filesystem on /dev/mapper/mds1_flakey to 640000 (4k) blocks. The filesystem on /dev/mapper/mds1_flakey is now 640000 (4k) blocks long. oleg228-server: resize2fs 1.46.2.wc5 (26-Mar-2022) Resizing the filesystem on /dev/mapper/ost1_flakey to 1048576 (4k) blocks. The filesystem on /dev/mapper/ost1_flakey is now 1048576 (4k) blocks long. 
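The sequence above is the classic offline check-then-grow pattern: `e2fsck -f -y` to verify the filesystem is clean, then `resize2fs` to a larger block count, as done here for `mds1_flakey` (45000 → 640000 blocks) and `ost1_flakey` (45000 → 1048576 blocks). A sketch of the same sequence against a scratch loopback image rather than the real Lustre targets (the image path and block counts below are assumptions):

```shell
#!/bin/sh
# Hedged sketch of the check-then-grow sequence shown above, run against
# a throwaway file-backed image instead of the mds1/ost1 devices.
img=$(mktemp)
truncate -s 200M "$img"                  # sparse backing file, room to grow into
mkfs.ext4 -q -F -b 4096 "$img" 25000     # format only 25000 4k blocks of it
e2fsck -f -y "$img" >/dev/null 2>&1      # filesystem must be clean before resizing
resize2fs "$img" 45000 2>/dev/null       # grow to 45000 4k blocks
e2fsck -f -y "$img" >/dev/null 2>&1 && echo "clean after resize"
rm -f "$img"
```

Note the suite re-runs `e2fsck -d -v -t -t -f -y ... -m8` after the resize (the next block of output), confirming the grown filesystems are still consistent; the inode and block totals in the later summaries (e.g. 640000 and 1048576 blocks) reflect the new sizes.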
e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8 oleg228-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg228-server: Use max possible thread num: 2 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 16) [Thread 1] Scan group range [16, 33) [Thread 0] jumping to group 0 [Thread 1] jumping to group 16 [Thread 1] group 17 finished [Thread 1] group 18 finished [Thread 1] group 19 finished [Thread 1] group 20 finished [Thread 1] group 21 finished [Thread 1] group 22 finished [Thread 1] group 23 finished [Thread 1] group 24 finished [Thread 1] group 25 finished [Thread 1] group 26 finished [Thread 1] group 27 finished [Thread 1] group 28 finished [Thread 1] group 29 finished [Thread 1] group 30 finished [Thread 1] group 31 finished [Thread 1] group 32 finished [Thread 1] group 33 finished [Thread 1] Pass 1: Memory used: 632k/0k (380k/253k), time: 0.00/ 0.00/ 0.00 [Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 1267.43MB/s [Thread 1] Scanned group range [16, 33), inodes 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 
0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: 
increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 
[Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 24033 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 48044 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48045 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48046 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48047 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48048 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48050 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48051 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48052 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48053 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 
48054 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] Pass 1: Memory used: 688k/0k (355k/334k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 262.05MB/s [Thread 0] Scanned group range [0, 16), inodes 373 Pass 2: Checking directory structure Pass 2: Memory used: 632k/0k (200k/433k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 334.34MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 632k/0k (200k/433k), time: 0.03/ 0.03/ 0.00 Pass 3A: Memory used: 632k/0k (200k/433k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 632k/0k (198k/435k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 5988.02MB/s Pass 4: Checking reference counts Pass 4: Memory used: 632k/0k (72k/561k), time: 0.02/ 0.02/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 632k/0k (70k/563k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 287.77MB/s 372 inodes used (0.05%, out of 792000) 4 non-contiguous files (1.1%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 202726 blocks used (31.68%, out of 640000) 0 bad blocks 1 large file 244 regular files 118 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 362 files Memory used: 632k/0k (69k/564k), time: 0.06/ 0.05/ 0.00 I/O read: 1MB, write: 1MB, rate: 16.73MB/s e2fsck -d -v -t -t -f -y /dev/mapper/ost1_flakey -m8 oleg228-server: e2fsck 
1.46.2.wc5 (26-Mar-2022) oleg228-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 32) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] group 1 finished 
[Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] group 17 finished [Thread 0] group 18 finished [Thread 0] group 19 finished [Thread 0] group 20 finished [Thread 0] group 21 finished [Thread 0] group 22 finished [Thread 0] group 23 finished [Thread 0] group 24 finished [Thread 0] group 25 finished [Thread 0] group 26 finished [Thread 0] group 27 finished [Thread 0] group 28 finished [Thread 0] group 29 finished [Thread 0] group 30 finished [Thread 0] group 31 finished [Thread 0] group 32 finished [Thread 0] Pass 1: Memory used: 468k/0k (344k/125k), time: 0.01/ 0.00/ 0.01 [Thread 0] Pass 1: I/O read: 5MB, write: 0MB, rate: 677.97MB/s [Thread 0] Scanned group range [0, 32), inodes 398 Pass 2: Checking directory structure Pass 2: Memory used: 680k/0k (298k/383k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 468.60MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 680k/0k (299k/382k), time: 0.02/ 0.01/ 0.01 Pass 3A: Memory used: 680k/0k (299k/382k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 680k/0k (296k/385k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 10989.01MB/s Pass 4: Checking reference counts Pass 4: Memory used: 564k/0k (66k/499k), time: 0.02/ 0.02/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 564k/0k (65k/500k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 240.15MB/s 398 inodes used (0.06%, out of 720128) 2 non-contiguous files (0.5%) 0 non-contiguous directories (0.0%) # of 
inodes with ind/dind/tind blocks: 0/0/0 Extent depth histogram: 392 128315 blocks used (12.24%, out of 1048576) 0 bad blocks 1 large file 216 regular files 172 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 388 files Memory used: 564k/0k (64k/501k), time: 0.05/ 0.03/ 0.01 I/O read: 2MB, write: 1MB, rate: 42.15MB/s start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=131072 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
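The `wait_import_state_mount FULL ...` lines above poll an import's state until it reaches the expected value or a timeout expires (the log shows a 50-second budget and `in FULL state after 0 sec` on success). A generic sketch of that polling pattern, with a stand-in probe command (on a real client the probe would be an `lctl get_param` query against the `mdc`/`osc` import, which is not runnable here):

```shell
#!/bin/sh
# Hedged sketch of the wait_import_state polling pattern used by the
# test framework. The helper name and the probe command are assumptions;
# only the poll-until-match-or-timeout structure mirrors the log.
wait_state() {
    want=$1; probe=$2; max=${3:-50}; i=0
    while [ "$i" -lt "$max" ]; do
        # compare the probe's current answer against the wanted state
        [ "$($probe)" = "$want" ] && return 0
        sleep 1; i=$((i + 1))
    done
    return 1                             # timed out without reaching $want
}
# stand-in probe that is already in the desired state, like the
# "in FULL state after 0 sec" cases above
wait_state FULL "echo FULL" 5 && echo "import reached FULL"
```

The `(FULL|IDLE)` pattern in the OST wait lines allows either state, since an idle client connection may legitimately report `IDLE` instead of `FULL`.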
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre check files after expanding the MDT and OST filesystems /mnt/lustre/d78.conf-sanity/f78.conf-sanity-1 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-1 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-2 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-2 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-3 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-3 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-4 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-4 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-5 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-5 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-6 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-6 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-7 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-7 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-8 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-8 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-9 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-9 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-10 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-10 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-11 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-11 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-12 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-12 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-13 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-13 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-14 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-14 has size 1048576 OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-15 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-15 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-16 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-16 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-17 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-17 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-18 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-18 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-19 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-19 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-20 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-20 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-21 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-21 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-22 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-22 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-23 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-23 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-24 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-24 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-25 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-25 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-26 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-26 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-27 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-27 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-28 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-28 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-29 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-29 has size 1048576 OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-30 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-30 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-31 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-31 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-32 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-32 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-33 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-33 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-34 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-34 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-35 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-35 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-36 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-36 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-37 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-37 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-38 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-38 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-39 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-39 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-40 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-40 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-41 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-41 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-42 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-42 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-43 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-43 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-44 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-44 has size 1048576 OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-45 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-45 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-46 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-46 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-47 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-47 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-48 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-48 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-49 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-49 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-50 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-50 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-51 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-51 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-52 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-52 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-53 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-53 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-54 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-54 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-55 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-55 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-56 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-56 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-57 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-57 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-58 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-58 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-59 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-59 has size 1048576 OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-60 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-60 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-61 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-61 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-62 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-62 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-63 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-63 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-64 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-64 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-65 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-65 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-66 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-66 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-67 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-67 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-68 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-68 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-69 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-69 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-70 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-70 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-71 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-71 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-72 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-72 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-73 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-73 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-74 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-74 has size 1048576 OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-75 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-75 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-76 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-76 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-77 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-77 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-78 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-78 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-79 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-79 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-80 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-80 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-81 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-81 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-82 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-82 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-83 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-83 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-84 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-84 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-85 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-85 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-86 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-86 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-87 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-87 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-88 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-88 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-89 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-89 has size 1048576 OK 
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-90 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-90 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-91 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-91 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-92 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-92 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-93 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-93 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-94 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-94 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-95 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-95 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-96 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-96 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-97 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-97 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-98 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-98 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-99 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-99 has size 1048576 OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-100 has type file OK /mnt/lustre/d78.conf-sanity/f78.conf-sanity-100 has size 1048576 OK create more files after expanding the MDT and OST filesystems 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0331557 s, 31.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0438934 s, 23.9 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0354399 s, 29.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.044039 s, 23.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0363536 s, 28.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 
0.0366203 s, 28.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0330345 s, 31.7 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0581309 s, 18.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0329893 s, 31.8 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0397475 s, 26.4 MB/s umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8 oleg228-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg228-server: Use max possible thread num: 2 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 16) [Thread 1] Scan group range [16, 33) [Thread 0] jumping to group 0 [Thread 1] jumping to group 16 [Thread 1] group 17 finished [Thread 1] group 18 finished [Thread 1] group 19 finished [Thread 1] group 20 finished [Thread 1] group 21 finished [Thread 1] group 22 finished [Thread 1] group 23 finished [Thread 1] group 24 finished [Thread 1] group 25 finished [Thread 1] group 26 finished [Thread 1] group 27 finished [Thread 1] group 28 finished [Thread 1] group 29 finished [Thread 1] group 30 finished [Thread 1] group 31 finished [Thread 1] group 32 finished [Thread 1] group 33 finished [Thread 1] Pass 1: Memory used: 632k/0k (379k/254k), time: 0.00/ 0.00/ 0.00 [Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 1466.28MB/s [Thread 1] Scanned group range [16, 33), inodes 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 
for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 
badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 24033 badness 0 to 2 for 10084 [Thread 0] group 2 finished 
[Thread 0] e2fsck_pass1_run:2564: increase inode 48044 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48045 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48046 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48047 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48048 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48050 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48051 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48052 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48053 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 48054 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] Pass 1: Memory used: 688k/0k (355k/334k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 291.46MB/s [Thread 0] Scanned group range [0, 16), inodes 383 Pass 2: Checking directory structure Pass 2: Memory used: 632k/0k (200k/433k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 368.46MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 632k/0k (200k/433k), time: 0.03/ 0.03/ 0.00 Pass 3A: Memory used: 632k/0k (200k/433k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 632k/0k (198k/435k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 11494.25MB/s Pass 4: Checking reference counts Pass 4: Memory used: 632k/0k (72k/561k), time: 0.02/ 0.02/ 0.00 Pass 4: I/O 
read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 632k/0k (70k/563k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 274.42MB/s 382 inodes used (0.05%, out of 792000) 4 non-contiguous files (1.0%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 202726 blocks used (31.68%, out of 640000) 0 bad blocks 1 large file 254 regular files 118 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 372 files Memory used: 632k/0k (69k/564k), time: 0.06/ 0.05/ 0.00 I/O read: 1MB, write: 1MB, rate: 16.78MB/s e2fsck -d -v -t -t -f -y /dev/mapper/ost1_flakey -m8 oleg228-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg228-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 32) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 
100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] group 17 finished [Thread 0] group 18 finished [Thread 0] group 19 finished [Thread 0] group 20 finished [Thread 0] group 21 finished [Thread 0] group 22 finished [Thread 0] group 23 finished [Thread 0] group 24 finished [Thread 0] group 25 finished [Thread 0] group 26 finished [Thread 0] group 27 finished [Thread 0] group 28 finished [Thread 0] group 29 finished [Thread 0] group 30 finished [Thread 0] group 31 finished [Thread 0] group 32 finished [Thread 0] Pass 1: Memory used: 472k/0k (345k/128k), time: 0.01/ 0.00/ 0.01 [Thread 0] Pass 1: I/O read: 5MB, write: 0MB, rate: 679.07MB/s [Thread 0] Scanned group range [0, 32), inodes 402 Pass 2: Checking directory structure Pass 2: Memory used: 684k/0k (299k/386k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 325.84MB/s Pass 
3: Checking directory connectivity Peak memory: Memory used: 684k/0k (299k/386k), time: 0.03/ 0.01/ 0.01 Pass 3A: Memory used: 684k/0k (299k/386k), time: 0.00/ 0.00/ 0.00 Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 3: Memory used: 684k/0k (296k/389k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 1MB, write: 0MB, rate: 9900.99MB/s Pass 4: Checking reference counts Pass 4: Memory used: 568k/0k (67k/502k), time: 0.02/ 0.02/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 568k/0k (65k/504k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 239.06MB/s 402 inodes used (0.06%, out of 720128) 3 non-contiguous files (0.7%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 Extent depth histogram: 396 130875 blocks used (12.48%, out of 1048576) 0 bad blocks 1 large file 220 regular files 172 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 392 files Memory used: 568k/0k (64k/505k), time: 0.05/ 0.04/ 0.01 I/O read: 2MB, write: 1MB, rate: 40.34MB/s oleg228-server: resize2fs 1.46.2.wc5 (26-Mar-2022) Resizing the filesystem on /dev/mapper/mds1_flakey to 377837 (4k) blocks. The filesystem on /dev/mapper/mds1_flakey is now 377837 (4k) blocks long. oleg228-server: resize2fs 1.46.2.wc5 (26-Mar-2022) Resizing the filesystem on /dev/mapper/ost1_flakey to 591846 (4k) blocks. The filesystem on /dev/mapper/ost1_flakey is now 589824 (4k) blocks long. 
e2fsck -d -v -t -t -f -y /dev/mapper/mds1_flakey -m8 oleg228-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg228-server: Use max possible thread num: 1 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 20) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: 
increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 
[Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 
154 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084
[Thread 0] group 1 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 24033 badness 0 to 2 for 10084
[Thread 0] group 2 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 48044 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 48045 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 48046 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 48047 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 48048 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 48050 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 48051 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 48052 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 48053 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 48054 badness 0 to 2 for 10084
[Thread 0] group 3 finished
[Thread 0] group 4 finished
[Thread 0] group 5 finished
[Thread 0] group 6 finished
[Thread 0] group 7 finished
[Thread 0] group 8 finished
[Thread 0] group 9 finished
[Thread 0] group 10 finished
[Thread 0] group 11 finished
[Thread 0] group 12 finished
[Thread 0] group 13 finished
[Thread 0] group 14 finished
[Thread 0] group 15 finished
[Thread 0] group 16 finished
[Thread 0] group 17 finished
[Thread 0] group 18 finished
[Thread 0] group 19 finished
[Thread 0] group 20 finished
[Thread 0] Pass 1: Memory used: 400k/0k (270k/131k), time: 0.00/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 248.02MB/s
[Thread 0] Scanned group range [0, 20), inodes 383
Pass 2: Checking directory structure
Pass 2: Memory used: 576k/0k (224k/353k), time: 0.00/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 281.14MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 576k/0k (224k/353k), time: 0.03/ 0.02/ 0.01
Pass 3A: Memory used: 576k/0k (224k/353k), time: 0.00/ 0.00/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 576k/0k (222k/355k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 5714.29MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 500k/0k (69k/432k), time: 0.01/ 0.01/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 500k/0k (67k/434k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 351.25MB/s

382 inodes used (0.08%, out of 480000)
4 non-contiguous files (1.0%)
0 non-contiguous directories (0.0%)
# of inodes with ind/dind/tind blocks: 0/0/0
124660 blocks used (32.99%, out of 377837)
0 bad blocks
1 large file
254 regular files
118 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
------------
372 files
Memory used: 500k/0k (66k/435k), time: 0.05/ 0.03/ 0.01
I/O read: 1MB, write: 1MB, rate: 22.12MB/s
e2fsck -d -v -t -t -f -y /dev/mapper/ost1_flakey -m8
oleg228-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg228-server: Use max possible thread num: 1 instead
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 18)
[Thread 0] jumping to group 0
[Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084
[Thread 0] group 1 finished
[Thread 0] group 2 finished
[Thread 0] group 3 finished
[Thread 0] group 4 finished
[Thread 0] group 5 finished
[Thread 0] group 6 finished
[Thread 0] group 7 finished
[Thread 0] group 8 finished
[Thread 0] group 9 finished
[Thread 0] group 10 finished
[Thread 0] group 11 finished
[Thread 0] group 12 finished
[Thread 0] group 13 finished
[Thread 0] group 14 finished
[Thread 0] group 15 finished
[Thread 0] group 16 finished
[Thread 0] group 17 finished
[Thread 0] group 18 finished
[Thread 0] Pass 1: Memory used: 372k/0k (246k/127k), time: 0.01/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 5MB, write: 0MB, rate: 641.52MB/s
[Thread 0] Scanned group range [0, 18), inodes 402
Pass 2: Checking directory structure
Pass 2: Memory used: 532k/0k (200k/333k), time: 0.00/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 352.61MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 532k/0k (200k/333k), time: 0.02/ 0.01/ 0.01
Pass 3A: Memory used: 532k/0k (200k/333k), time: 0.00/ 0.00/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 532k/0k (197k/336k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 6289.31MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 468k/0k (66k/403k), time: 0.01/ 0.01/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 468k/0k (65k/404k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 281.61MB/s

402 inodes used (0.10%, out of 405072)
3 non-contiguous files (0.7%)
0 non-contiguous directories (0.0%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 396
89417 blocks used (15.16%, out of 589824)
0 bad blocks
1 large file
220 regular files
172 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
------------
392 files
Memory used: 468k/0k (64k/405k), time: 0.03/ 0.02/ 0.01
I/O read: 2MB, write: 1MB, rate: 57.75MB/s
start mds service on oleg228-server
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=131072
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
check files after shrinking the MDT and OST filesystems
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-1 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-1 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-2 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-2 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-3 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-3 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-4 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-4 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-5 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-5 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-6 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-6 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-7 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-7 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-8 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-8 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-9 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-9 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-10 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-10 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-11 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-11 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-12 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-12 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-13 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-13 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-14 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-14 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-15 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-15 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-16 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-16 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-17 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-17 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-18 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-18 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-19 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-19 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-20 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-20 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-21 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-21 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-22 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-22 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-23 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-23 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-24 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-24 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-25 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-25 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-26 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-26 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-27 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-27 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-28 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-28 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-29 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-29 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-30 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-30 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-31 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-31 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-32 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-32 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-33 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-33 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-34 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-34 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-35 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-35 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-36 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-36 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-37 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-37 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-38 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-38 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-39 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-39 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-40 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-40 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-41 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-41 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-42 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-42 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-43 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-43 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-44 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-44 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-45 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-45 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-46 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-46 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-47 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-47 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-48 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-48 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-49 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-49 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-50 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-50 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-51 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-51 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-52 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-52 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-53 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-53 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-54 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-54 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-55 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-55 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-56 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-56 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-57 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-57 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-58 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-58 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-59 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-59 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-60 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-60 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-61 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-61 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-62 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-62 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-63 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-63 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-64 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-64 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-65 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-65 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-66 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-66 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-67 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-67 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-68 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-68 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-69 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-69 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-70 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-70 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-71 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-71 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-72 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-72 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-73 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-73 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-74 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-74 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-75 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-75 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-76 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-76 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-77 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-77 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-78 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-78 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-79 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-79 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-80 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-80 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-81 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-81 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-82 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-82 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-83 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-83 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-84 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-84 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-85 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-85 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-86 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-86 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-87 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-87 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-88 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-88 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-89 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-89 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-90 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-90 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-91 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-91 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-92 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-92 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-93 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-93 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-94 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-94 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-95 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-95 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-96 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-96 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-97 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-97 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-98 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-98 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-99 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-99 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-100 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-100 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-101 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-101 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-102 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-102 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-103 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-103 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-104 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-104 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-105 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-105 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-106 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-106 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-107 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-107 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-108 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-108 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-109 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-109 has size 1048576 OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-110 has type file OK
/mnt/lustre/d78.conf-sanity/f78.conf-sanity-110 has size 1048576 OK
umount lustre on /mnt/lustre.....
Stopping client oleg228-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
unloading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing unload_modules_local
modules unloaded.
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f)
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
oleg228-server: oleg228-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg228-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50
oleg228-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
stop ost1 service on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
PASS 78 (170s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 79: format MDT/OST without mgs option (should return errors) ========================================================== 03:34:37 (1713425677)
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: oleg228-server: mkfs.lustre FATAL: Must specify --mgs or --mgsnode
oleg228-server: mkfs.lustre: exiting with 22 (Invalid argument)
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 22
oleg228-server: oleg228-server: mkfs.lustre FATAL: Must specify --mgs or --mgsnode
oleg228-server: mkfs.lustre: exiting with 22 (Invalid argument)
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 22
oleg228-server: oleg228-server: mkfs.lustre FATAL: Must specify --mgsnode
oleg228-server: mkfs.lustre: exiting with 22 (Invalid argument)
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 22
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f)
oleg228-server: oleg228-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg228-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 3 sec
oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50
oleg228-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
stop ost1 service on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
PASS 79 (60s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 80: mgc import reconnect race ======== 03:35:39 (1713425739)
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
fail_val=10
fail_loc=0x906
fail_val=10
fail_loc=0x906
start ost2 service on oleg228-server
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost2_flakey
Started lustre-OST0001
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
fail_loc=0
stop ost2 service on oleg228-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server
stop ost1 service on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
PASS 80 (73s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 81: sparse OST indexing ============== 03:36:53 (1713425813)
SKIP: conf-sanity test_81 needs >= 3 OSTs
SKIP 81 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 82a: specify OSTs for file (succeed) or directory (succeed) ========================================================== 03:36:56 (1713425816)
SKIP: conf-sanity test_82a needs >= 3 OSTs
SKIP 82a (0s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 82b: specify OSTs for file with --pool and --ost-list options ========================================================== 03:36:58 (1713425818)
SKIP: conf-sanity test_82b needs >= 4 OSTs
SKIP 82b (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 83: ENOSPACE on OST doesn't cause message VFS: Busy inodes after unmount ... ========================================================== 03:37:00 (1713425820)
mount the OST /dev/mapper/ost1_flakey as a ldiskfs filesystem
mnt_opts -o loop
run llverfs in partial mode on the OST ldiskfs /mnt/lustre-ost1
oleg228-server: oleg228-server.virtnet: executing run_llverfs /mnt/lustre-ost1 -vpl no
oleg228-server: oleg228-server: llverfs: write /mnt/lustre-ost1/llverfs_dir00142/file000@0+1048576 short: 368640 written
oleg228-server: Timestamp: 1713425824
oleg228-server: dirs: 147, fs blocks: 37602
oleg228-server: write_done: /mnt/lustre-ost1/llverfs_dir00142/file000, current: 260.306 MB/s, overall: 260.306 MB/s, ETA: 0:00:00
oleg228-server: oleg228-server: read_done: /mnt/lustre-ost1/llverfs_dir00141/file000, current: 2820.71 MB/s, overall: 2820.71 MB/s, ETA: 0:00:00
oleg228-server: unmount the OST /dev/mapper/ost1_flakey
Stopping /mnt/lustre-ost1 (opts:) on oleg228-server
checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

Permanent disk data:
Target:     lustre=MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target:     lustre-MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

Permanent disk data:
Target:     lustre=MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity checking for existing Lustre data: found Read previous values: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x62 (OST first_time update ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x162 (OST first_time update writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 checking for existing Lustre data: found Read previous values: Target: lustre-OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x102 (OST writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 oleg228-server: mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: No space left on device pdsh@oleg228-client: oleg228-server: ssh exited with exit code 28 oleg228-server: error: set_param: param_path 'seq/cli-lustre': No such file or directory oleg228-server: error: set_param: setting 'seq/cli-lustre'='OST0000-super.width=65536': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 Start of /dev/mapper/ost1_flakey on ost1 failed 28 string err Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs 
Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg228-server: 
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server PASS 83 (58s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 84: check recovery_hard_time ========= 03:37:59 (1713425879) start mds service on oleg228-server start mds service on oleg228-server Starting mds1: -o localrecov -o recovery_time_hard=60,recovery_time_soft=60 /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov -o recovery_time_hard=60,recovery_time_soft=60 /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) 
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 3 sec start ost2 service on oleg228-server Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec recovery_time=60, timeout=20, wrap_up=5 mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre mount lustre on /mnt/lustre2..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre2 UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 95248 1668 84924 2% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 95248 1532 85060 2% /mnt/lustre[MDT:1] lustre-OST0000_UUID 142216 1524 126692 2% /mnt/lustre[OST:0] lustre-OST0001_UUID 142216 1524 126692 2% /mnt/lustre[OST:1] filesystem_summary: 284432 3048 253384 2% /mnt/lustre total: 1000 open/close in 2.68 seconds: 373.44 ops/second fail_loc=0x20000709 fail_val=5 Failing mds1 on oleg228-server Stopping /mnt/lustre-mds1 (opts:) on oleg228-server 03:38:36 (1713425916) shut down Failover mds1 to oleg228-server e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8 oleg228-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg228-server: Use max possible thread num: 1 instead Warning: skipping journal recovery because doing a read-only filesystem check. Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 3) [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 
badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 
0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 159 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 160 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 161 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 162 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 163 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 164 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 26697 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 53372 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53373 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53374 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase 
inode 53375 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53376 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53377 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53378 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53379 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53380 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53381 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 53382 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] Pass 1: Memory used: 264k/0k (140k/125k), time: 0.00/ 0.00/ 0.00 [Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 248.51MB/s [Thread 0] Scanned group range [0, 3), inodes 277 Pass 2: Checking directory structure Pass 2: Memory used: 264k/0k (97k/168k), time: 0.00/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 261.37MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 264k/0k (97k/168k), time: 0.02/ 0.01/ 0.00 Pass 3: Memory used: 264k/0k (96k/169k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 4: Checking reference counts Pass 4: Memory used: 264k/0k (67k/198k), time: 0.00/ 0.00/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Free blocks count wrong (25455, counted=25443). Fix? no Free inodes count wrong (79719, counted=79715). Fix? 
no Pass 5: Memory used: 264k/0k (67k/198k), time: 0.00/ 0.00/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 226.60MB/s 273 inodes used (0.34%, out of 79992) 5 non-contiguous files (1.8%) 0 non-contiguous directories (0.0%) # of inodes with ind/dind/tind blocks: 0/0/0 24545 blocks used (49.09%, out of 50000) 0 bad blocks 1 large file 150 regular files 117 directories 0 character device files 0 block device files 0 fifos 0 links 0 symbolic links (0 fast symbolic links) 0 sockets ------------ 267 files Memory used: 264k/0k (66k/199k), time: 0.02/ 0.01/ 0.01 I/O read: 1MB, write: 0MB, rate: 42.52MB/s mount facets: mds1 Starting mds1: -o localrecov -o recovery_time_hard=60,recovery_time_soft=60 /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 03:38:51 (1713425931) targets are mounted 03:38:51 (1713425931) facet_failover done oleg228-client: error: invalid path '/mnt/lustre': Input/output error pdsh@oleg228-client: oleg228-client: ssh exited with exit code 5 recovery status status: COMPLETE recovery_start: 1713425934 recovery_duration: 60 completed_clients: 2/3 replayed_requests: 157 last_transno: 8589934749 VBR: DISABLED IR: DISABLED fail_loc=0 umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) umount lustre on /mnt/lustre2..... 
Stopping client oleg228-client.virtnet /mnt/lustre2 (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop ost2 service on oleg228-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server PASS 84 (142s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 85: osd_ost init: fail ea_fid_set ==== 03:40:23 (1713426023) fail_loc=0x197 start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server PASS 85 (71s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 86: Replacing mkfs.lustre -G option == 03:41:36 (1713426096) oleg228-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg228-server: /dev/mapper/ost1_flakey: catastrophic mode - not reading inode or group bitmaps params: --mgsnode=oleg228-server@tcp --fsname=lustre --ost --index=0 --param=sys.timeout=20 --backfstype=ldiskfs --device-size=200000 --mkfsoptions=\"-G 1024 -b 4096 -O flex_bg -E lazy_itable_init\" --reformat /dev/mapper/ost1_flakey Failing mds1 on oleg228-server 03:41:38 (1713426098) shut down Failover mds1 to oleg228-server mount facets: mds1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited 
with exit code 1 Started lustre-MDT0000 03:41:52 (1713426112) targets are mounted 03:41:52 (1713426112) facet_failover done pdsh@oleg228-client: oleg228-client: ssh exited with exit code 95 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid pdsh@oleg228-client: oleg228-client: ssh exited with exit code 95 Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server Permanent disk data: Target: lustre:OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x62 (OST first_time update ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 device size = 4096MB formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey target name lustre:OST0000 kilobytes 200000 options -G 1024 -b 4096 -I 512 -q -O flex_bg,uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0000 -G 1024 -b 4096 -I 512 -q -O flex_bg,uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F /dev/mapper/ost1_flakey 200000k Writing CONFIGS/mountdata oleg228-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg228-server: /dev/mapper/ost1_flakey: catastrophic mode - not reading inode or group bitmaps Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules 
from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 
oleg228-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server PASS 86 (80s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 87: check if MDT inode can hold EAs with N stripes properly ========================================================== 03:42:58 (1713426178) Estimate: at most 353-byte space left in inode. unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey Permanent disk data: Target: lustre:MDT0000 Index: 0 Lustre FS: lustre Mount type: 
ldiskfs Flags: 0x65 (MDT MGS first_time update ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity device size = 2500MB formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey target name lustre:MDT0000 kilobytes 200000 options -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F /dev/mapper/mds1_flakey 200000k Writing CONFIGS/mountdata oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 setup single mount lustre success Since only 1 out 2 OSTs are used, the expected left space is changed to 377 bytes at most. 4 -rw-r--r-- 1 root root 67108865 Apr 18 03:43 /mnt/lustre-mds1/ROOT/f87.conf-sanity Verified: at most 377-byte space left in inode. 
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server PASS 87 (62s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 88: check the default mount options can be overridden ========================================================== 03:44:02 (1713426242) Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Permanent disk data: Target: lustre:MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x65 (MDT MGS first_time update ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity device size = 2500MB formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey target name lustre:MDT0000 kilobytes 200000 options -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F /dev/mapper/mds1_flakey 200000k Writing CONFIGS/mountdata Persistent mount opts: user_xattr,errors=remount-ro Persistent mount opts: user_xattr,errors=remount-ro Permanent disk data: Target: lustre:MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x65 (MDT MGS first_time update ) Persistent mount opts: user_xattr,errors=panic Parameters: 
sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity device size = 2500MB formatting backing filesystem ldiskfs on /dev/mapper/mds1_flakey target name lustre:MDT0000 kilobytes 200000 options -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -b 4096 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F /dev/mapper/mds1_flakey 200000k Writing CONFIGS/mountdata Persistent mount opts: user_xattr,errors=panic Persistent mount opts: user_xattr,errors=panic PASS 88 (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 89: check tunefs --param and --erase-param{s} options ========================================================== 03:44:15 (1713426255) Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) tunefs --param failover.node=192.0.2.254@tcp0 tunefs --param failover.node=192.0.2.255@tcp0 tunefs --erase-param failover.node tunefs --erase-params tunefs --param failover.node=192.0.2.254@tcp0 --erase-params Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU 
partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL failover.node=192.0.2.254@tcp0,mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL failover.node=192.0.2.254@tcp0,mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) failover.node=192.0.2.254@tcp0,osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg228-server: 
os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server PASS 89 (65s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 90a: check max_mod_rpcs_in_flight is enforced ========================================================== 03:45:22 (1713426322) start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre max_mod_rpcs_in_flight is 7 creating 8 files ... fail_loc=0x159 launch 6 chmod in parallel ...
fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90a.conf-sanity/file-7 has perms 0600 OK fail_loc=0x159 launch 7 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90a.conf-sanity/file-8 has perms 0644 OK umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 90a (74s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 90b: check max_mod_rpcs_in_flight is 
enforced after update ========================================================== 03:46:37 (1713426397) start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) 
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre mdc.lustre-MDT0000-mdc-ffff8800ab3c3000.max_mod_rpcs_in_flight=1 max_mod_rpcs_in_flight set to 1 creating 2 files ... fail_loc=0x159 launch 0 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90b.conf-sanity1/file-1 has perms 0600 OK fail_loc=0x159 launch 1 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90b.conf-sanity1/file-2 has perms 0644 OK mdc.lustre-MDT0001-mdc-ffff8800ab3c3000.max_mod_rpcs_in_flight=5 max_mod_rpcs_in_flight set to 5 creating 6 files ... fail_loc=0x159 launch 4 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90b.conf-sanity2/file-5 has perms 0600 OK fail_loc=0x159 launch 5 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90b.conf-sanity2/file-6 has perms 0644 OK mdt_max_mod_rpcs_in_flight is 8 umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) error: get_param: param_path 'mdt/*/max_mod_rpcs_in_flight': No such file or directory the deprecated max_mod_rpcs_per_client parameter was involved mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre mdc.lustre-MDT0000-mdc-ffff8800b6d09800.max_rpcs_in_flight=17 mdc.lustre-MDT0000-mdc-ffff8800b6d09800.max_mod_rpcs_in_flight=16 max_mod_rpcs_in_flight set to 16 creating 17 files ... fail_loc=0x159 launch 15 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... /mnt/lustre/d90b.conf-sanity3/file-16 has perms 0600 OK fail_loc=0x159 launch 16 chmod in parallel ... fail_loc=0 launch 1 additional chmod in parallel ... 
/mnt/lustre/d90b.conf-sanity3/file-17 has perms 0644 OK error: get_param: param_path 'mdt/*/max_mod_rpcs_in_flight': No such file or directory the deprecated max_mod_rpcs_per_client parameter was involved umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 90b (134s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 90c: check max_mod_rpcs_in_flight update limits 
========================================================== 03:48:52 (1713426532) start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre max_rpcs_in_flight is 8 MDC max_mod_rpcs_in_flight is 7 mdt_max_mod_rpcs_in_flight is 8 error: get_param: param_path 'mdt/*/max_mod_rpcs_in_flight': No such file or directory the deprecated max_mod_rpcs_per_client parameter was involved mdc.lustre-MDT0000-mdc-ffff8800b6d0c800.max_mod_rpcs_in_flight=8 umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) error: get_param: param_path 'mdt/*/max_mod_rpcs_in_flight': No such file or directory the deprecated max_mod_rpcs_per_client parameter was involved mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre mdc.lustre-MDT0000-mdc-ffff8800a941a800.max_rpcs_in_flight=10 error: set_param: setting /sys/fs/lustre/mdc/lustre-MDT0000-mdc-ffff8800a941a800/max_mod_rpcs_in_flight=9: Numerical result out of range error: set_param: setting 'mdc/lustre-MDT0000-mdc-*/max_mod_rpcs_in_flight'='9': Numerical result out of range Stopping client oleg228-client.virtnet /mnt/lustre (opts:) Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre error: get_param: param_path 'mdt/*/max_mod_rpcs_in_flight': No such file or directory the deprecated max_mod_rpcs_per_client parameter was involved umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. 
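The "Numerical result out of range" failure above is consistent with two client-side bounds on max_mod_rpcs_in_flight visible in the log: the value must stay strictly below the MDC's max_rpcs_in_flight (10 after the update) and must not exceed the server-side maximum (8 here), so 9 is rejected while 8 is accepted. A minimal sketch of that check (check_mod_rpcs is a hypothetical helper for illustration, not part of the Lustre utilities):

```shell
# Hypothetical helper mirroring the bounds suggested by the log above:
# a new max_mod_rpcs_in_flight value must be strictly less than the
# client's max_rpcs_in_flight and no greater than the server-side cap.
check_mod_rpcs() {
    local want=$1 max_rpcs=$2 mdt_max=$3
    if [ "$want" -ge "$max_rpcs" ] || [ "$want" -gt "$mdt_max" ]; then
        echo "out of range"    # corresponds to the ERANGE from set_param
    else
        echo "ok"
    fi
}

check_mod_rpcs 9 10 8   # prints "out of range", like the failing set_param=9
check_mod_rpcs 8 10 8   # prints "ok", like the accepted value of 8
```

With max_rpcs_in_flight=10 and a server limit of 8, a request for 9 trips only the second bound, matching the set_param error recorded in the log.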
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 90c (49s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 90d: check one close RPC is allowed above max_mod_rpcs_in_flight ========================================================== 03:49:43 (1713426583) start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: 
quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre max_mod_rpcs_in_flight is 7 creating 7 files ... multiop /mnt/lustre/d90d.conf-sanity/file-close vO_c TMPPIPE=/tmp/multiop_open_wait_pipe.7519 fail_loc=0x159 launch 7 chmod in parallel ... fail_loc=0 launch 1 additional close in parallel ... umount lustre on /mnt/lustre.....
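Test 90d exercises the rule its name states: with all max_mod_rpcs_in_flight slots (7 here) occupied by chmods blocked via fail_loc=0x159, one additional close RPC is still allowed through. A rough sketch of that admission rule (can_send is an invented name; the real accounting lives in the client MDC code):

```shell
# Invented sketch, not Lustre source: modifying RPCs may occupy up to
# max_mod_rpcs_in_flight slots, and one extra slot is held in reserve so
# a close RPC can still be sent when all normal slots are busy.
can_send() {
    local in_flight=$1 limit=$2 opc=$3
    if [ "$in_flight" -lt "$limit" ]; then
        echo yes    # a normal slot is free
    elif [ "$opc" = close ] && [ "$in_flight" -eq "$limit" ]; then
        echo yes    # the reserved close slot
    else
        echo no     # over the limit; the request must wait
    fi
}

can_send 7 7 chmod   # prints "no": all 7 slots hold blocked chmods
can_send 7 7 close   # prints "yes": the one extra close goes through
```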
Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 90d (62s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 91: evict-by-nid support ============= 03:50:47 (1713426647) start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or 
directory ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre setup single mount lustre success list nids on mdt: mdt.lustre-MDT0000.exports.0@lo mdt.lustre-MDT0000.exports.192.168.202.28@tcp mdt.lustre-MDT0000.exports.clear mdt.lustre-MDT0001.exports.0@lo mdt.lustre-MDT0001.exports.192.168.202.28@tcp mdt.lustre-MDT0001.exports.clear uuid from 192\.168\.202\.28@tcp: mdt.lustre-MDT0000.exports.192.168.202.28@tcp.uuid=5de18b85-355b-4115-8f63-090b05fdf12d mdt.lustre-MDT0001.exports.192.168.202.28@tcp.uuid=5de18b85-355b-4115-8f63-090b05fdf12d manual umount lustre on /mnt/lustre.... evict 192\.168\.202\.28@tcp oleg228-server: error: read_param: '/proc/fs/lustre/mdt/lustre-MDT0000/exports/192.168.202.28@tcp/uuid': No such device pdsh@oleg228-client: oleg228-server: ssh exited with exit code 19 oleg228-server: error: read_param: '/proc/fs/lustre/obdfilter/lustre-OST0000/exports/192.168.202.28@tcp/uuid': No such device pdsh@oleg228-client: oleg228-server: ssh exited with exit code 19 oleg228-server: error: read_param: '/proc/fs/lustre/mdt/lustre-MDT0000/exports/192.168.202.28@tcp/uuid': No such device pdsh@oleg228-client: oleg228-server: ssh exited with exit code 19 oleg228-server: error: read_param: '/proc/fs/lustre/obdfilter/lustre-OST0000/exports/192.168.202.28@tcp/uuid': No such device pdsh@oleg228-client: oleg228-server: ssh exited with exit code 19 umount lustre on /mnt/lustre..... stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. 
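Test 91 resolves a client's export by NID before evicting it: exports show up as mdt.&lt;target&gt;.exports.&lt;nid&gt;.* parameters, and the NID's dots are escaped when it is used as a pattern (hence "192\.168\.202\.28@tcp" above). A small sketch of both steps, using only string handling (helper names are invented for illustration):

```shell
# Build the lctl parameter path for a client export's uuid, as listed in
# the log (mdt.lustre-MDT0000.exports.192.168.202.28@tcp.uuid).
nid_uuid_param() {
    echo "mdt.$1.exports.$2.uuid"
}

# Escape the dots in a NID so it can be used as a regex, matching the
# "uuid from 192\.168\.202\.28@tcp" form printed by the test.
escape_nid() {
    echo "$1" | sed 's/\./\\./g'
}

nid_uuid_param lustre-MDT0000 192.168.202.28@tcp
# → mdt.lustre-MDT0000.exports.192.168.202.28@tcp.uuid
escape_nid 192.168.202.28@tcp
# → 192\.168\.202\.28@tcp
```

After the eviction is triggered server-side, reads of the export's uuid parameter fail with "No such device", as the log records once the export is gone.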
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 91 (79s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 92: ldev returns MGS NID correctly in command substitution ========================================================== 03:52:08 (1713426728) Host is oleg228-client.virtnet ----- /tmp/ldev.conf ----- oleg228-server - lustre-MGS0000 /dev/mapper/mds1_flakey oleg228-server - lustre-OST0000 /dev/mapper/ost1_flakey oleg228-server - lustre-OST0001 /dev/mapper/ost2_flakey oleg228-server - lustre-MDT0000 /dev/mapper/mds1_flakey oleg228-server - lustre-MDT0001 /dev/mapper/mds2_flakey --- END /tmp/ldev.conf --- ----- /tmp/nids ----- oleg228-server oleg228-server@tcp --- END /tmp/nids --- -- START OF LDEV OUTPUT -- lustre-OST0001: oleg228-server@tcp lustre-MDT0000: oleg228-server@tcp lustre-MGS0000: oleg228-server@tcp lustre-OST0000: oleg228-server@tcp lustre-MDT0001: oleg228-server@tcp 
--- END OF LDEV OUTPUT --- -- START OF LDEV OUTPUT -- lustre-MGS0000: oleg228-server@tcp lustre-OST0000: oleg228-server@tcp lustre-OST0001: oleg228-server@tcp lustre-MDT0000: oleg228-server@tcp lustre-MDT0001: oleg228-server@tcp --- END OF LDEV OUTPUT --- -- START OF LDEV OUTPUT -- lustre-MGS0000: oleg228-server@tcp lustre-OST0001: oleg228-server@tcp lustre-MDT0000: oleg228-server@tcp lustre-OST0000: oleg228-server@tcp lustre-MDT0001: oleg228-server@tcp --- END OF LDEV OUTPUT --- -- START OF LDEV OUTPUT -- lustre-MGS0000: oleg228-server@tcp lustre-OST0001: oleg228-server@tcp lustre-MDT0000: oleg228-server@tcp lustre-OST0000: oleg228-server@tcp lustre-MDT0001: oleg228-server@tcp --- END OF LDEV OUTPUT --- -- START OF LDEV OUTPUT -- lustre-MGS0000: oleg228-server@tcp lustre-OST0000: oleg228-server@tcp lustre-MDT0001: oleg228-server@tcp lustre-OST0001: oleg228-server@tcp lustre-MDT0000: oleg228-server@tcp --- END OF LDEV OUTPUT --- pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 92 (2s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or 
directory == conf-sanity test 93: register multiple MDT at the same time ========================================================== 03:52:11 (1713426731) SKIP: conf-sanity test_93 needs >= 3 MDTs SKIP 93 (1s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 94: ldev outputs correct labels for file system name query ========================================================== 03:52:14 (1713426734) ----- /tmp/ldev.conf ----- oleg228-server - lustre-MGS0000 /dev/mapper/mds1_flakey oleg228-server - lustre-OST0000 /dev/mapper/ost1_flakey oleg228-server - lustre-OST0001 /dev/mapper/ost2_flakey oleg228-server - lustre-MDT0000 /dev/mapper/mds1_flakey oleg228-server - lustre-MDT0001 /dev/mapper/mds2_flakey --- END /tmp/ldev.conf --- ----- /tmp/nids ----- oleg228-server oleg228-server@tcp --- END /tmp/nids --- -- START OF LDEV OUTPUT -- lustre-MDT0000 lustre-MDT0001 lustre-MGS0000 lustre-OST0000 lustre-OST0001 --- END OF LDEV OUTPUT --- -- START OF EXPECTED OUTPUT -- lustre-MDT0000 lustre-MDT0001 lustre-MGS0000 lustre-OST0000 lustre-OST0001 --- END OF EXPECTED OUTPUT --- pdsh@oleg228-client: 
oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 94 (1s) == conf-sanity test 95: ldev should only allow one label filter ========================================================== 03:52:17 (1713426737) ----- /tmp/ldev.conf ----- oleg228-server - lustre-MGS0000 /dev/mapper/mds1_flakey oleg228-server - lustre-OST0000 /dev/mapper/ost1_flakey oleg228-server - lustre-OST0001 /dev/mapper/ost2_flakey oleg228-server - lustre-MDT0000 /dev/mapper/mds1_flakey oleg228-server - lustre-MDT0001 /dev/mapper/mds2_flakey --- END /tmp/ldev.conf --- ----- /tmp/nids ----- oleg228-server oleg228-server@tcp --- END /tmp/nids --- pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 95 (2s)
== conf-sanity test 96: ldev returns hostname and backend fs correctly in command sub ========================================================== 03:52:21 (1713426741) ----- /tmp/ldev.conf ----- oleg228-server - lustre-MGS0000 /dev/mapper/mds1_flakey oleg228-server - lustre-OST0000 /dev/mapper/ost1_flakey oleg228-server - lustre-OST0001 /dev/mapper/ost2_flakey oleg228-server - lustre-MDT0000 /dev/mapper/mds1_flakey oleg228-server - lustre-MDT0001 /dev/mapper/mds2_flakey --- END /tmp/ldev.conf --- ----- /tmp/nids ----- oleg228-server oleg228-server@tcp --- END /tmp/nids --- -- START OF LDEV OUTPUT -- oleg228-server-ldiskfs oleg228-server-ldiskfs oleg228-server-ldiskfs oleg228-server-ldiskfs oleg228-server-ldiskfs --- END OF LDEV OUTPUT --- -- START OF EXPECTED OUTPUT -- oleg228-server-ldiskfs oleg228-server-ldiskfs oleg228-server-ldiskfs oleg228-server-ldiskfs oleg228-server-ldiskfs --- END OF EXPECTED OUTPUT --- pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 96 (2s)
== conf-sanity test 97: ldev returns correct output when querying based on role ========================================================== 03:52:24 (1713426744) ----- /tmp/ldev.conf ----- oleg228-server - lustre-MGS0000 /dev/mapper/mds1_flakey oleg228-server - lustre-OST0000 /dev/mapper/ost1_flakey oleg228-server - lustre-OST0001 /dev/mapper/ost2_flakey oleg228-server - lustre-MDT0000 /dev/mapper/mds1_flakey oleg228-server - lustre-MDT0001 /dev/mapper/mds2_flakey --- END /tmp/ldev.conf --- ----- /tmp/nids ----- oleg228-server oleg228-server@tcp --- END /tmp/nids --- MDT role -- START OF LDEV OUTPUT -- lustre-MDT0000 lustre-MDT0001 --- END OF LDEV OUTPUT --- -- START OF EXPECTED OUTPUT -- lustre-MDT0000 lustre-MDT0001 --- END OF EXPECTED OUTPUT --- OST role -- START OF LDEV OUTPUT -- lustre-OST0000 lustre-OST0001 --- END OF LDEV OUTPUT --- -- START OF EXPECTED OUTPUT -- lustre-OST0000 lustre-OST0001 --- END OF EXPECTED OUTPUT --- MGS role -- START OF LDEV OUTPUT -- lustre-MGS0000 --- END OF LDEV OUTPUT --- -- START OF EXPECTED OUTPUT -- lustre-MGS0000 --- END OF EXPECTED OUTPUT --- pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 97 (2s)
== conf-sanity test 98: Buffer-overflow check while parsing mount_opts ========================================================== 03:52:28 (1713426748) start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre setup single mount lustre success error: mount options too long umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. 
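The "mount options too long" line above is test 98's expected failure path: the framework builds an oversized -o string and verifies that mount.lustre rejects it rather than overflowing a buffer. A minimal sketch of how such a string can be generated, assuming (not confirmed by this log) a rejection threshold near one page (4096 bytes):

```shell
# Sketch only: build a comma-separated mount-option string just over an
# assumed 4096-byte limit, the kind of input test 98 feeds to mount -o.
opts="user_xattr,flock"
while [ ${#opts} -le 4096 ]; do
    # each appended option is harmless filler; the length is what matters
    opts="$opts,context=\"some_selinux_label\""
done
echo "generated ${#opts} bytes of mount options"
```

A real run would then attempt the mount with this string and expect a clean "mount options too long" error instead of a crash.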
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 98 (39s) == conf-sanity test 99: Adding meta_bg option ============ 03:53:08 (1713426788) oleg228-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg228-server: /dev/mapper/ost1_flakey: catastrophic mode - not reading inode or group bitmaps params: --mgsnode=oleg228-server@tcp --fsname=lustre --ost --index=0 --param=sys.timeout=20 --backfstype=ldiskfs --device-size=200000 --mkfsoptions=\"-O ^resize_inode,meta_bg -b 4096 -E lazy_itable_init\" --reformat /dev/mapper/ost1_flakey Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Permanent disk data: Target: lustre:OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x62 (OST first_time update ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 device size = 4096MB formatting backing filesystem ldiskfs on /dev/mapper/ost1_flakey target name lustre:OST0000 kilobytes 200000 options -b 4096 -I 512 -q -O ^resize_inode,meta_bg,uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0000 -b 4096 -I 512 -q -O ^resize_inode,meta_bg,uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,lazy_journal_init,packed_meta_blocks -F /dev/mapper/ost1_flakey 200000k Writing CONFIGS/mountdata oleg228-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg228-server: /dev/mapper/ost1_flakey: catastrophic mode - not reading inode or group bitmaps Filesystem features: has_journal ext_attr dir_index filetype meta_bg extent flex_bg large_dir sparse_super large_file huge_file uninit_bg dir_nlink quota project PASS 99 (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 100: check lshowmount lists MGS, MDT, OST and 0@lo ========================================================== 03:53:20 (1713426800) Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs 
Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre setup single mount lustre success lustre-MDT0000: lustre-OST0000: umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 100 (56s) == conf-sanity test 101a: Race MDT->OST reconnection with create ========================================================== 03:54:18 (1713426858)
start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre seq.cli-lustre-OST0000-super.width=0x1ffffff - open/close 965 (time 1713426896.06 total 10.74 last 89.81) - open/close 1784 (time 1713426906.55 total 21.23 last 78.13) - open/close 2709 (time 1713426917.24 total 31.92 last 86.53) - open/close 3564 (time 1713426927.77 total 42.45 last 81.20) - open/close 4384 (time 1713426938.24 total 52.92 last 78.31) - open/close 5272 (time 1713426948.88 total 63.56 last 83.45) - open/close 6087 (time 1713426959.32 total 74.00 last 78.05) - open/close 6930 (time 1713426969.89 total 84.57 last 79.75) - open/close 7789 (time 1713426980.46 total 95.14 last 81.29) - open/close 8624 (time 1713426990.99 total 105.67 last 79.28) - open/close 9483 (time 1713427001.55 total 116.23 last 81.34) - open/close 10000 (time 1713427007.48 total 122.16 last 87.26) - open/close 10770 (time 1713427017.87 total 132.55 last 74.10) - open/close 11614 (time 1713427028.37 total 143.05 last 80.38) - open/close 12518 (time 1713427039.09 total 153.77 last 84.32) - open/close 13354 (time 1713427049.55 total 164.23 last 79.89) - open/close 14035 (time 1713427059.80 total 174.48 last 66.45) - open/close 14882 (time 1713427070.37 total 185.05 last 80.13) - open/close 15429 (time 1713427080.37 total 195.05 last 54.69) - open/close 16067 (time 1713427090.75 total 205.43 last 61.48) - open/close 16867 (time 1713427101.25 total 215.93 last 76.22) - open/close 17741 (time 1713427111.89 total 226.57 last 82.12) - open/close 18478 (time 1713427122.29 total 236.97 last 70.83) - open/close 20000 (time 1713427125.00 total 239.68 last 563.28) - open/close 25656 (time 1713427135.00 total 249.68 last 565.59) - open/close 30000 (time 1713427143.18 total 257.86 last 530.58) - open/close 35318 (time 1713427153.18 total 267.87 last 531.74) - open/close 40000 (time 1713427161.92 total 276.60 last 536.11) - open/close 45179 (time 1713427171.92 total 286.60 last 517.85) 
open(/mnt/lustre/d101a.conf-sanity/f101a.conf-sanity-49632) error: No space left on device total: 49632 open/close in 294.92 seconds: 168.29 ops/second - unlinked 0 (time 1713427181 ; total 0 ; last 0) - unlinked 10000 (time 1713427192 ; total 11 ; last 11) - unlinked 20000 (time 1713427203 ; total 22 ; last 11) - unlinked 30000 (time 1713427214 ; total 33 ; last 11) - unlinked 40000 (time 1713427225 ; total 44 ; last 11) unlink(/mnt/lustre/d101a.conf-sanity/f101a.conf-sanity-49632) error: No such file or directory total: 49632 unlinks in 54 seconds: 919.111084 unlinks/second umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. 
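The createmany summary above ("total: 49632 open/close in 294.92 seconds: 168.29 ops/second") is simply the ratio of the two reported totals; a quick recomputation from the numbers in this log:

```shell
# Recompute test 101a's reported rate: 49632 open/close operations
# over 294.92 seconds, rounded to two decimals as in the summary line.
awk 'BEGIN { printf "%.2f ops/second\n", 49632 / 294.92 }'
# prints: 168.29 ops/second
```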
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 101a (397s) == conf-sanity test 101b: Race events DISCONNECT and ACTIVE in osp ========================================================== 04:00:56 (1713427256) start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options:
'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre fail_loc=0x80002107 fail_val=20 stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 13 sec oleg228-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0000-osc-ffff8800b6d09000.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff8800b6d09000.ost_server_uuid in FULL state after 0 sec umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. 
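The race in test 101b is forced via the fault-injection knobs shown above, fail_loc=0x80002107 with fail_val=20. By Lustre's fault-injection convention (an assumption here, not stated in this log), the high bit 0x80000000 is the "fire once" flag and the low bits select the fail site; fail_val carries the injected delay. Decoding the value from this run:

```shell
# Decode fail_loc=0x80002107 under the assumed Lustre convention:
# bit 0x80000000 = one-shot flag, low 16 bits = fail-site identifier.
val=$((0x80002107))
printf 'one-shot: %s\n' $(( (val & 0x80000000) != 0 ))   # prints: one-shot: 1
printf 'fail site: 0x%04x\n' $((val & 0x0000ffff))       # prints: fail site: 0x2107
```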
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 101b (94s) SKIP: conf-sanity test_102 skipping excluded test 102 == conf-sanity test 103: rename filesystem name ========== 04:02:32 (1713427352) Checking servers environments Checking clients oleg228-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 mount lustre on /mnt/lustre..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800adde5800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800adde5800.idle_timeout=debug setting jobstats to procname_uid Setting lustre.sys.jobid_var from disable to procname_uid Waiting 90s for 'procname_uid' Updated after 6s: want 'procname_uid' got 'procname_uid' disable quota as required oleg228-server: Pool lustre.pool1 created oleg228-server: Pool lustre.lustre created oleg228-server: OST lustre-OST0000_UUID added to pool lustre.lustre Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping client oleg228-client.virtnet /mnt/lustre opts: Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server rename lustre to mylustre checking for existing Lustre data: found Read previous values: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: mylustre-MDT0000 Index: 0 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity cmd: tune2fs -f -L 'mylustre-MDT0000' '/dev/mapper/mds1_flakey' >/dev/null 2>&1 Writing 
CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: lustre-MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: mylustre-MDT0001 Index: 1 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity cmd: tune2fs -f -L 'mylustre-MDT0001' '/dev/mapper/mds2_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 Permanent disk data: Target: mylustre-OST0000 Index: 0 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 cmd: tune2fs -f -L 'mylustre-OST0000' '/dev/mapper/ost1_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: lustre-OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 Permanent disk data: Target: mylustre-OST0001 Index: 1 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 cmd: tune2fs -f -L 'mylustre-OST0001' '/dev/mapper/ost2_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata Checking servers environments Checking clients oleg228-client.virtnet environments Loading modules from 
/home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started mylustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started mylustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-mylustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started mylustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-mylustre-OST0001-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started mylustre-OST0001 mount mylustre on /mnt/lustre..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/mylustre /mnt/lustre Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/mylustre /mnt/lustre Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/mylustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.mylustre-OST0000-osc-ffff8800a8fb2800.idle_timeout=debug osc.mylustre-OST0001-osc-ffff8800a8fb2800.idle_timeout=debug disable quota as required File: '/mnt/lustre/d103.conf-sanity/test-framework.sh' Size: 291280 Blocks: 576 IO Block: 4194304 regular file Device: c3aa56ceh/3282720462d Inode: 162129704445280258 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 04:03:17.000000000 -0400 Modify: 2024-04-18 04:03:17.000000000 -0400 Change: 2024-04-18 04:03:17.000000000 -0400 Birth: - Pool: mylustre.pool1 Pool: mylustre.lustre mylustre-OST0000_UUID mylustre-OST0000_UUID oleg228-server: OST mylustre-OST0001_UUID added to pool mylustre.lustre Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping client oleg228-client.virtnet /mnt/lustre opts: Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server rename mylustre to tfs checking for existing Lustre data: found Read previous values: Target: mylustre-MDT0000 Index: 0 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: tfs-MDT0000 Index: 0 Lustre FS: tfs Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro 
Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity cmd: tune2fs -f -L 'tfs-MDT0000' '/dev/mapper/mds1_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: mylustre-MDT0001 Index: 1 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: tfs-MDT0001 Index: 1 Lustre FS: tfs Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity cmd: tune2fs -f -L 'tfs-MDT0001' '/dev/mapper/mds2_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: mylustre-OST0000 Index: 0 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 Permanent disk data: Target: tfs-OST0000 Index: 0 Lustre FS: tfs Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 cmd: tune2fs -f -L 'tfs-OST0000' '/dev/mapper/ost1_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: mylustre-OST0001 Index: 1 Lustre FS: mylustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 Permanent disk data: Target: tfs-OST0001 Index: 1 Lustre FS: tfs Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 cmd: tune2fs -f -L 'tfs-OST0001' '/dev/mapper/ost2_flakey' 
>/dev/null 2>&1 Writing CONFIGS/mountdata Checking servers environments Checking clients oleg228-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started tfs-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started tfs-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-tfs-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started tfs-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-tfs-OST0001-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started tfs-OST0001 mount tfs on /mnt/lustre..... 
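A note for readers following the rename records above (lustre → mylustre → tfs): each target label is derived from the filesystem name plus a target type and a fixed-width hex index (mylustre-MDT0001, tfs-OST0000, and so on). A minimal sketch of that naming convention in Python; `target_label` is our illustrative helper, not a Lustre utility, and the lowercase-hex assumption is ours (the indices in this log are 0 and 1, so case is not observable here):

```python
# Sketch of the Lustre target-label convention seen in the tunefs output
# above: "<fsname>-<MDT|OST><index as 4 zero-padded hex digits>".
# target_label() is a hypothetical helper, not part of Lustre.
def target_label(fsname: str, kind: str, index: int) -> str:
    if kind not in ("MDT", "OST"):
        raise ValueError(f"unknown target kind: {kind}")
    # Hex case for indices >= 10 is an assumption; this log only shows 0 and 1.
    return f"{fsname}-{kind}{index:04x}"

print(target_label("mylustre", "MDT", 1))  # mylustre-MDT0001
print(target_label("tfs", "OST", 0))       # tfs-OST0000
```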
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/tfs /mnt/lustre Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/tfs /mnt/lustre Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/tfs on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.tfs-OST0000-osc-ffff8800a8fb0000.idle_timeout=debug osc.tfs-OST0001-osc-ffff8800a8fb0000.idle_timeout=debug disable quota as required File: '/mnt/lustre/d103.conf-sanity/test-framework.sh' Size: 291280 Blocks: 576 IO Block: 4194304 regular file Device: 32e2fa5ah/853736026d Inode: 162129704445280258 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 04:03:17.000000000 -0400 Modify: 2024-04-18 04:03:17.000000000 -0400 Change: 2024-04-18 04:03:17.000000000 -0400 Birth: - Pool: tfs.pool1 Pool: tfs.lustre tfs-OST0000_UUID tfs-OST0001_UUID tfs-OST0000_UUID Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping client oleg228-client.virtnet /mnt/lustre opts: Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server rename tfs to lustre checking for existing Lustre data: found Read previous values: Target: tfs-MDT0000 Index: 0 Lustre FS: tfs Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity cmd: 
tune2fs -f -L 'lustre-MDT0000' '/dev/mapper/mds1_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: tfs-MDT0001 Index: 1 Lustre FS: tfs Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre-MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity cmd: tune2fs -f -L 'lustre-MDT0001' '/dev/mapper/mds2_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: tfs-OST0000 Index: 0 Lustre FS: tfs Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 Permanent disk data: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 cmd: tune2fs -f -L 'lustre-OST0000' '/dev/mapper/ost1_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata checking for existing Lustre data: found Read previous values: Target: tfs-OST0001 Index: 1 Lustre FS: tfs Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 Permanent disk data: Target: lustre-OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 cmd: tune2fs -f -L 'lustre-OST0001' '/dev/mapper/ost2_flakey' >/dev/null 2>&1 Writing CONFIGS/mountdata Checking servers environments Checking clients 
oleg228-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0001 mount lustre on /mnt/lustre..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800adde4000.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800adde4000.idle_timeout=debug disable quota as required PASS 103 (243s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 104a: Make sure user defined options are reflected in mount ========================================================== 04:06:37 (1713427597) mountfsopt: acl,user_xattr Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping client oleg228-client.virtnet /mnt/lustre opts:-f Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: 
/dev/mapper/ost2_flakey Starting mds1: -o localrecov,noacl /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 Starting mds2: -o localrecov,noacl /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost2_flakey Started lustre-OST0001 mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre setfacl: /mnt/lustre: Operation not supported PASS 104a (68s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 104b: Mount uses last flock argument ========================================================== 04:07:46 (1713427666) mount lustre with opts flock,localflock on /mnt/lustre3..... 
Starting client: oleg228-client.virtnet: -o flock,localflock oleg228-server@tcp:/lustre /mnt/lustre3 192.168.202.128@tcp:/lustre on /mnt/lustre3 type lustre (rw,checksum,localflock,nouser_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) umount lustre on /mnt/lustre3..... Stopping client oleg228-client.virtnet /mnt/lustre3 (opts:) mount lustre with opts localflock,flock on /mnt/lustre3..... Starting client: oleg228-client.virtnet: -o localflock,flock oleg228-server@tcp:/lustre /mnt/lustre3 192.168.202.128@tcp:/lustre on /mnt/lustre3 type lustre (rw,checksum,flock,nouser_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) umount lustre on /mnt/lustre3..... Stopping client oleg228-client.virtnet /mnt/lustre3 (opts:) mount lustre with opts localflock,flock,noflock on /mnt/lustre3..... Starting client: oleg228-client.virtnet: -o localflock,flock,noflock oleg228-server@tcp:/lustre /mnt/lustre3 umount lustre on /mnt/lustre3..... Stopping client oleg228-client.virtnet /mnt/lustre3 (opts:) PASS 104b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 105: check file creation for ro and rw bind mnt pt ========================================================== 04:07:51 (1713427671) umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:-f) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local oleg228-server: rmmod: ERROR: Module lustre is in use pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 modules unloaded. 
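Test 104b above verifies that when several mutually exclusive flock-related mount options are given, the last one wins: `flock,localflock` mounts with `localflock`, `localflock,flock` mounts with `flock`, and appending `noflock` disables flock entirely. A minimal sketch of that last-one-wins resolution in plain Python; this is an illustration of the behaviour being tested, not the actual llite option parser:

```python
# Last-one-wins resolution of mutually exclusive flock mount options,
# mirroring what conf-sanity test 104b checks. Illustrative only.
FLOCK_OPTS = {"flock", "localflock", "noflock"}

def resolve_flock(options: str, default: str = "noflock") -> str:
    mode = default
    for opt in options.split(","):
        if opt in FLOCK_OPTS:
            mode = opt  # each later occurrence overrides earlier ones
    return mode

print(resolve_flock("flock,localflock"))          # localflock
print(resolve_flock("localflock,flock"))          # flock
print(resolve_flock("localflock,flock,noflock"))  # noflock
```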
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: 
oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre touch: cannot touch '/tmp/d105.conf-sanity/f105.conf-sanity': Read-only file system umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. 
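The wait_import_state_mount lines above select parameters with a shell-style glob such as `osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid`, where `[-0-9a-f]*` matches the per-mount hex suffix visible elsewhere in the log (e.g. `ffff8800a8fb2800`). A quick way to see that the pattern matches an instantiated parameter name, using Python's `fnmatch` as a stand-in for lctl's own glob handling:

```python
from fnmatch import fnmatch

# Glob copied verbatim from the log; the instance name uses the same
# ffff... device-suffix scheme shown in the idle_timeout lines above.
pattern = "osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid"
param = "osc.lustre-OST0000-osc-ffff8800a8fb2800.ost_server_uuid"

print(fnmatch(param, pattern))  # True: [-0-9a-f] matches 'f', * matches the rest
print(fnmatch("mdc.lustre-MDT0000-mdc-ffff8800a8fb2800.mds_server_uuid",
              pattern))         # False: mdc params don't match an osc glob
```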
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 105 (96s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory SKIP: conf-sanity test_106 skipping SLOW test 106 error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 107: Unknown config param should not fail target mounting ========================================================== 04:09:29 (1713427769) start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: libkmod: kmod_module_get_holders: 
could not open '/sys/module/acpi_cpufreq/holders': No such file or directory oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. 
Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid umount lustre on /mnt/lustre..... stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server stop mds service on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. 
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 107 (145s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 108a: migrate from ldiskfs to ZFS ==== 04:11:56 (1713427916) SKIP: conf-sanity test_108a zfs only test SKIP 108a (1s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory 
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 108b: migrate from ZFS to ldiskfs ==== 04:11:59 (1713427919) Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' oleg228-server: 1+0 records in oleg228-server: 1+0 records out oleg228-server: 1048576 bytes (1.0 MB) copied, 0.00262466 s, 400 MB/s oleg228-server: 1+0 records in oleg228-server: 1+0 records out oleg228-server: 1048576 bytes (1.0 MB) copied, 0.00469142 s, 224 MB/s oleg228-server: 1+0 records in oleg228-server: 1+0 records out oleg228-server: 1048576 bytes (1.0 MB) copied, 0.00429389 s, 244 MB/s oleg228-server: 1+0 records in oleg228-server: 1+0 records out oleg228-server: 1048576 bytes (1.0 MB) copied, 0.0045962 s, 228 MB/s Permanent disk data: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x45 (MDT MGS update ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: formatting backing filesystem ldiskfs on /dev/loop0 target name lustre-MDT0000 kilobytes 200000 options -I 1024 -i 2560 -q -O 
uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre-MDT0000 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F /dev/loop0 200000k Writing CONFIGS/mountdata Permanent disk data: Target: lustre-MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x41 (MDT update ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp formatting backing filesystem ldiskfs on /dev/loop0 target name lustre-MDT0001 kilobytes 200000 options -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre-MDT0001 -I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F /dev/loop0 200000k Writing CONFIGS/mountdata Permanent disk data: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x42 (OST update ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp formatting backing filesystem ldiskfs on /dev/loop0 target name lustre-OST0000 kilobytes 200000 options -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E resize="4290772992",lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre-OST0000 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E resize="4290772992",lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F /dev/loop0 200000k Writing CONFIGS/mountdata 
Permanent disk data: Target: lustre-OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x42 (OST update ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp formatting backing filesystem ldiskfs on /dev/loop0 target name lustre-OST0001 kilobytes 200000 options -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E resize="4290772992",lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F mkfs_cmd = mke2fs -j -b 4096 -L lustre-OST0001 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E resize="4290772992",lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F /dev/loop0 200000k Writing CONFIGS/mountdata changing server nid... mounting mdt1 from backup... mounting mdt2 from backup... mounting ost1 from backup... mounting ost2 from backup... Started LFSCK on the device lustre-MDT0000: scrub Started LFSCK on the device lustre-MDT0001: scrub Started LFSCK on the device lustre-OST0000: scrub Started LFSCK on the device lustre-OST0001: scrub mounting client... check list total 12 drwxr-xr-x 2 root root 4096 Jan 20 2018 d1 -rw-r--r-- 1 root root 0 Jan 20 2018 f0 -rw-r--r-- 1 root root 4067 Jan 20 2018 README -rw-r--r-- 1 root root 331 Jan 20 2018 regression check truncate && write check create check read && write && append verify data done. cleanup... 
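For readers decoding the formatting records in test 108b above: mkfs.lustre assembles an mke2fs invocation from per-target options (inode size `-I`, bytes-per-inode `-i`, feature list `-O`, extended options `-E`) and prints it as `mkfs_cmd`. A sketch that rebuilds the MDT command string exactly as printed in the log; `build_mke2fs_cmd` is our illustrative helper, and the fixed `-j -b 4096` prefix is an assumption drawn only from this log's output, not a claim about mkfs.lustre in general:

```python
# Rebuild the mkfs_cmd string for lustre-MDT0000 as printed in the log.
# build_mke2fs_cmd() is a hypothetical helper, not part of mkfs.lustre.
def build_mke2fs_cmd(label: str, options: str, device: str, kbytes: int) -> str:
    # "-j -b 4096" (journal, 4 KiB blocks) matches every mkfs_cmd in this
    # log; the target size is given in kilobytes with a 'k' suffix.
    return f"mke2fs -j -b 4096 -L {label} {options} {device} {kbytes}k"

# Option string copied from the lustre-MDT0000 record above.
mdt_opts = ('-I 1024 -i 2560 -q -O uninit_bg,^extents,dirdata,dir_nlink,'
            'quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg '
            '-E lazy_journal_init,lazy_itable_init="0",packed_meta_blocks -F')
print(build_mke2fs_cmd("lustre-MDT0000", mdt_opts, "/dev/loop0", 200000))
```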
PASS 108b (75s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 109a: test lctl clear_conf fsname ==== 04:13:16 (1713427996) Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov 
/dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre Setting lustre-MDT0000.mdd.atime_diff from 60 to 62 Waiting 90s for '62' Updated after 3s: want '62' got '62' Setting lustre-MDT0000.mdd.atime_diff from 62 to 63 Waiting 90s for '63' Updated after 5s: want '63' got '63' Setting lustre.llite.max_read_ahead_mb from 256 to 32 Waiting 90s for '32' Setting lustre.llite.max_read_ahead_mb from 32 to 64 Waiting 90s for '64' Updated after 7s: want '64' got '64' oleg228-server: Pool lustre.pool1 created oleg228-server: OST lustre-OST0000_UUID added to pool lustre.pool1 oleg228-server: OST lustre-OST0000_UUID removed from pool lustre.pool1 oleg228-server: OST lustre-OST0000_UUID added to pool lustre.pool1 umount lustre on /mnt/lustre..... 
Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server start mds service on oleg228-server Starting mds1: -o localrecov -o nosvc /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all Start /dev/mapper/mds1_flakey without service Started lustre-MDT0000 oleg228-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg228-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server oleg228-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg228-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 
Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre Destroy the created pools: pool1 lustre.pool1 oleg228-server: OST lustre-OST0000_UUID removed from pool lustre.pool1 oleg228-server: Pool lustre.pool1 destroyed umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 109a (162s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 
'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 109b: test lctl clear_conf one config ========================================================== 04:16:00 (1713428160) Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: 
oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre Setting lustre-MDT0000.mdd.atime_diff from 60 to 62 Waiting 90s for '62' Updated after 4s: want '62' got '62' Setting lustre-MDT0000.mdd.atime_diff from 62 to 63 Waiting 90s for '63' Updated after 8s: want '63' got '63' Setting lustre.llite.max_read_ahead_mb from 256 to 32 Waiting 90s for '32' Updated after 2s: want '32' got '32' Setting lustre.llite.max_read_ahead_mb from 32 to 64 Waiting 90s for '64' Updated after 8s: want '64' got '64' oleg228-server: Pool lustre.pool1 created oleg228-server: OST lustre-OST0000_UUID added to pool lustre.pool1 oleg228-server: OST lustre-OST0000_UUID removed from pool lustre.pool1 oleg228-server: OST lustre-OST0000_UUID added to pool lustre.pool1 umount lustre on /mnt/lustre..... 
Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server start mds service on oleg228-server Starting mds1: -o localrecov -o nosvc /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all Start /dev/mapper/mds1_flakey without service Started lustre-MDT0000 oleg228-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg228-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server oleg228-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg228-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 
Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre Destroy the created pools: pool1 lustre.pool1 oleg228-server: OST lustre-OST0000_UUID removed from pool lustre.pool1 oleg228-server: Pool lustre.pool1 destroyed umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 109b (194s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory SKIP: conf-sanity test_110 skipping ALWAYS excluded test 110 SKIP: conf-sanity test_111 skipping SLOW test 111 error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory 
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 112a: mount OST with no_create option ========================================================== 04:19:17 (1713428357) start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all 
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
start ost2 service on oleg228-server
Starting ost2: -o localrecov,no_create /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost2_flakey
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg228-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg228-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0000-osc-ffff88012dbb9000.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff88012dbb9000.ost_server_uuid in FULL state after 0 sec
oleg228-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0001-osc-ffff88012dbb9000.ost_server_uuid 50
osc.lustre-OST0001-osc-ffff88012dbb9000.ost_server_uuid in FULL state after 0 sec
/mnt/lustre/f112a.conf-sanity.1
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx		objid		objid		group
	     0	           67	         0x43	  0x280000401

UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID        95248        1704       84888   2% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID        95248        1540       85052   2% /mnt/lustre[MDT:1]
lustre-OST0000_UUID       142216        1528      126688   2% /mnt/lustre[OST:0]
lustre-OST0001_UUID       142216        1396      126820   2% /mnt/lustre[OST:1] N

filesystem_summary:       284432        2924      253508   2% /mnt/lustre

obdfilter.lustre-OST0001.no_create=0
stop ost2 service on oleg228-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server
umount
lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 112a (72s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 112b: mount MDT with no_create option ========================================================== 04:20:30 (1713428430) start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc 
options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid start mds service on oleg228-server Starting mds2: -o localrecov -o no_create /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid start ost2 service on oleg228-server Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0001 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid mount lustre on 
/mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
oleg228-server: oleg228-server.virtnet: executing wait_import_state (FULL|IDLE) os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 50
oleg228-server: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID        95248        1704       84888   2% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID        95248        1544       85048   2% /mnt/lustre[MDT:1] N
lustre-OST0000_UUID       142216        1532      126684   2% /mnt/lustre[OST:0]
lustre-OST0001_UUID       142216        1532      126684   2% /mnt/lustre[OST:1]

filesystem_summary:       284432        3064      253368   2% /mnt/lustre

100 0
mdt.lustre-MDT0001.no_create=0
1 0
99 1
stop ost2 service on oleg228-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server
umount lustre on /mnt/lustre.....
Stopping client oleg228-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
unloading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing unload_modules_local
modules unloaded.
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 112b (137s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 113: Shadow mountpoint correctly report ro/rw for mounts ========================================================== 04:22:49 (1713428569) Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) Checking servers environments Checking clients oleg228-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online 
CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0001 mount lustre on /mnt/lustre..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff8800ac13b800.idle_timeout=debug osc.lustre-OST0001-osc-ffff8800ac13b800.idle_timeout=debug setting jobstats to procname_uid Setting lustre.sys.jobid_var from disable to procname_uid Waiting 90s for 'procname_uid' Updated after 3s: want 'procname_uid' got 'procname_uid' disable quota as required /dev/mapper/mds1_flakey on /mnt/lustre-mds1 type lustre (rw,svname=lustre-MDT0000,mgs,osd=osd-ldiskfs,user_xattr,errors=remount-ro) /dev/mapper/mds2_flakey on /mnt/lustre-mds2 type lustre (rw,svname=lustre-MDT0001,mgsnode=192.168.202.128@tcp,osd=osd-ldiskfs) /dev/mapper/ost1_flakey on /mnt/lustre-ost1 type lustre (rw,svname=lustre-OST0000,mgsnode=192.168.202.128@tcp,osd=osd-ldiskfs) /dev/mapper/ost2_flakey on /mnt/lustre-ost2 type lustre (rw,svname=lustre-OST0001,mgsnode=192.168.202.128@tcp,osd=osd-ldiskfs) /dev/mapper/ost1_flakey on /mnt/lustre-ost1 type lustre (rw,svname=lustre-OST0000,mgsnode=192.168.202.128@tcp,osd=osd-ldiskfs) /dev/mapper/ost2_flakey on /mnt/lustre-ost2 type lustre (rw,svname=lustre-OST0001,mgsnode=192.168.202.128@tcp,osd=osd-ldiskfs) Shadow Mountpoint correctly reports rw for ldiskfs Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping client oleg228-client.virtnet /mnt/lustre opts: Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server Checking servers environments Checking clients 
oleg228-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0001 mount lustre on /mnt/lustre..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Started clients oleg228-client.virtnet:
192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012ffd2000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012ffd2000.idle_timeout=debug
disable quota as required
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping client oleg228-client.virtnet /mnt/lustre opts:
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server
PASS 113 (147s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: conf-sanity test_114 skipping SLOW test 114
SKIP: conf-sanity test_115 skipping excluded test 115
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 116: big size MDT support ============ 04:25:19 (1713428719)
/usr/sbin/mkfs.xfs
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
meta-data=/tmp/f116.conf-sanity-mdt0 isize=512    agcount=4, agsize=67108864 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=268435456, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=131072, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

   Permanent disk data:
Target:     lustre:MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x61
              (MDT first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

checking for existing Lustre data: not found
formatting backing filesystem ldiskfs on /dev/loop1
	target name   lustre:MDT0000
	kilobytes     18253611008
	options       -i 16777216 -b 4096 -J size=4096 -I 1024 -q -O uninit_bg,extents,dirdata,dir_nlink,quota,project,huge_file,64bit,^resize_inode,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,lazy_itable_init,packed_meta_blocks -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000 -i 16777216 -b 4096 -J size=4096 -I 1024 -q -O uninit_bg,extents,dirdata,dir_nlink,quota,project,huge_file,64bit,^resize_inode,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_itable_init,lazy_journal_init,lazy_itable_init,packed_meta_blocks -F /dev/loop1 18253611008k
Writing CONFIGS/mountdata
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f)
oleg228-server: oleg228-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: libkmod:
kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg228-server: 
os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server PASS 116 (61s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 117: lctl get_param return errors properly ========================================================== 04:26:21 (1713428781) start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on 
/mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre ost.OSS.ost_io.nrs_policies=fifo oleg228-server: error: read_param: '/sys/kernel/debug/lustre/ost/OSS/ost_io/nrs_tbf_rule': No such device pdsh@oleg228-client: oleg228-server: ssh exited with exit code 19 umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 117 (37s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or 
directory == conf-sanity test 119: writeconf on slave mdt shouldn't duplicate mdc/osp and crash ========================================================== 04:27:00 (1713428820) oleg228-server: error: get_param: param_path 'debug': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 oleg228-server: error: set_param: param_path 'debug': No such file or directory oleg228-server: error: set_param: setting 'debug'='+config': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: opening /dev/lnet failed: No such file or directory oleg228-server: hint: the kernel modules may not be loaded oleg228-server: IOC_LIBCFS_CLEAR_DEBUG failed: No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing 
set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server debug_mb=84 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Waiting 300s for '1' pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 
Waiting 280s for '1'
Waiting 270s for '1'
Waiting 260s for '1'
Waiting 250s for '1'
Waiting 230s for '1'
Waiting 220s for '1'
Waiting 210s for '1'
Waiting 200s for '1'
Waiting 180s for '1'
Waiting 160s for '1'
Waiting 150s for '1'
Waiting 140s for '1'
Waiting 130s for '1'
Waiting 120s for '1'
Waiting 110s for '1'
Waiting 100s for '1'
Waiting 90s for '1'
Waiting 80s for '1'
Waiting 70s for '1'
Waiting 60s for '1'
Waiting 50s for '1'
Waiting 0s for '1'
Update not seen after 300s: want '1' got '0'
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
debug_mb=84
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Waiting 300s for '1'
Waiting 250s for '1'
Waiting 240s for '1'
Waiting 230s for '1'
Waiting 220s for '1'
Waiting 210s for '1'
Waiting 200s for '1'
Waiting 190s for '1'
Waiting 180s for '1'
Waiting 170s for '1'
Waiting 160s for '1'
Waiting 130s for '1'
Waiting 110s for '1'
Waiting 100s for '1'
pdsh@oleg228-client: oleg228-server: ssh exited
with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Waiting 90s for '1' pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Waiting 80s for '1' pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Waiting 70s for '1' pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: 
ssh exited with exit code 1 Waiting 60s for '1' pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Waiting 50s for '1' pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Waiting 40s for '1' pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Waiting 30s for '1' pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Waiting 20s for '1' pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Waiting 10s for '1' pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Waiting 0s for '1' Update not seen after 300s: want '1' got '0' stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server debug_mb=84 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Waiting 300s for '1' pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 
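The behaviour above — poll a value over ssh, print a countdown, give up after 300 s — follows a standard wait-update pattern. A minimal sketch of that loop; the function name and arguments are illustrative, not the suite's actual helper, and the check command here runs locally for simplicity:

```shell
# Illustrative polling loop in the style of the suite's wait/update helpers.
# check_cmd is any command whose stdout is compared against the wanted value.
wait_for_value() {
    local check_cmd=$1 expect=$2 max=${3:-300} step=${4:-10}
    local waited=0 got=""
    while (( waited < max )); do
        got=$(bash -c "$check_cmd" 2>/dev/null)
        [[ "$got" == "$expect" ]] && return 0
        echo "Waiting $((max - waited))s for '$expect'"
        sleep "$step"
        (( waited += step ))
    done
    echo "Update not seen after ${max}s: want '$expect' got '${got:-0}'"
    return 1
}
```

In the log, each poll also triggers the `pdsh ... ssh exited with exit code 1` line, so the loop never sees the wanted value and times out with "want '1' got '0'".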
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
[the ssh error above repeated at every poll; countdown messages "Waiting 280s for '1'" down through "Waiting 10s for '1'" elided]
Waiting 0s for '1'
Update not seen after 300s: want '1' got '0'
debug_mb=21
debug_mb=21
debug=-config
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f)
Stopping client oleg228-client.virtnet /mnt/lustre opts:-f
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
oleg228-server: oleg228-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
PASS 119 (1004s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 120: cross-target rename should not create bad symlinks ========================================================== 04:43:46 (1713429826)
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
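The teardown-and-check step that follows runs `e2fsck -f -n` against the unmounted MDT device and inspects its result. A small sketch of driving such a read-only check from a script; the helper names are illustrative, the device path is the one the suite uses here, and the exit-code decoding follows e2fsck's documented bitmask:

```shell
# Illustrative helpers (not part of the test framework) for a read-only
# fsck step: run e2fsck with -f -n, then decode its bitmask exit status.
fsck_status() {
    # Per e2fsck(8): bit 0 (=1) errors corrected, bit 2 (=4) errors left
    # uncorrected. With -n nothing is corrected, so bit 2 is the failure case.
    local rc=$1
    if (( rc & 4 )); then
        echo "uncorrected errors"
        return 1
    elif (( rc & 1 )); then
        echo "errors corrected"
        return 0
    fi
    echo "clean"
}

check_target() {
    # Example device path; the log below checks /dev/mapper/mds1_flakey.
    local dev=${1:-/dev/mapper/mds1_flakey}
    e2fsck -f -n "$dev"   # -f: force check, -n: read-only, answer "no"
    fsck_status $?
}
```

The per-inode "badness" lines in the output below come from e2fsck's `-d` debug flag and do not by themselves indicate corruption severe enough to fail the check.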
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping client oleg228-client.virtnet /mnt/lustre opts:
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8
oleg228-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg228-server: Use max possible thread num: 1 instead
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 3)
[Thread 0] jumping to group 0
[Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084
[the same badness message repeated for inodes 82-164 and 166-168]
[Thread 0] group 1 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 26718 badness 0 to 2 for 10084
[the same badness message repeated for inodes 26720-26724]
[Thread 0] group 2 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 53337 badness 0 to 2 for 10084
[Thread 0] group 3 finished
[Thread 0] Pass 1: Memory used: 268k/0k (141k/128k), time: 0.00/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 260.89MB/s
[Thread 0] Scanned group range [0, 3), inodes 280
Pass 2: Checking directory structure
Pass 2: Memory used: 268k/0k (98k/171k), time: 0.01/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 142.69MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 268k/0k (98k/171k), time: 0.02/ 0.01/ 0.01
Pass 3: Memory used: 268k/0k (96k/173k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 268k/0k (67k/201k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 268k/0k (67k/202k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 313.97MB/s
     279 inodes used (0.35%, out of 79992)
       5 non-contiguous files (1.8%)
       0 non-contiguous directories (0.0%)
         # of inodes with ind/dind/tind blocks: 0/0/0
   24583 blocks used (49.17%, out of 50000)
       0 bad blocks
       1 large file
     149 regular files
     119 directories
       0 character device files
       0 block device files
       0 fifos
       0 links
       1 symbolic link (1 fast symbolic link)
       0 sockets
------------
     269 files
Memory used: 268k/0k (66k/203k), time: 0.03/ 0.01/ 0.01
I/O read: 1MB, write: 0MB, rate: 37.57MB/s
PASS 120 (43s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 121: failover MGS ==================== 04:44:31 (1713429871)
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client:
oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid Failing mgs on oleg228-server Stopping /mnt/lustre-mds1 (opts:) on oleg228-server 04:44:45 (1713429885) shut down Failover mgs to oleg228-server mount facets: mgs Starting mgs: /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 04:44:59 (1713429899) targets are mounted 04:44:59 (1713429899) facet_failover done pdsh@oleg228-client: oleg228-client: ssh exited with exit code 95 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mgc.*.mgs_server_uuid pdsh@oleg228-client: oleg228-client: ssh exited with exit code 95 stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing 
wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid Failing mgs on oleg228-server Stopping /mnt/lustre-mds1 (opts:) on oleg228-server 04:45:21 (1713429921) shut down Failover mgs to oleg228-server mount facets: mgs Starting mgs: /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 04:45:34 (1713429934) targets are mounted 04:45:34 (1713429934) facet_failover done pdsh@oleg228-client: oleg228-client: ssh exited with exit code 95 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mgc.*.mgs_server_uuid pdsh@oleg228-client: oleg228-client: ssh exited with exit code 95 stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server PASS 121 (77s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 122a: Check OST sequence update ====== 04:45:49 (1713429949) Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey 
Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory fail_loc=0x00001e0 start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) 
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre fail_loc=0 total: 1000 open/close in 3.67 seconds: 272.29 ops/second umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2 pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 PASS 122a (66s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 123aa: llog_print works with FIDs and 
simple names ========================================================== 04:46:57 (1713430017) start mds service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre 1 UP mgs MGS MGS 7 - { index: 2, event: attach, device: lustre-clilov, type: lov, UUID: lustre-clilov_UUID } - { index: 3, event: setup, device: lustre-clilov, UUID: } - { index: 6, event: attach, device: lustre-clilmv, type: lmv, UUID: lustre-clilmv_UUID } - { index: 7, event: setup, device: lustre-clilmv, UUID: } - { index: 10, event: new_profile, name: lustre-client, lov: lustre-clilov, lmv: lustre-clilmv } - { index: 2, event: attach, device: lustre-clilov, type: lov, UUID: lustre-clilov_UUID } - { index: 3, event: setup, device: lustre-clilov, UUID: } - { index: 6, event: attach, device: lustre-clilmv, type: lmv, UUID: lustre-clilmv_UUID } - { index: 7, event: setup, device: lustre-clilmv, UUID: } - { index: 10, event: new_profile, name: lustre-client, lov: lustre-clilov, lmv: lustre-clilmv } - { index: 13, event: add_uuid, nid: 192.168.202.128@tcp(0x20000c0a8ca80), node: 192.168.202.128@tcp } - { index: 14, event: attach, device: lustre-MDT0000-mdc, type: mdc, UUID: lustre-clilmv_UUID } - { index: 15, event: setup, device: lustre-MDT0000-mdc, UUID: lustre-MDT0000_UUID, node: 192.168.202.128@tcp } - { index: 16, event: add_mdc, device: lustre-clilmv, mdt: lustre-MDT0000_UUID, index: 0, gen: 1, UUID: lustre-MDT0000-mdc_UUID } - { index: 22, event: add_uuid, nid: 192.168.202.128@tcp(0x20000c0a8ca80), node: 192.168.202.128@tcp } - { index: 23, event: attach, device: lustre-MDT0001-mdc, type: mdc, UUID: lustre-clilmv_UUID } - { index: 24, event: setup, device: lustre-MDT0001-mdc, UUID: lustre-MDT0001_UUID, node: 192.168.202.128@tcp } - { index: 25, event: add_mdc, device: lustre-clilmv, mdt: lustre-MDT0001_UUID, index: 1, gen: 1, UUID: lustre-MDT0001-mdc_UUID } - { index: 31, event: add_uuid, nid: 192.168.202.128@tcp(0x20000c0a8ca80), node: 192.168.202.128@tcp } - { index: 32, event: attach, device: lustre-OST0000-osc, type: osc, UUID: lustre-clilov_UUID } - { 
index: 33, event: setup, device: lustre-OST0000-osc, UUID: lustre-OST0000_UUID, node: 192.168.202.128@tcp } - { index: 34, event: add_osc, device: lustre-clilov, ost: lustre-OST0000_UUID, index: 0, gen: 1 } - { index: 37, event: set_timeout, num: 0x000014, parameter: sys.timeout=20 } PASS 123aa (32s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ab: llog_print params output values from set_param -P ========================================================== 04:47:31 (1713430051) PASS 123ab (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ac: llog_print with --start and --end ========================================================== 04:47:36 (1713430056) PASS 123ac (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ad: llog_print shows all records == 04:47:40 (1713430060) PASS 123ad (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ae: llog_cancel can cancel requested record ========================================================== 04:47:45 (1713430065) - { index: 11, event: set_param, device: general, parameter: osc.*.max_dirty_mb, value: 467 } - { index: 46, event: conf_param, device: lustre-OST0000-osc, parameter: osc.max_dirty_mb=467 } - { index: 2, event: attach, device: lustre-clilov, type: lov, UUID: lustre-clilov_UUID } - { index: 3, event: setup, device: lustre-clilov, UUID: } - { index: 6, event: attach, device: lustre-clilmv, type: lmv, UUID: lustre-clilmv_UUID } - { index: 7, event: setup, device: lustre-clilmv, UUID: } - { index: 10, event: new_profile, name: lustre-client, lov: lustre-clilov, lmv: lustre-clilmv } - { index: 13, event: add_uuid, nid: 192.168.202.128@tcp(0x20000c0a8ca80), node: 192.168.202.128@tcp } - { index: 14, event: attach, device: lustre-MDT0000-mdc, type: mdc, UUID: lustre-clilmv_UUID } 
- { index: 15, event: setup, device: lustre-MDT0000-mdc, UUID: lustre-MDT0000_UUID, node: 192.168.202.128@tcp } - { index: 16, event: add_mdc, device: lustre-clilmv, mdt: lustre-MDT0000_UUID, index: 0, gen: 1, UUID: lustre-MDT0000-mdc_UUID } - { index: 22, event: add_uuid, nid: 192.168.202.128@tcp(0x20000c0a8ca80), node: 192.168.202.128@tcp } - { index: 23, event: attach, device: lustre-MDT0001-mdc, type: mdc, UUID: lustre-clilmv_UUID } - { index: 24, event: setup, device: lustre-MDT0001-mdc, UUID: lustre-MDT0001_UUID, node: 192.168.202.128@tcp } - { index: 25, event: add_mdc, device: lustre-clilmv, mdt: lustre-MDT0001_UUID, index: 1, gen: 1, UUID: lustre-MDT0001-mdc_UUID } - { index: 31, event: add_uuid, nid: 192.168.202.128@tcp(0x20000c0a8ca80), node: 192.168.202.128@tcp } - { index: 32, event: attach, device: lustre-OST0000-osc, type: osc, UUID: lustre-clilov_UUID } - { index: 33, event: setup, device: lustre-OST0000-osc, UUID: lustre-OST0000_UUID, node: 192.168.202.128@tcp } - { index: 34, event: add_osc, device: lustre-clilov, ost: lustre-OST0000_UUID, index: 0, gen: 1 } - { index: 37, event: set_timeout, num: 0x000014, parameter: sys.timeout=20 } - { index: 43, event: conf_param, device: lustre-OST0000-osc, parameter: osc.max_pages_per_rpc=1024 } - { index: 46, event: conf_param, device: lustre-OST0000-osc, parameter: osc.max_dirty_mb=467 } - { index: 2, event: attach, device: lustre-clilov, type: lov, UUID: lustre-clilov_UUID } - { index: 3, event: setup, device: lustre-clilov, UUID: } - { index: 6, event: attach, device: lustre-clilmv, type: lmv, UUID: lustre-clilmv_UUID } - { index: 7, event: setup, device: lustre-clilmv, UUID: } - { index: 10, event: new_profile, name: lustre-client, lov: lustre-clilov, lmv: lustre-clilmv } - { index: 13, event: add_uuid, nid: 192.168.202.128@tcp(0x20000c0a8ca80), node: 192.168.202.128@tcp } - { index: 14, event: attach, device: lustre-MDT0000-mdc, type: mdc, UUID: lustre-clilmv_UUID } - { index: 15, event: setup, device: 
lustre-MDT0000-mdc, UUID: lustre-MDT0000_UUID, node: 192.168.202.128@tcp } - { index: 16, event: add_mdc, device: lustre-clilmv, mdt: lustre-MDT0000_UUID, index: 0, gen: 1, UUID: lustre-MDT0000-mdc_UUID } - { index: 22, event: add_uuid, nid: 192.168.202.128@tcp(0x20000c0a8ca80), node: 192.168.202.128@tcp } - { index: 23, event: attach, device: lustre-MDT0001-mdc, type: mdc, UUID: lustre-clilmv_UUID } - { index: 24, event: setup, device: lustre-MDT0001-mdc, UUID: lustre-MDT0001_UUID, node: 192.168.202.128@tcp } - { index: 25, event: add_mdc, device: lustre-clilmv, mdt: lustre-MDT0001_UUID, index: 1, gen: 1, UUID: lustre-MDT0001-mdc_UUID } - { index: 31, event: add_uuid, nid: 192.168.202.128@tcp(0x20000c0a8ca80), node: 192.168.202.128@tcp } - { index: 32, event: attach, device: lustre-OST0000-osc, type: osc, UUID: lustre-clilov_UUID } - { index: 33, event: setup, device: lustre-OST0000-osc, UUID: lustre-OST0000_UUID, node: 192.168.202.128@tcp } - { index: 34, event: add_osc, device: lustre-clilov, ost: lustre-OST0000_UUID, index: 0, gen: 1 } - { index: 37, event: set_timeout, num: 0x000014, parameter: sys.timeout=20 } - { index: 43, event: conf_param, device: lustre-OST0000-osc, parameter: osc.max_pages_per_rpc=1024 } PASS 123ae (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123af: llog_catlist can show all config files correctly ========================================================== 04:47:54 (1713430074) lctl --device MGS llog_catlist ... orig_clist: lustre-MDT0000 lustre-OST0000 lustre-MDT0001 lustre-client fail_loc=0x131b fail_val=2 new_clist: lustre-OST0000 lustre-MDT0001 lustre-client fail_loc=0 done lctl --device lustre-MDT0000 llog_catlist ... 
orig_clist: [0x1:0x2:0x0] fail_loc=0x131b fail_val=2 new_clist: fail_loc=0 done fail_loc=0 PASS 123af (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ag: llog_print skips values deleted by set_param -P -d ========================================================== 04:48:00 (1713430080) PASS 123ag (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ah: del_ost cancels config log entries correctly ========================================================== 04:48:06 (1713430086) del_ost: dry run for target lustre-OST0000 config_log: lustre-MDT0000 [DRY RUN] cancel catalog 'lustre-MDT0000:41':"- { index: 41, event: conf_param, device: lustre-OST0000-osc-MDT0000, parameter: osc.max_dirty_mb=467 }" [DRY RUN] cancel catalog 'lustre-MDT0000:29':"- { index: 29, event: add_osc, device: lustre-MDT0000-mdtlov, ost: lustre-OST0000_UUID, index: 0, gen: 1 }" [DRY RUN] cancel catalog 'lustre-MDT0000:28':"- { index: 28, event: setup, device: lustre-OST0000-osc-MDT0000, UUID: lustre-OST0000_UUID, node: 192.168.202.128@tcp }" [DRY RUN] cancel catalog 'lustre-MDT0000:27':"- { index: 27, event: attach, device: lustre-OST0000-osc-MDT0000, type: osc, UUID: lustre-MDT0000-mdtlov_UUID }" del_ost: no catalog entry deleted config_log: lustre-MDT0001 [DRY RUN] cancel catalog 'lustre-MDT0001:38':"- { index: 38, event: conf_param, device: lustre-OST0000-osc-MDT0001, parameter: osc.max_dirty_mb=467 }" [DRY RUN] cancel catalog 'lustre-MDT0001:26':"- { index: 26, event: add_osc, device: lustre-MDT0001-mdtlov, ost: lustre-OST0000_UUID, index: 0, gen: 1 }" [DRY RUN] cancel catalog 'lustre-MDT0001:25':"- { index: 25, event: setup, device: lustre-OST0000-osc-MDT0001, UUID: lustre-OST0000_UUID, node: 192.168.202.128@tcp }" [DRY RUN] cancel catalog 'lustre-MDT0001:24':"- { index: 24, event: attach, device: lustre-OST0000-osc-MDT0001, type: osc, UUID: lustre-MDT0001-mdtlov_UUID }" 
del_ost: no catalog entry deleted config_log: lustre-client [DRY RUN] cancel catalog 'lustre-client:34':"- { index: 34, event: add_osc, device: lustre-clilov, ost: lustre-OST0000_UUID, index: 0, gen: 1 }" [DRY RUN] cancel catalog 'lustre-client:33':"- { index: 33, event: setup, device: lustre-OST0000-osc, UUID: lustre-OST0000_UUID, node: 192.168.202.128@tcp }" [DRY RUN] cancel catalog 'lustre-client:32':"- { index: 32, event: attach, device: lustre-OST0000-osc, type: osc, UUID: lustre-clilov_UUID }" del_ost: no catalog entry deleted config_log: lustre-MDT0000 cancel catalog lustre-MDT0000 log_idx 41: done cancel catalog lustre-MDT0000 log_idx 29: done cancel catalog lustre-MDT0000 log_idx 28: done cancel catalog lustre-MDT0000 log_idx 27: done del_ost: cancelled 4 catalog entries config_log: lustre-MDT0001 cancel catalog lustre-MDT0001 log_idx 38: done cancel catalog lustre-MDT0001 log_idx 26: done cancel catalog lustre-MDT0001 log_idx 25: done cancel catalog lustre-MDT0001 log_idx 24: done del_ost: cancelled 4 catalog entries config_log: lustre-client cancel catalog lustre-client log_idx 34: done cancel catalog lustre-client log_idx 33: done cancel catalog lustre-client log_idx 32: done del_ost: cancelled 3 catalog entries umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre umount lustre on /mnt/lustre..... Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. 
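The config-log records quoted throughout the llog_print and del_ost output above use a one-line YAML flow-mapping style ("- { index: 34, event: add_osc, ... }"). As a side note, a minimal Python sketch for splitting such a record into key/value pairs; the function name is ours, and this is not part of the Lustre test framework. A list of pairs is used instead of a dict because one record can legitimately repeat a key: in add_osc/add_mdc records both the catalog "index" and the target "index" appear.

```python
import re

def parse_llog_record(line):
    """Split one 'lctl llog_print' flow-style record, e.g.
    '- { index: 34, event: add_osc, device: lustre-clilov, ... }',
    into an ordered list of (key, value) pairs."""
    m = re.match(r'^- \{ (.*?)\s*\}$', line.strip())
    if not m:
        raise ValueError("not an llog record: %r" % line)
    pairs = []
    for field in m.group(1).split(", "):
        # A value may be empty, e.g. 'UUID:' in setup records.
        key, _, value = field.partition(":")
        pairs.append((key.strip(), value.strip()))
    return pairs
```

Applied to the add_osc record at index 34 above, this yields both ("index", "34") and ("index", "0") without one clobbering the other.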
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid 
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server fail_loc=0 PASS 123ah (77s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123ai: llog_print display all non skipped records ========================================================== 04:49:25 (1713430165) start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 
all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre oleg228-server: params: OBD_IOC_LLOG_PRINT failed: No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 - { index: 394, event: set_param, device: general, parameter: timeout, value: 129 } cleanup test 123ai timeout=20 timeout=20 PASS 123ai (58s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 123F: clear and reset all parameters using set_param -F ========================================================== 04:50:25 (1713430225) oleg228-server: rm: cannot remove '/tmp/f123F.conf-sanity.yaml': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Unmounting FS Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping client oleg228-client.virtnet /mnt/lustre opts: Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server Writeconf checking for existing Lustre data: found Read previous values: Target: lustre-MDT0000 
Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre=MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x105 (MDT MGS writeconf ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity checking for existing Lustre data: found Read previous values: Target: lustre-MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x1 (MDT ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre=MDT0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x101 (MDT writeconf ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity checking for existing Lustre data: found Read previous values: Target: lustre-OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x2 (OST ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x102 (OST writeconf ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 checking for existing Lustre data: found Read previous values: Target: lustre-OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x62 (OST first_time update ) Persistent mount opts: ,errors=remount-ro Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 Permanent disk data: Target: lustre=OST0001 Index: 1 Lustre FS: lustre Mount type: ldiskfs Flags: 0x162 (OST 
first_time update writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20
Remounting
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Setting configuration parameters
This option left for backward compatibility, please use 'lctl apply_yaml' instead
set_param: mdt.lustre-MDT0001.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity
set_param: mdt.lustre-MDT0000.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity
set_param: jobid_var=TESTNAME
umount lustre on /mnt/lustre.....
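The tunefs.lustre dumps above print each target's Flags word as hex plus decoded names (e.g. 0x105 -> "MDT MGS writeconf"). A minimal sketch of that decoding, assuming the LDD_F_* bit values from Lustre's lustre_disk.h (0x1 MDT, 0x2 OST, 0x4 MGS, 0x20 first_time, 0x40 update, 0x100 writeconf); `decode_ldd_flags` is an illustrative helper, not a Lustre utility:

```python
# Bit values assumed from Lustre's LDD_F_* constants; names match the
# decoded output tunefs.lustre prints in the log above.
LDD_FLAGS = {
    0x0001: "MDT",
    0x0002: "OST",
    0x0004: "MGS",
    0x0020: "first_time",
    0x0040: "update",
    0x0100: "writeconf",
}

def decode_ldd_flags(flags: int) -> str:
    """Render a Flags word the way the dumps show it, e.g. 0x62 -> 'OST first_time update'."""
    return " ".join(name for bit, name in sorted(LDD_FLAGS.items()) if flags & bit)
```

Under these assumed bit values, every Flags line in the dumps (0x5, 0x105, 0x62, 0x142, 0x122, ...) decodes to exactly the names printed next to it.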
Stopping client oleg228-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
unloading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing unload_modules_local
modules unloaded.
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
PASS 123F (72s)
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
error: get_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
== conf-sanity test 123G: clear and reset all parameters using apply_yaml ========================================================== 04:51:39 (1713430299)
start mds service on oleg228-server
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options:
'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
oleg228-server: rm: cannot remove '/tmp/f123G.conf-sanity.yaml': No such file or directory
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Unmounting FS
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping client oleg228-client.virtnet /mnt/lustre opts:
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
Writeconf
checking for existing Lustre data: found
Read previous values:
Target: lustre-MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target: lustre-MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

checking for existing Lustre data: found
Read previous values:
Target: lustre-OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags:
0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20

checking for existing Lustre data: found
Read previous values:
Target: lustre-OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x142 (OST update writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x142 (OST update writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20
Remounting
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE)
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Setting configuration parameters
conf_param: lustre-MDT0001.mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity
conf_param: lustre-MDT0000.mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity
set_param: mdt.lustre-MDT0001.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity
set_param: mdt.lustre-MDT0000.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity
set_param: jobid_var=TESTNAME
umount lustre on /mnt/lustre.....
Stopping client oleg228-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
unloading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing unload_modules_local
modules unloaded.
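Tests 123F/123G exercise configuration-log records like `- { index: 394, event: set_param, device: general, parameter: timeout, value: 129 }` printed earlier in the run. A rough sketch of pulling one such line apart for scripted checks, assuming the flat `key: value` flow style shown above; `parse_llog_record` is a hypothetical helper, not a Lustre utility:

```python
import re

# Matches each "key: value" pair inside the flow-style record; values end
# at the next comma or the closing brace.
RECORD_RE = re.compile(r"(\w+):\s*([^,}]+)")

def parse_llog_record(line: str) -> dict:
    """Turn one '- { k: v, ... }' record line into a plain dict of strings."""
    return {k: v.strip() for k, v in RECORD_RE.findall(line)}
```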
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
PASS 123G (91s)
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
error: get_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
== conf-sanity test 124: check failover after replace_nids ========================================================== 04:53:11 (1713430391)
SKIP: conf-sanity test_124 needs MDT failover setup
SKIP 124 (1s)
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
error: get_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
oleg228-server: error:
get_param: param_path 'debug_raw_pointers': No such file or directory
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
== conf-sanity test 125: check l_tunedisk only tunes OSTs and their slave devices ========================================================== 04:53:14 (1713430394)
Before: mgs /dev/mapper/mds1_flakey 511 2147483647
After: mgs /dev/mapper/mds1_flakey 511 2147483647
Before: ost1 /dev/mapper/ost1_flakey 16383 2147483647
oleg228-server: l_tunedisk: increased '/sys/devices/virtual/block/dm-2/queue/max_sectors_kb' from 16383 to 16384
After: ost1 /dev/mapper/ost1_flakey 16384 2147483647
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
PASS 125 (11s)
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
error: get_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
== conf-sanity test 126: mount in parallel shouldn't cause a crash ==========================================================
04:53:27 (1713430407)
umount lustre on /mnt/lustre.....
stop ost1 service on oleg228-server
stop mds service on oleg228-server
stop mds service on oleg228-server
LNET unconfigure error 22: (null)
unloading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing unload_modules_local
oleg228-server: LNET unconfigure error 22: (null)
modules unloaded.
oleg228-server: oleg228-server.virtnet: executing load_module ../libcfs/libcfs/libcfs
fail_loc=0x60d
oleg228-server: oleg228-server.virtnet: executing load_modules
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
clearing fail_loc on mds1
fail_loc=0
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
PASS 126 (33s)
debug_raw_pointers=0
debug_raw_pointers=0
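The recurring `debug_raw_pointers ... No such file or directory` errors between tests come from setting a parameter that this kernel build does not expose. A hedged sketch of how a harness might probe for the backing file before writing, to avoid that noise; `set_param_if_present` and the directory layout are assumptions for illustration, not Lustre code:

```python
from pathlib import Path

def set_param_if_present(param_dir: Path, name: str, value: str) -> bool:
    """Write `value` to the parameter file only when it exists.

    `param_dir` stands in for wherever the parameter files live on a given
    kernel (an assumed layout); returns False instead of erroring when the
    knob is absent, mirroring what a quieter test script might do.
    """
    p = param_dir / name
    if not p.exists():
        return False
    p.write_text(value)
    return True
```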
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 127: direct io overwrite on full ost ========================================================== 04:54:02 (1713430442)
umount lustre on /mnt/lustre.....
stop ost1 service on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
unloading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing unload_modules_local
modules unloaded.
start mds service on oleg228-server
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o
localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Stopping clients: /mnt/lustre (opts:)
pdsh@oleg228-client: no remote hosts specified
check osc.lustre-OST0000-osc-MDT0000.active
target updated after 0 sec (got 1)
check osc.lustre-OST0000-osc-MDT0001.active
target updated after 0 sec (got 1)
dd: error writing '/mnt/lustre/f127.conf-sanity': No space left on device
124+0 records in
123+0 records out
128974848 bytes (129 MB) copied, 3.09561 s, 41.7 MB/s
123+0 records in
123+0 records out
128974848 bytes (129 MB) copied, 2.89386 s, 44.6 MB/s
PASS 127 (51s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 128: Force using remote logs with --nolocallogs ========================================================== 04:54:55 (1713430495)
SKIP: conf-sanity test_128 need separate mgs device
SKIP 128 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 129: attempt to connect an OST with the same index should fail ========================================================== 04:54:57 (1713430497)
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping client oleg228-client.virtnet /mnt/lustre opts:
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey
/mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
Format ost1: /dev/mapper/ost1_flakey
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg228-server: mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: Address already in use
oleg228-server: The target service's index is already in use. (/dev/mapper/ost1_flakey)
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 98
oleg228-server: error: set_param: param_path 'seq/cli-lustre:OST0000-super/width': No such file or directory
oleg228-server: error: set_param: setting 'seq/cli-lustre:OST0000-super/width'='65536': No such file or directory
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
Start of /dev/mapper/ost1_flakey on ost1 failed 98
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg228-server: mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: Address already in use
oleg228-server: The target service's index is already in use.
(/dev/mapper/ost1_flakey)
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 98
oleg228-server: error: set_param: param_path 'seq/cli-lustre:OST0000-super/width': No such file or directory
oleg228-server: error: set_param: setting 'seq/cli-lustre:OST0000-super/width'='65536': No such file or directory
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
Start of /dev/mapper/ost1_flakey on ost1 failed 98
checking for existing Lustre data: found
Read previous values:
Target: lustre-OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x22 (OST first_time )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20

Permanent disk data:
Target: lustre=OST0000
Index: 0
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x122 (OST first_time writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20
Writing CONFIGS/mountdata
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
PASS 129 (50s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 130: re-register an MDT after writeconf ========================================================== 04:55:48 (1713430548)
Checking servers environments
Checking clients oleg228-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing
load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b6d0a000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b6d0a000.idle_timeout=debug
setting jobstats to procname_uid
Setting lustre.sys.jobid_var from disable to procname_uid
Waiting 90s for 'procname_uid'
Updated after 4s: want 'procname_uid' got 'procname_uid'
disable quota as required
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
checking for existing Lustre data: found
Read previous values:
Target: lustre-MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

Permanent disk data:
Target: lustre=MDT0001
Index: 1
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity
Writing CONFIGS/mountdata
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
PASS 130 (44s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 131: MDT backup restore with project ID ========================================================== 04:56:33
(1713430593)
oleg228-server: debugfs 1.46.2.wc5 (26-Mar-2022)
Checking servers environments
Checking clients oleg228-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: mount.lustre: according to /etc/mtab /dev/mapper/mds1_flakey is already mounted on /mnt/lustre-mds1
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 17
Start of /dev/mapper/mds1_flakey on mds1 failed 17
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: mount.lustre: according to /etc/mtab /dev/mapper/mds2_flakey is already mounted on /mnt/lustre-mds2
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 17
Start of /dev/mapper/mds2_flakey on mds2 failed 17
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg228-server: mount.lustre: according to /etc/mtab /dev/mapper/ost1_flakey is already mounted on /mnt/lustre-ost1
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 17
seq.cli-lustre-OST0000-super.width=65536
Start of /dev/mapper/ost1_flakey on ost1 failed 17
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
oleg228-server: mount.lustre: according to /etc/mtab /dev/mapper/ost2_flakey is already mounted on /mnt/lustre-ost2
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 17
seq.cli-lustre-OST0001-super.width=65536
Start of /dev/mapper/ost2_flakey on ost2 failed
17
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
mount.lustre: according to /etc/mtab oleg228-server@tcp:/lustre is already mounted on /mnt/lustre
Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b6d0a000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b6d0a000.idle_timeout=debug
disable quota as required
striped dir -i1 -c2 -H crush /mnt/lustre/d131.conf-sanity
total: 512 open/close in 0.82 seconds: 622.96 ops/second
striped dir -i1 -c2 -H crush /mnt/lustre/d131.conf-sanity.inherit
total: 128 open/close in 0.26 seconds: 495.85 ops/second
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:)
Stopping client oleg228-client.virtnet /mnt/lustre opts:
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server
file-level backup/restore on mds1:/dev/mapper/mds1_flakey
backup data
reformat new device
Format mds1: /dev/mapper/mds1_flakey
restore data
remove recovery logs
removed '/mnt/lustre-brpt/CATALOGS'
file-level backup/restore on mds2:/dev/mapper/mds2_flakey
backup data
reformat new device
Format mds2: /dev/mapper/mds2_flakey
restore data
remove recovery logs
removed '/mnt/lustre-brpt/CATALOGS'
Checking servers environments
Checking clients oleg228-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server:
oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff88012ffd3000.idle_timeout=debug osc.lustre-OST0001-osc-ffff88012ffd3000.idle_timeout=debug setting jobstats to procname_uid Setting lustre.sys.jobid_var from disable to procname_uid Waiting 90s for 'procname_uid' Updated after 2s: want 'procname_uid' got 'procname_uid' disable quota as required PASS 131 (111s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 132: hsm_actions processed after failover ========================================================== 04:58:25 (1713430705) Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping client oleg228-client.virtnet /mnt/lustre opts:-f Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey 
Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server checking for existing Lustre data: found Read previous values: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x5 (MDT MGS ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity Permanent disk data: Target: lustre-MDT0000 Index: 0 Lustre FS: lustre Mount type: ldiskfs Flags: 0x45 (MDT MGS update ) Persistent mount opts: user_xattr,errors=remount-ro Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity mdt.hsm_control=enabled Writing CONFIGS/mountdata start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server PASS 132 (61s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 133: stripe QOS: free space balance in a pool ========================================================== 04:59:28 (1713430768) SKIP: conf-sanity test_133 needs >= 4 OSTs SKIP 133 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 134: check_iam works without faults == 04:59:30 (1713430770) dd if=/dev/urandom of=/tmp/d134.conf-sanity/oi.16.61 bs=2 conv=notrunc count=1 seek=32 
Filesize 8192, blocks count 2 Root format LFIX,Idle blocks block number 0 keysize 16, recsize 8, ptrsize 4, indirect_levels 0 count 2, limit 203 key:00000000000000000000000000000000, ptr: 1 Block 1,FIX leaf,Leaf block, count 1, limit 170 count 1 NO ERRORS 0 debugfs 1.46.2.wc5 (26-Mar-2022) /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps Filesize 8192, blocks count 2 Root format LFIX,Idle blocks block number 0 keysize 16, recsize 8, ptrsize 4, indirect_levels 0 count 2, limit 203 key:00000000000000000000000000000000, ptr: 1 Block 1,FIX leaf,Leaf block, count 1, limit 170 count 1 NO ERRORS dd if=/dev/urandom of=/tmp/d134.conf-sanity/oi.16.62 bs=2 conv=notrunc count=1 seek=13 Filesize 8192, blocks count 2 Root format LFIX,Idle blocks block number 0 keysize 16, recsize 8, ptrsize 4, indirect_levels 0 count 2, limit 203 key:00000000000000000000000000000000, ptr: 1 Block 1,FIX leaf,Leaf block, count 1, limit 170 count 1 NO ERRORS 0 debugfs 1.46.2.wc5 (26-Mar-2022) /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps Filesize 8192, blocks count 2 Root format LFIX,Idle blocks block number 0 keysize 16, recsize 8, ptrsize 4, indirect_levels 0 count 2, limit 203 key:00000000000000000000000000000000, ptr: 1 Block 1,FIX leaf,Leaf block, count 1, limit 170 count 1 NO ERRORS dd if=/dev/urandom of=/tmp/d134.conf-sanity/oi.16.63 bs=2 conv=notrunc count=1 seek=15 Filesize 8192, blocks count 2 Root format LFIX,Idle blocks block number 0 keysize 16, recsize 8, ptrsize 4, indirect_levels 0 count 2, limit 203 key:00000000000000000000000000000000, ptr: 1 Block 1,FIX leaf,Leaf block, count 1, limit 170 count 1 NO ERRORS 0 PASS 134 (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 135: check the behavior when changelog is wrapped around ========================================================== 04:59:51 (1713430791) Stopping clients: oleg228-client.virtnet /mnt/lustre 
(opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the 
device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre oleg228-client: fail_loc=0x1312 oleg228-client: fail_val=5 oleg228-server: fail_loc=0x1312 oleg228-server: fail_val=5 striped dir -i0 -c1 -H fnv_1a_64 /mnt/lustre/d135.conf-sanity mdd.lustre-MDT0000.changelog_mask=ALL mdd.lustre-MDT0001.changelog_mask=ALL mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl1 cl1' Wrap around changelog catalog total: 4500 open/close in 8.69 seconds: 517.77 ops/second lustre-MDT0000: clear the changelog for cl1 to record #12998 total: 4500 /unlink in 4.92 seconds: 914.64 ops/second lustre-MDT0000: clear the changelog for cl1 to record #25998 total: 4500 open/close in 8.88 seconds: 506.61 ops/second total: 4500 /unlink in 4.67 seconds: 963.28 ops/second lustre-MDT0000: clear the changelog for cl1 to record #38998 total: 4500 open/close in 8.66 seconds: 519.40 ops/second lustre-MDT0000: clear the changelog for cl1 to record #51998 total: 4500 /unlink in 4.66 seconds: 965.65 ops/second kill changelog reader /home/green/git/lustre-release/lustre/tests/test-framework.sh: line 10632: 7586 Terminated coproc COPROC $LFS changelog --follow $service (wd: ~) lustre-MDT0001: clear the changelog for cl1 of all records lustre-MDT0001: Deregistered changelog user #1 lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0000: Deregistered changelog user #1 Cleanup test_135 umount lustre on /mnt/lustre..... 
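The test 135 run above reports its throughput in lines like "total: 4500 open/close in 8.69 seconds: 517.77 ops/second". A minimal sketch of how such a summary line can be parsed and its rate recomputed; the field positions are assumed from the log format above, and the small difference from the logged 517.77 comes from the framework timing with more precision than the two decimals it prints:

```shell
#!/bin/sh
# Recompute the ops/second rate from a createmany-style summary line.
# Fields (assumed from the log): $2 = operation count, $5 = elapsed seconds.
line="total: 4500 open/close in 8.69 seconds: 517.77 ops/second"
echo "$line" | awk '{printf "%d ops in %ss -> %.2f ops/second\n", $2, $5, $2/$5}'
# -> 4500 ops in 8.69s -> 517.84 ops/second
```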
Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. PASS 135 (110s) error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='0': No such file or directory oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory error: get_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: param_path 'debug_raw_pointers': No such file or directory error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory == conf-sanity test 136: don't panic with bad obdecho setup ========================================================== 05:01:43 (1713430903) Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 
'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... 
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre oleg228-server: error: setup: Invalid argument pdsh@oleg228-client: oleg228-server: ssh exited with exit code 22 oleg228-server: error: test_mkdir: No such file or directory pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2 Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping client oleg228-client.virtnet /mnt/lustre opts: Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: 
ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server PASS 136 (105s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 140: remove_updatelog script actions ========================================================== 05:03:30 (1713431010) pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all 
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre striped dir -i0 -c2 -H crush2 /mnt/lustre/d140.conf-sanity stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server Dry run was requested, no changes will be applied Scan update_log at '/mnt/lustre-mds2': Selected MDTS: 0 1 Processing MDT0 llog catalog [0x240000401:0x1:0x0] ... rm -f /mnt/lustre-mds2/update_log_dir/[0x240000bd0:0x2:0x0] rm -f /mnt/lustre-mds2/update_log_dir/[0x240000bd0:0x3:0x0] > /mnt/lustre-mds2/update_log_dir/[0x240000401:0x1:0x0] Processing MDT1 llog catalog [0x240000400:0x1:0x0] ... remove_updatelog: /mnt/lustre-mds2/update_log_dir/[0x240000400:0x1:0x0] is too small. 
> /mnt/lustre-mds2/update_log_dir/[0x240000400:0x1:0x0] Dry run was requested, no changes will be applied Scan update_log at '/mnt/lustre-mds2': Selected MDTS: 1 0 Processing MDT1 llog catalog [0x240000400:0x1:0x0] ... remove_updatelog: /mnt/lustre-mds2/update_log_dir/[0x240000400:0x1:0x0] is too small. > /mnt/lustre-mds2/update_log_dir/[0x240000400:0x1:0x0] Processing MDT0 llog catalog [0x240000401:0x1:0x0] ... rm -f /mnt/lustre-mds2/update_log_dir/[0x240000bd0:0x2:0x0] rm -f /mnt/lustre-mds2/update_log_dir/[0x240000bd0:0x3:0x0] > /mnt/lustre-mds2/update_log_dir/[0x240000401:0x1:0x0] Scan update_log at '/mnt/lustre-mds2': Selected MDTS: 0 Processing MDT0 llog catalog [0x240000401:0x1:0x0] ... rm -f /mnt/lustre-mds2/update_log_dir/[0x240000bd0:0x2:0x0] rm -f /mnt/lustre-mds2/update_log_dir/[0x240000bd0:0x3:0x0] > /mnt/lustre-mds2/update_log_dir/[0x240000401:0x1:0x0] start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 2 sec oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 4 sec Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:) Stopping client oleg228-client.virtnet /mnt/lustre opts: Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server 
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f) Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f) oleg228-server: oleg228-server.virtnet: executing set_hostid Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions Formatting mgs, mds, osts Format mds1: /dev/mapper/mds1_flakey Format mds2: /dev/mapper/mds2_flakey Format ost1: /dev/mapper/ost1_flakey Format ost2: /dev/mapper/ost2_flakey start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds1_flakey Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Commit the device label on /dev/mapper/mds2_flakey Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh 
exited with exit code 1 Commit the device label on /dev/mapper/ost1_flakey Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 3 sec oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50 oleg228-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server PASS 140 (228s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 150: test setting max_cached_mb to a % ========================================================== 05:07:20 (1713431240) start mds service on oleg228-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0000 start mds service on oleg228-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-MDT0001 oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid start ost1 service on oleg228-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey 
/mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1 Started lustre-OST0000 oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid mount lustre on /mnt/lustre..... Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre llite.lustre-ffff88012ffd2800.max_cached_mb=100% llite.lustre-ffff88012ffd2800.max_cached_mb= users: 5 max_cached_mb: 3730 used_mb: 0 unused_mb: 3730 reclaim_count: 0 max_read_ahead_mb: 256 used_read_ahead_mb: 0 total ram mb: 3730 llite.lustre-ffff88012ffd2800.max_cached_mb=50% llite.lustre-ffff88012ffd2800.max_cached_mb= users: 5 max_cached_mb: 1865 used_mb: 0 unused_mb: 1865 reclaim_count: 0 max_read_ahead_mb: 256 used_read_ahead_mb: 0 error: set_param: setting /sys/kernel/debug/lustre/llite/lustre-ffff88012ffd2800/max_cached_mb=105%: Numerical result out of range error: set_param: setting 'llite/*/max_cached_mb'='105%': Numerical result out of range llite.lustre-ffff88012ffd2800.max_cached_mb=0% llite.lustre-ffff88012ffd2800.max_cached_mb= users: 5 max_cached_mb: 64 used_mb: 0 unused_mb: 64 reclaim_count: 0 max_read_ahead_mb: 256 used_read_ahead_mb: 0 PASS 150 (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == conf-sanity test 151: damaged local config doesn't prevent mounting ========================================================== 05:07:41 (1713431261) umount lustre on /mnt/lustre..... 
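Test 150 above sets llite max_cached_mb as a percentage of client RAM: with "total ram mb: 3730" the log shows 100% -> 3730 MB, 50% -> 1865 MB, 105% rejected with ERANGE, and 0% landing on 64 MB. A minimal sketch of that percentage-to-megabytes conversion; the 64 MB lower bound is assumed from the 0% result in this log, not taken from Lustre documentation:

```shell
#!/bin/sh
# Convert a max_cached_mb percentage into megabytes, mirroring the values
# test 150 reports for a node with 3730 MB of RAM.
pct_to_mb() {
    total_mb=$1 pct=$2
    mb=$(( total_mb * pct / 100 ))
    # Clamp to the 64 MB floor observed in the log for the 0% case.
    [ "$mb" -lt 64 ] && mb=64
    echo "$mb"
}
pct_to_mb 3730 50    # -> 1865
pct_to_mb 3730 0     # -> 64
```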
Stopping client oleg228-client.virtnet /mnt/lustre (opts:) stop ost1 service on oleg228-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server stop mds service on oleg228-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server unloading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing unload_modules_local modules unloaded. Damage ost1 local config log oleg228-server: debugfs 1.46.2.wc5 (26-Mar-2022) start ost1 service on oleg228-server Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg228-server' oleg228-server: oleg228-server.virtnet: executing load_modules_local oleg228-server: Loading modules from /home/green/git/lustre-release/lustre oleg228-server: detected 4 online CPUs by sysfs oleg228-server: Force libcfs to create 2 CPU partitions oleg228-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3' Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 oleg228-server: mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: No such file or directory oleg228-server: Is the MGS specification correct? oleg228-server: Is the filesystem name correct? oleg228-server: If upgrading, is the copied client log valid? 
(see upgrade docs)
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
oleg228-server: error: set_param: param_path 'seq/cli-lustre-OST0000-super/width': No such file or directory
oleg228-server: error: set_param: setting 'seq/cli-lustre-OST0000-super/width'='65536': No such file or directory
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
Start of /dev/mapper/ost1_flakey on ost1 failed 2
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
oleg228-server: oleg228-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg228-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 1 sec
oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50
oleg228-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
stop ost1 service on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
PASS 151 (164s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 152: seq allocation error in OSP ===== 05:10:27 (1713431427)
Checking servers environments
Checking clients oleg228-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost2_flakey
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Starting client oleg228-client.virtnet: -o user_xattr,flock oleg228-server@tcp:/lustre /mnt/lustre
Started clients oleg228-client.virtnet: 192.168.202.128@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012a61e000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012a61e000.idle_timeout=debug
setting jobstats to procname_uid
Setting lustre.sys.jobid_var from disable to procname_uid
Waiting 90s for 'procname_uid'
Updated after 6s: want 'procname_uid' got 'procname_uid'
disable quota as required
striped dir -i1 -c1 -H fnv_1a_64 /mnt/lustre/d152.conf-sanity
ADD OST3

   Permanent disk data:
Target:     lustre:OST0003
Index:      3
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62
            (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.202.128@tcp sys.timeout=20

formatting backing filesystem ldiskfs on /dev/loop0
	target name	lustre:OST0003
	kilobytes	200000
	options		-b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F
mkfs_cmd = mke2fs -j -b 4096 -L lustre:OST0003 -b 4096 -I 512 -q -O uninit_bg,extents,dir_nlink,quota,project,huge_file,large_dir,^fast_commit,flex_bg -G 256 -E lazy_itable_init,resize="4290772992",lazy_journal_init,packed_meta_blocks -F /dev/loop0 200000k
Writing CONFIGS/mountdata
fail_loc=0x80002109
fail_val=2
START OST3
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Starting ost3: -o localrecov /dev/mapper/ost3_flakey /mnt/lustre-ost3
seq.cli-lustre-OST0003-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
STOP OST3
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /tmp/lustre-ost3
Stopping /mnt/lustre-ost3 (opts:) on oleg228-server
4107
Started lustre-OST0003
fail_loc=0
START OST3 again
Starting ost3: -o localrecov /dev/mapper/ost3_flakey /mnt/lustre-ost3
seq.cli-lustre-OST0003-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0003
/mnt/lustre/d152.conf-sanity/f152.conf-sanity-2
lmm_magic:         0x0BD10BD0
lmm_seq:           0x240000bd0
lmm_object_id:     0x3
lmm_fid:           [0x240000bd0:0x3:0x0]
lmm_stripe_count:  3
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 3
	obdidx		 objid		 objid		 group
	     3	             2	          0x2	  0x300000bd0
	     0	            35	         0x23	  0x280000400
	     1	             3	          0x3	  0x2c0000400

Stopping /mnt/lustre-ost3 (opts:) on oleg228-server
PASS 152 (59s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== conf-sanity test 153a: bypass invalid NIDs quickly ==== 05:11:28 (1713431488)
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f)
Stopping client oleg228-client.virtnet /mnt/lustre opts:-f
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg228-server
oleg228-server: oleg228-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds1_flakey
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/mds2_flakey
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost1_flakey
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg228-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg228-server: oleg228-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50
oleg228-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
stop ost1 service on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
start mds service on oleg228-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg228-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg228-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg228-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg228-server: oleg228-server.virtnet: executing set_default_debug -1 all
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 1
Started lustre-OST0000
oleg228-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
waiting for mount ...
"192.168.202.128@tcp": { connects: 1, replied: 1, uptodate: true, sec_ago: 5 }
"192.168.252.112@tcp": { connects: 0, replied: 0, uptodate: false, sec_ago: never }
"10.252.252.113@tcp": { connects: 0, replied: 0, uptodate: false, sec_ago: never }
"192.168.202.128@tcp": { connects: 0, replied: 0, uptodate: false, sec_ago: never }
setup single mount lustre success
umount lustre on /mnt/lustre.....
stop ost1 service on oleg228-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg228-server
stop mds service on oleg228-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg228-server
unloading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing unload_modules_local
modules unloaded.
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
pdsh@oleg228-client: oleg228-client: ssh exited with exit code 2
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
PASS 153a (214s)
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
error: get_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
oleg228-server: error: get_param: param_path 'debug_raw_pointers': No such file or directory
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg228-server: error: set_param: setting 'debug_raw_pointers'='Y': No such file or directory
== conf-sanity test 802a: simulate readonly device ======= 05:15:04 (1713431704)
SKIP: conf-sanity test_802a ZFS specific test
SKIP 802a (1s)
error: set_param: param_path 'debug_raw_pointers': No such file or directory
error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
oleg228-server: error: set_param: param_path 'debug_raw_pointers': No such file or directory
oleg228-server: error: set_param: setting 'debug_raw_pointers'='0': No such file or directory
Stopping clients: oleg228-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg228-client.virtnet /mnt/lustre2 (opts:-f)
pdsh@oleg228-client: oleg228-server: ssh exited with exit code 2
oleg228-server: oleg228-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg228-server'
oleg228-server: oleg228-server.virtnet: executing load_modules_local
oleg228-server: Loading modules from /home/green/git/lustre-release/lustre
oleg228-server: detected 4 online CPUs by sysfs
oleg228-server: Force libcfs to create 2 CPU partitions
oleg228-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg228-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /dev/mapper/mds1_flakey
Format mds2: /dev/mapper/mds2_flakey
Format ost1: /dev/mapper/ost1_flakey
Format ost2: /dev/mapper/ost2_flakey
== conf-sanity test complete, duration 6715 sec ========== 05:15:25 (1713431725)
=== conf-sanity: start cleanup 05:15:26 (1713431726) ===
=== conf-sanity: finish cleanup 05:15:26 (1713431726) ===