== conf-sanity test 57a: initial registration from failnode should fail (should return errs) ========================================================== 06:42:24 (1713523344)
oleg342-server: oleg342-server.virtnet: executing load_modules_local
oleg342-server: Loading modules from /home/green/git/lustre-release/lustre
oleg342-server: detected 4 online CPUs by sysfs
oleg342-server: Force libcfs to create 2 CPU partitions
oleg342-server: 
oleg342-server: tunefs.lustre FATAL: Device lustre-mdt1/mdt1 has not been formatted with mkfs.lustre
oleg342-server: tunefs.lustre: exiting with 19 (No such device)
pdsh@oleg342-client: oleg342-server: ssh exited with exit code 19
checking for existing Lustre data: not found
oleg342-server: 
oleg342-server: tunefs.lustre FATAL: Device lustre-ost1/ost1 has not been formatted with mkfs.lustre
oleg342-server: tunefs.lustre: exiting with 19 (No such device)
pdsh@oleg342-client: oleg342-server: ssh exited with exit code 19
checking for existing Lustre data: not found
oleg342-server: 
oleg342-server: tunefs.lustre FATAL: Device lustre-ost2/ost2 has not been formatted with mkfs.lustre
oleg342-server: tunefs.lustre: exiting with 19 (No such device)
pdsh@oleg342-client: oleg342-server: ssh exited with exit code 19
checking for existing Lustre data: not found
tunefs failed, reformatting instead
Stopping clients: oleg342-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg342-client.virtnet /mnt/lustre2 (opts:-f)
oleg342-server: oleg342-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg342-server'
oleg342-server: oleg342-server.virtnet: executing load_modules_local
oleg342-server: Loading modules from /home/green/git/lustre-release/lustre
oleg342-server: detected 4 online CPUs by sysfs
oleg342-server: Force libcfs to create 2 CPU partitions
Formatting mgs, mds, osts
Format mds1: lustre-mdt1/mdt1
Format ost1: lustre-ost1/ost1
Format ost2: lustre-ost2/ost2
start mds service on oleg342-server
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg342-server: oleg342-server.virtnet: executing set_default_debug -1 all
pdsh@oleg342-client: oleg342-server: ssh exited with exit code 1
Commit the device label on lustre-mdt1/mdt1
Started lustre-MDT0000
oleg342-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
start ost1 service on oleg342-server
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg342-server: oleg342-server.virtnet: executing set_default_debug -1 all
pdsh@oleg342-client: oleg342-server: ssh exited with exit code 1
Commit the device label on lustre-ost1/ost1
Started lustre-OST0000
oleg342-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg342-server: oleg342-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg342-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
stop ost1 service on oleg342-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg342-server
stop mds service on oleg342-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg342-server
checking for existing Lustre data: found

   Read previous values:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: zfs
Flags:      0x2
            (OST )
Persistent mount opts: 
Parameters: mgsnode=192.168.203.142@tcp autodegrade=on sys.timeout=20

   Permanent disk data:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: zfs
Flags:      0x42
            (OST update )
Persistent mount opts: 
Parameters: mgsnode=192.168.203.142@tcp autodegrade=on sys.timeout=20 failover.node=192.168.203.142@tcp autodegrade=on

Writing lustre-ost1/ost1 properties
  lustre:mgsnode=192.168.203.142@tcp
  lustre:autodegrade=on
  lustre:sys.timeout=20
  lustre:failover.node=192.168.203.142@tcp
  lustre:autodegrade=on
  lustre:version=1
  lustre:flags=66
  lustre:index=0
  lustre:fsname=lustre
  lustre:svname=lustre-OST0000
start mds service on oleg342-server
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg342-server: oleg342-server.virtnet: executing set_default_debug -1 all
pdsh@oleg342-client: oleg342-server: ssh exited with exit code 1
Started lustre-MDT0000
oleg342-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
start ost1 service on oleg342-server
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
oleg342-server: mount.lustre: mount lustre-ost1/ost1 at /mnt/lustre-ost1 failed: Cannot assign requested address
pdsh@oleg342-client: oleg342-server: ssh exited with exit code 99
oleg342-server: error: set_param: param_path 'seq/cli-lustre-OST0000-super/width': No such file or directory
oleg342-server: error: set_param: setting 'seq/cli-lustre-OST0000-super/width'='65536': No such file or directory
pdsh@oleg342-client: oleg342-server: ssh exited with exit code 2
Start of lustre-ost1/ost1 on ost1 failed 99
umount lustre on /mnt/lustre.....
stop ost1 service on oleg342-server
stop mds service on oleg342-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg342-server
unloading modules on: 'oleg342-server'
oleg342-server: oleg342-server.virtnet: executing unload_modules_local
modules unloaded.
pdsh@oleg342-client: oleg342-client: ssh exited with exit code 2
pdsh@oleg342-client: oleg342-server: ssh exited with exit code 2
pdsh@oleg342-client: oleg342-client: ssh exited with exit code 2
pdsh@oleg342-client: oleg342-server: ssh exited with exit code 2