== conf-sanity test 57b: initial registration from servicenode should not fail ========================================================== 16:24:29 (1713299069)
oleg424-server: oleg424-server.virtnet: executing load_modules_local
oleg424-server: Loading modules from /home/green/git/lustre-release/lustre
oleg424-server: detected 4 online CPUs by sysfs
oleg424-server: Force libcfs to create 2 CPU partitions
oleg424-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg424-server: quota/lquota options: 'hash_lqs_cur_bits=3'
oleg424-server: 
oleg424-server: tunefs.lustre FATAL: Device lustre-mdt1/mdt1 has not been formatted with mkfs.lustre
oleg424-server: tunefs.lustre: exiting with 19 (No such device)
pdsh@oleg424-client: oleg424-server: ssh exited with exit code 19
checking for existing Lustre data: not found
oleg424-server: 
oleg424-server: tunefs.lustre FATAL: Device lustre-ost1/ost1 has not been formatted with mkfs.lustre
oleg424-server: tunefs.lustre: exiting with 19 (No such device)
pdsh@oleg424-client: oleg424-server: ssh exited with exit code 19
checking for existing Lustre data: not found
oleg424-server: 
oleg424-server: tunefs.lustre FATAL: Device lustre-ost2/ost2 has not been formatted with mkfs.lustre
oleg424-server: tunefs.lustre: exiting with 19 (No such device)
pdsh@oleg424-client: oleg424-server: ssh exited with exit code 19
checking for existing Lustre data: not found
tunefs failed, reformatting instead
Stopping clients: oleg424-client.virtnet /mnt/lustre (opts:-f)
Stopping clients: oleg424-client.virtnet /mnt/lustre2 (opts:-f)
oleg424-server: oleg424-server.virtnet: executing set_hostid
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg424-server'
oleg424-server: oleg424-server.virtnet: executing load_modules_local
oleg424-server: Loading modules from /home/green/git/lustre-release/lustre
oleg424-server: detected 4 online CPUs by sysfs
oleg424-server: Force libcfs to create 2 CPU partitions
oleg424-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
Formatting mgs, mds, osts
Format mds1: lustre-mdt1/mdt1
Format ost1: lustre-ost1/ost1
Format ost2: lustre-ost2/ost2
start mds service on oleg424-server
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg424-server: oleg424-server.virtnet: executing set_default_debug -1 all
pdsh@oleg424-client: oleg424-server: ssh exited with exit code 1
Commit the device label on lustre-mdt1/mdt1
Started lustre-MDT0000
oleg424-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
start ost1 service on oleg424-server
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg424-server: oleg424-server.virtnet: executing set_default_debug -1 all
pdsh@oleg424-client: oleg424-server: ssh exited with exit code 1
Commit the device label on lustre-ost1/ost1
Started lustre-OST0000
oleg424-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg424-server: oleg424-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg424-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
stop ost1 service on oleg424-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg424-server
stop mds service on oleg424-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg424-server
checking for existing Lustre data: found

Read previous values:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: zfs
Flags:      0x2
            (OST )
Persistent mount opts: 
Parameters: sys.timeout=20 autodegrade=on mgsnode=192.168.204.124@tcp

Permanent disk data:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: zfs
Flags:      0x1042
            (OST update no_primnode )
Persistent mount opts: 
Parameters: sys.timeout=20 autodegrade=on mgsnode=192.168.204.124@tcp failover.node=192.168.204.124@tcp autodegrade=on

Writing lustre-ost1/ost1 properties
  lustre:sys.timeout=20
  lustre:autodegrade=on
  lustre:mgsnode=192.168.204.124@tcp
  lustre:failover.node=192.168.204.124@tcp
  lustre:autodegrade=on
  lustre:version=1
  lustre:flags=4162
  lustre:index=0
  lustre:fsname=lustre
  lustre:svname=lustre-OST0000
start mds service on oleg424-server
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg424-server: oleg424-server.virtnet: executing set_default_debug -1 all
pdsh@oleg424-client: oleg424-server: ssh exited with exit code 1
Started lustre-MDT0000
oleg424-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
start ost1 service on oleg424-server
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg424-server: oleg424-server.virtnet: executing set_default_debug -1 all
pdsh@oleg424-client: oleg424-server: ssh exited with exit code 1
Started lustre-OST0000
oleg424-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
umount lustre on /mnt/lustre.....
stop ost1 service on oleg424-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg424-server
stop mds service on oleg424-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg424-server
unloading modules on: 'oleg424-server'
oleg424-server: oleg424-server.virtnet: executing unload_modules_local
modules unloaded.
pdsh@oleg424-client: oleg424-client: ssh exited with exit code 2
pdsh@oleg424-client: oleg424-server: ssh exited with exit code 2
pdsh@oleg424-client: oleg424-client: ssh exited with exit code 2
pdsh@oleg424-client: oleg424-server: ssh exited with exit code 2
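
The probe-then-reformat decision visible in the log ("checking for existing Lustre data: not found" three times, then "tunefs failed, reformatting instead") can be sketched as follows. This is a minimal, hypothetical reconstruction of the harness logic, not the actual conf-sanity code: `probe_target` is a stub standing in for a `tunefs.lustre` dry-run over ssh, hard-coded to return exit code 19 (ENODEV, "No such device") just as in this run.

```shell
#!/bin/sh
# Hypothetical sketch: probe each target for existing Lustre data;
# if every probe fails the way tunefs.lustre did here (exit 19,
# "has not been formatted with mkfs.lustre"), fall back to reformat.
probe_target() {
    # Stub for the real remote tunefs.lustre probe. In this run all
    # three targets were unformatted, so we always return 19.
    return 19
}

needs_format=0
for dev in lustre-mdt1/mdt1 lustre-ost1/ost1 lustre-ost2/ost2; do
    if probe_target "$dev"; then
        echo "checking for existing Lustre data: found"
    else
        echo "checking for existing Lustre data: not found"
        needs_format=1
    fi
done

if [ "$needs_format" -eq 1 ]; then
    echo "tunefs failed, reformatting instead"
fi
```

The second probe later in the log reports "checking for existing Lustre data: found" and prints the target's previous values instead, which corresponds to the stub returning 0.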