== recovery-small test 28: handle error adding new clients (bug 6086) ========================================================== 07:12:45 (1713438765)
ldlm.namespaces.MGC192.168.203.122@tcp.early_lock_cancel=0
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800a9b90000.early_lock_cancel=0
ldlm.namespaces.lustre-OST0000-osc-ffff8800a9b90000.early_lock_cancel=0
ldlm.namespaces.lustre-OST0001-osc-ffff8800a9b90000.early_lock_cancel=0
fail_loc=0x80000305
fail_loc=0
fail_val=0
ldlm.namespaces.MGC192.168.203.122@tcp.early_lock_cancel=1
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800a9b90000.early_lock_cancel=1
ldlm.namespaces.lustre-OST0000-osc-ffff8800a9b90000.early_lock_cancel=1
ldlm.namespaces.lustre-OST0001-osc-ffff8800a9b90000.early_lock_cancel=1
fail_loc=0x8000012f
Failing mds1 on oleg322-server
Stopping /mnt/lustre-mds1 (opts:) on oleg322-server
07:13:10 (1713438790) shut down
Failover mds1 to oleg322-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg322-server: oleg322-server.virtnet: executing set_default_debug -1 all
pdsh@oleg322-client: oleg322-server: ssh exited with exit code 1
Started lustre-MDT0000
07:13:22 (1713438802) targets are mounted
07:13:22 (1713438802) facet_failover done
oleg322-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec