== recovery-small test 28: handle error adding new clients (bug 6086) ========================================================== 05:05:04 (1713431104)
ldlm.namespaces.MGC192.168.204.104@tcp.early_lock_cancel=0
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800a8859000.early_lock_cancel=0
ldlm.namespaces.lustre-OST0000-osc-ffff8800a8859000.early_lock_cancel=0
ldlm.namespaces.lustre-OST0001-osc-ffff8800a8859000.early_lock_cancel=0
fail_loc=0x80000305
fail_loc=0
fail_val=0
ldlm.namespaces.MGC192.168.204.104@tcp.early_lock_cancel=1
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800a8859000.early_lock_cancel=1
ldlm.namespaces.lustre-OST0000-osc-ffff8800a8859000.early_lock_cancel=1
ldlm.namespaces.lustre-OST0001-osc-ffff8800a8859000.early_lock_cancel=1
fail_loc=0x8000012f
Failing mds1 on oleg404-server
Stopping /mnt/lustre-mds1 (opts:) on oleg404-server
05:05:29 (1713431129) shut down
Failover mds1 to oleg404-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg404-server: oleg404-server.virtnet: executing set_default_debug -1 all
pdsh@oleg404-client: oleg404-server: ssh exited with exit code 1
Started lustre-MDT0000
05:05:41 (1713431141) targets are mounted
05:05:41 (1713431141) facet_failover done
oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
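For reference, the bare parameter lines above (early_lock_cancel=..., fail_loc=..., fail_val=...) are the values echoed back by lctl set_param as the test framework applies them. A minimal hand-run sketch of that setup/teardown sequence is below; the wildcarded namespace pattern is an illustration (the framework sets each namespace explicitly), and the meaning of the specific fail_loc codes is not spelled out in this log:

  # Disable early lock cancellation on all LDLM namespaces.
  lctl set_param ldlm.namespaces.*.early_lock_cancel=0

  # Arm a one-shot fault-injection point; the 0x80000000 bit makes it fire once.
  lctl set_param fail_loc=0x80000305

  # ... exercise the failing path under test, then clear the injection ...
  lctl set_param fail_loc=0 fail_val=0

  # Restore early lock cancellation.
  lctl set_param ldlm.namespaces.*.early_lock_cancel=1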