== recovery-small test 28: handle error adding new clients (bug 6086) ========================================================== 19:16:07 (1713482167)
ldlm.namespaces.MGC192.168.204.146@tcp.early_lock_cancel=0
ldlm.namespaces.lustre-MDT0000-mdc-ffff8801368d1000.early_lock_cancel=0
ldlm.namespaces.lustre-OST0000-osc-ffff8801368d1000.early_lock_cancel=0
ldlm.namespaces.lustre-OST0001-osc-ffff8801368d1000.early_lock_cancel=0
fail_loc=0x80000305
fail_loc=0
fail_val=0
ldlm.namespaces.MGC192.168.204.146@tcp.early_lock_cancel=1
ldlm.namespaces.lustre-MDT0000-mdc-ffff8801368d1000.early_lock_cancel=1
ldlm.namespaces.lustre-OST0000-osc-ffff8801368d1000.early_lock_cancel=1
ldlm.namespaces.lustre-OST0001-osc-ffff8801368d1000.early_lock_cancel=1
fail_loc=0x8000012f
Failing mds1 on oleg446-server
Stopping /mnt/lustre-mds1 (opts:) on oleg446-server
19:16:27 (1713482187) shut down
Failover mds1 to oleg446-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg446-server: oleg446-server.virtnet: executing set_default_debug -1 all
pdsh@oleg446-client: oleg446-server: ssh exited with exit code 1
Started lustre-MDT0000
19:16:40 (1713482200) targets are mounted
19:16:40 (1713482200) facet_failover done
oleg446-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
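
Note: the bare "param=value" echoes above are what lctl set_param prints when the test toggles client-side tunables and arms fault-injection points before failing over mds1. A minimal sketch of the equivalent commands (assuming direct lctl calls rather than the test suite's own wrapper functions, and using the fail_loc values shown in this log) would be:

    # disable early lock cancellation on every client namespace (sketch)
    lctl set_param ldlm.namespaces.*.early_lock_cancel=0
    # arm the injection point for this test, then clear it afterwards
    lctl set_param fail_loc=0x80000305
    lctl set_param fail_loc=0
    lctl set_param fail_val=0
    # restore early lock cancellation
    lctl set_param ldlm.namespaces.*.early_lock_cancel=1
    # arm the next injection point before the mds1 failover
    lctl set_param fail_loc=0x8000012f
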