== conf-sanity test 24b: Multiple MGSs on a single node (should return err) ========================================================== 03:52:17 (1713426737)
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1

Permanent disk data:
Target:     lustre2:MDT0000
Index:      0
Lustre FS:  lustre2
Mount type: zfs
Flags:      0x65
            (MDT MGS first_time update )
Persistent mount opts:
Parameters: mgsnode=192.168.203.106@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

mkfs_cmd = zpool create -f -O canmount=off lustre-mdt1_2 /tmp/lustre-mdt1_2
mkfs_cmd = zfs create -o canmount=off -o quota=409600000 lustre-mdt1_2/mdt1_2 xattr=sa dnodesize=auto
Writing lustre-mdt1_2/mdt1_2 properties
  lustre:mgsnode=192.168.203.106@tcp
  lustre:sys.timeout=20
  lustre:mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity
  lustre:version=1
  lustre:flags=101
  lustre:index=0
  lustre:fsname=lustre2
  lustre:svname=lustre2:MDT0000
start mds service on oleg306-server
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg306-server'
oleg306-server: oleg306-server.virtnet: executing load_modules_local
oleg306-server: Loading modules from /home/green/git/lustre-release/lustre
oleg306-server: detected 4 online CPUs by sysfs
oleg306-server: Force libcfs to create 2 CPU partitions
oleg306-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
oleg306-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg306-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Starting mds1:   -o localrecov  lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Started lustre-MDT0000
oleg306-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
start ost1 service on oleg306-server
Starting ost1:   -o localrecov  lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg306-server: oleg306-server.virtnet: executing set_default_debug -1 all
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 1
Started lustre-OST0000
oleg306-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
mount lustre on /mnt/lustre.....
Starting client: oleg306-client.virtnet:  -o user_xattr,flock oleg306-server@tcp:/lustre /mnt/lustre
Starting fs2mds:   -o localrecov  lustre-mdt1_2/mdt1_2 /mnt/lustre-fs2mds
oleg306-server: mount.lustre: mount lustre-mdt1_2/mdt1_2 at /mnt/lustre-fs2mds failed: Operation already in progress
oleg306-server: The target service is already running. (lustre-mdt1_2/mdt1_2)
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 114
Start of lustre-mdt1_2/mdt1_2 on fs2mds failed 114
umount lustre on /mnt/lustre.....
Stopping client oleg306-client.virtnet /mnt/lustre (opts:)
stop ost1 service on oleg306-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg306-server
stop mds service on oleg306-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg306-server
unloading modules on: 'oleg306-server'
oleg306-server: oleg306-server.virtnet: executing unload_modules_local
modules unloaded.
pdsh@oleg306-client: oleg306-client: ssh exited with exit code 2
pdsh@oleg306-client: oleg306-server: ssh exited with exit code 2
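
For context, a minimal sketch of the check this test exercises, assuming an MGS is already running on the node for the "lustre" filesystem; the pool/dataset names, mount point, and stand-alone commands below are illustrative assumptions, not the test-framework helpers used in the log above:

# Format a second target that also carries the MGS flag (names are illustrative).
mkfs.lustre --mgs --mdt --fsname=lustre2 --index=0 \
    --backfstype=zfs lustre-mdt1_2/mdt1_2 /tmp/lustre-mdt1_2

# Attempting to mount it while another MGS is active on this node should fail
# with "Operation already in progress" (EALREADY, exit code 114).
mount -t lustre lustre-mdt1_2/mdt1_2 /mnt/lustre-fs2mds
rc=$?
if [ "$rc" -eq 114 ]; then
    echo "second MGS correctly refused (rc=114)"
else
    echo "unexpected result: rc=$rc"
fi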