== conf-sanity test 57a: initial registration from failnode should fail (should return errs) ========================================================== 05:42:41 (1713346961)
oleg315-server: oleg315-server.virtnet: executing load_modules_local
oleg315-server: Loading modules from /home/green/git/lustre-release/lustre
oleg315-server: detected 4 online CPUs by sysfs
oleg315-server: Force libcfs to create 2 CPU partitions
oleg315-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
oleg315-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
checking for existing Lustre data: found

   Read previous values:
Target:     lustre-MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x5 (MDT MGS )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

   Permanent disk data:
Target:     lustre=MDT0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x105 (MDT MGS writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

checking for existing Lustre data: found

   Read previous values:
Target:     lustre-MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x1 (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.203.115@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

   Permanent disk data:
Target:     lustre=MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x101 (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.203.115@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

checking for existing Lustre data: found

   Read previous values:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x2 (OST )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.203.115@tcp sys.timeout=20

   Permanent disk data:
Target:     lustre=OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.203.115@tcp sys.timeout=20

checking for existing Lustre data: found

   Read previous values:
Target:     lustre-OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x62 (OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.203.115@tcp sys.timeout=20

   Permanent disk data:
Target:     lustre=OST0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x162 (OST first_time update writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.203.115@tcp sys.timeout=20

checking for existing Lustre data: found

   Read previous values:
Target:     lustre-OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x102 (OST writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.203.115@tcp sys.timeout=20

   Permanent disk data:
Target:     lustre=OST0000
Index:      0
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x142 (OST update writeconf )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=192.168.203.115@tcp sys.timeout=20 failover.node=192.168.203.115@tcp

Writing CONFIGS/mountdata
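Note on the step above: the failover.node=192.168.203.115@tcp parameter in the final OST0000 "Permanent disk data" block is normally recorded with tunefs.lustre before the target is remounted, and in this test the failover NID is deliberately the server's own NID so that the OST's first registration appears to come from a failover node. A minimal sketch of that configuration step follows, assuming the device path and NID shown in this log; the error handling is illustrative and is not the test framework's own code.

# Hedged sketch: record a failover NID and force a writeconf on the OST,
# mirroring the "failover.node=192.168.203.115@tcp" and writeconf flag above.
# Device path and NID are copied from this log; adjust for a real setup.
OSTDEV=/dev/mapper/ost1_flakey
FAILNID=192.168.203.115@tcp

tunefs.lustre --writeconf --failnode="$FAILNID" "$OSTDEV" ||
        echo "tunefs.lustre failed on $OSTDEV" >&2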
start mds service on oleg315-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg315-server: oleg315-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg315-client: oleg315-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg315-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg315-server: oleg315-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg315-client: oleg315-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg315-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg315-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg315-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg315-server: mount.lustre: mount /dev/mapper/ost1_flakey at /mnt/lustre-ost1 failed: Cannot assign requested address
pdsh@oleg315-client: oleg315-server: ssh exited with exit code 99
Start of /dev/mapper/ost1_flakey on ost1 failed 99
umount lustre on /mnt/lustre.....
stop ost1 service on oleg315-server
stop mds service on oleg315-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg315-server
stop mds service on oleg315-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg315-server
LNET ready to unload
unloading modules on: 'oleg315-server'
oleg315-server: oleg315-server.virtnet: executing unload_modules_local
oleg315-server: LNET ready to unload
modules unloaded.
pdsh@oleg315-client: oleg315-client: ssh exited with exit code 2
pdsh@oleg315-client: oleg315-server: ssh exited with exit code 2
pdsh@oleg315-client: oleg315-client: ssh exited with exit code 2
pdsh@oleg315-client: oleg315-server: ssh exited with exit code 2
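The "Cannot assign requested address" failure and exit code 99 above are the expected outcome of this test: a first-time registration attempted from a node listed only as a failover NID is refused by the MGS. Below is a hedged sketch of how such an expected-failure check could be scripted; the helper name and the error-string match are assumptions for illustration, not the test suite's own implementation.

# Hedged sketch: attempt to mount the OST and require that it fails,
# since initial registration from a failnode should be rejected.
# Paths and mount options are taken from this log; the helper name is hypothetical.
check_expected_failure() {
        local dev=/dev/mapper/ost1_flakey
        local mnt=/mnt/lustre-ost1

        if mount -t lustre -o localrecov "$dev" "$mnt" 2>/tmp/mount.err; then
                umount "$mnt"
                echo "ERROR: mount unexpectedly succeeded" >&2
                return 1
        fi
        grep -q "Cannot assign requested address" /tmp/mount.err &&
                echo "mount failed as expected"
}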