== conf-sanity test 130: re-register an MDT after writeconf ========================================================== 12:16:18 (1713284178)
Checking servers environments
Checking clients oleg120-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg120-server'
oleg120-server: oleg120-server.virtnet: executing load_modules_local
oleg120-server: Loading modules from /home/green/git/lustre-release/lustre
oleg120-server: detected 4 online CPUs by sysfs
oleg120-server: Force libcfs to create 2 CPU partitions
oleg120-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
oleg120-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
oleg120-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-OST0001
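[Editor's note: the "Starting mds1/mds2/ost1/ost2" facet starts above are ordinary Lustre target mounts performed by the test framework. A minimal sketch of the equivalent manual commands, with the device-mapper paths and mount points taken from the log (the *_flakey devices are the framework's fault-injection wrappers):

    # start each server target by mounting its backing device
    mount -t lustre -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
    mount -t lustre -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
    mount -t lustre -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
    mount -t lustre -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
]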
mount lustre on /mnt/lustre.....
Starting client: oleg120-client.virtnet: -o user_xattr,flock oleg120-server@tcp:/lustre /mnt/lustre
Starting client oleg120-client.virtnet: -o user_xattr,flock oleg120-server@tcp:/lustre /mnt/lustre
Started clients oleg120-client.virtnet:
192.168.201.120@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b625f000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b625f000.idle_timeout=debug
setting jobstats to procname_uid
Setting lustre.sys.jobid_var from disable to procname_uid
Waiting 90s for 'procname_uid'
Updated after 6s: want 'procname_uid' got 'procname_uid'
disable quota as required
stop mds service on oleg120-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg120-server
checking for existing Lustre data: found

   Read previous values:
Target:     lustre-MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x1
            (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.201.120@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

   Permanent disk data:
Target:     lustre=MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x101
            (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.201.120@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

Writing CONFIGS/mountdata
start mds service on oleg120-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-MDT0001
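[Editor's note: the stop/tunefs/restart sequence above is the writeconf cycle this test exercises. In the "Permanent disk data" block the flags gain 0x100 (writeconf) and the target name separator changes from lustre-MDT0001 to lustre=MDT0001, Lustre's marker that the target will re-register its configuration with the MGS on next mount. A minimal sketch of the equivalent manual steps, assuming the same device and mount point as the log:

    umount /mnt/lustre-mds2                            # stop the MDT (the log stops it with -f)
    tunefs.lustre --writeconf /dev/mapper/mds2_flakey  # set the writeconf flag; prints the
                                                       # "Read previous values" / "Permanent
                                                       # disk data" blocks seen above
    mount -t lustre -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
                                                       # remount; the MDT re-registers its
                                                       # config logs with the MGS
]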