== conf-sanity test 130: re-register an MDT after writeconf ========================================================== 06:10:49 (1713348649)
Checking servers environments
Checking clients oleg359-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg359-server'
oleg359-server: oleg359-server.virtnet: executing load_modules_local
oleg359-server: Loading modules from /home/green/git/lustre-release/lustre
oleg359-server: detected 4 online CPUs by sysfs
oleg359-server: Force libcfs to create 2 CPU partitions
oleg359-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg359-server: oleg359-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg359-client: oleg359-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg359-server: oleg359-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg359-client: oleg359-server: ssh exited with exit code 1
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg359-server: oleg359-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg359-client: oleg359-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
oleg359-server: oleg359-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg359-client: oleg359-server: ssh exited with exit code 1
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: oleg359-client.virtnet: -o user_xattr,flock oleg359-server@tcp:/lustre /mnt/lustre
Starting client oleg359-client.virtnet: -o user_xattr,flock oleg359-server@tcp:/lustre /mnt/lustre
Started clients oleg359-client.virtnet:
192.168.203.159@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012b42b000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012b42b000.idle_timeout=debug
setting jobstats to procname_uid
Setting lustre.sys.jobid_var from disable to procname_uid
Waiting 90s for 'procname_uid'
Updated after 3s: want 'procname_uid' got 'procname_uid'
disable quota as required
stop mds service on oleg359-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg359-server
checking for existing Lustre data: found

   Read previous values:
Target:     lustre-MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x1
              (MDT )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.203.159@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

   Permanent disk data:
Target:     lustre=MDT0001
Index:      1
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x101
              (MDT writeconf )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.203.159@tcp sys.timeout=20 mdt.identity_upcall=/home/green/git/lustre-release/lustre/utils/l_getidentity

Writing CONFIGS/mountdata
start mds service on oleg359-server
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg359-server: oleg359-server.virtnet: executing set_default_debug -1 all 8
pdsh@oleg359-client: oleg359-server: ssh exited with exit code 1
Started lustre-MDT0001
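
For reference, the stop/writeconf/restart sequence the log records is consistent with the usual Lustre test-framework helpers; a minimal sketch follows. The exact test body is not shown in this log, so the helper invocations below are assumptions; only the facet name (mds2) and device path (/dev/mapper/mds2_flakey) are taken from the output above.

    # Sketch only: reproduce the stop/writeconf/restart steps seen in the log.
    # "stop" and "start" here are the test-framework facet helpers, not raw mount(8).
    stop mds2 -f                                        # "Stopping /mnt/lustre-mds2 (opts:-f)"
    tunefs.lustre --writeconf /dev/mapper/mds2_flakey   # sets the 0x100 writeconf flag in CONFIGS/mountdata
    start mds2 /dev/mapper/mds2_flakey -o localrecov    # remount; MDT0001 re-registers with the MGS

Note that tunefs.lustre reports the pending state in the label itself: the "Permanent disk data" target reads lustre=MDT0001 (with '=') while writeconf is set, and reverts to lustre-MDT0001 once the target has re-registered.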