== sanity-sec test 25: test save and reload nodemap config ========================================================== 16:47:17 (1713300437)
Stopping clients: oleg346-client.virtnet /mnt/lustre (opts:)
Stopping client oleg346-client.virtnet /mnt/lustre opts:
mdt.lustre-MDT0000.identity_upcall=NONE
On MGS 192.168.203.146, active = nodemap.active=1 waiting 10 secs for sync
On MGS 192.168.203.146, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.203.146, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.203.146, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.203.146, test25.id = nodemap.test25.id=41 waiting 10 secs for sync
=== sanity-sec: start setup 16:48:18 (1713300498) ===
Checking servers environments
Checking clients oleg346-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory
loading modules on: 'oleg346-server'
oleg346-server: oleg346-server.virtnet: executing load_modules_local
oleg346-server: Loading modules from /home/green/git/lustre-release/lustre
oleg346-server: detected 4 online CPUs by sysfs
oleg346-server: Force libcfs to create 2 CPU partitions
oleg346-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
Setup mgs, mdt, osts
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg346-server: mount.lustre: according to /etc/mtab lustre-mdt1/mdt1 is already mounted on /mnt/lustre-mds1
pdsh@oleg346-client: oleg346-server: ssh exited with exit code 17
Start of lustre-mdt1/mdt1 on mds1 failed 17
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
oleg346-server: mount.lustre: according to /etc/mtab lustre-ost1/ost1 is already mounted on /mnt/lustre-ost1
pdsh@oleg346-client: oleg346-server: ssh exited with exit code 17
seq.cli-lustre-OST0000-super.width=65536
Start of lustre-ost1/ost1 on ost1 failed 17
Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2
oleg346-server: mount.lustre: according to /etc/mtab lustre-ost2/ost2 is already mounted on /mnt/lustre-ost2
pdsh@oleg346-client: oleg346-server: ssh exited with exit code 17
seq.cli-lustre-OST0001-super.width=65536
Start of lustre-ost2/ost2 on ost2 failed 17
Starting client: oleg346-client.virtnet: -o user_xattr,flock oleg346-server@tcp:/lustre /mnt/lustre
Starting client oleg346-client.virtnet: -o user_xattr,flock oleg346-server@tcp:/lustre /mnt/lustre
Started clients oleg346-client.virtnet: 192.168.203.146@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012aab5000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012aab5000.idle_timeout=debug
disable quota as required
=== sanity-sec: finish setup 16:48:37 (1713300517) ===
Stopping clients: oleg346-client.virtnet /mnt/lustre (opts:)
Stopping client oleg346-client.virtnet /mnt/lustre opts:
On MGS 192.168.203.146, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.203.146, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.203.146, active = nodemap.active=0 waiting 10 secs for sync
Starting client oleg346-client.virtnet: -o user_xattr,flock oleg346-server@tcp:/lustre /mnt/lustre
Started clients oleg346-client.virtnet: 192.168.203.146@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
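The repeated "waiting 10 secs for sync" messages in the log come from the test framework polling the MGS until a nodemap parameter change has propagated. A minimal stand-alone sketch of that poll-until-synced pattern (the helper name `wait_for_param` and the fixed retry budget are illustrative assumptions, not the framework's actual code):

```shell
#!/bin/sh
# Poll a command until its output equals the expected value, or give up.
# Mirrors the "waiting 10 secs for sync" pattern seen in the log.
wait_for_param() {
    expected="$1"; shift
    tries=10
    while [ "$tries" -gt 0 ]; do
        # "$@" is the command that reads the parameter, e.g. on a
        # real cluster:  lctl get_param nodemap.active
        [ "$("$@")" = "$expected" ] && return 0
        sleep 1
        tries=$((tries - 1))
    done
    return 1   # parameter never reached the expected value
}
```

On a live system this would wrap the nodemap reads shown above, e.g. `wait_for_param "nodemap.active=1" lctl get_param nodemap.active`.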
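The "failed 17" lines are exit code 17 (errno EEXIST): mount.lustre found each target already listed in /etc/mtab, so these are re-mount attempts against already-mounted targets rather than real startup failures. A hedged sketch of a guard that checks mount state first (the helper name `is_mounted` is illustrative, not part of the test framework):

```shell
#!/bin/sh
# Return success if the given path is already a mount point,
# judging by /proc/mounts (device mountpoint fstype options dump pass).
is_mounted() {
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/mounts
}

# Example guard, with an illustrative mount command:
# is_mounted /mnt/lustre-mds1 || mount -t lustre lustre-mdt1/mdt1 /mnt/lustre-mds1
```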