== sanity-sec test 25: test save and reload nodemap config ========================================================== 04:30:01 (1713429001)
Stopping clients: oleg111-client.virtnet /mnt/lustre (opts:)
Stopping client oleg111-client.virtnet /mnt/lustre opts:
mdt.lustre-MDT0000.identity_upcall=NONE
On MGS 192.168.201.111, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.201.111, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.111, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.111, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.111, test25.id = nodemap.test25.id=41
waiting 10 secs for sync
=== sanity-sec: start setup 04:31:00 (1713429060) ===
Checking servers environments
Checking clients oleg111-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
loading modules on: 'oleg111-server'
oleg111-server: oleg111-server.virtnet: executing load_modules_local
oleg111-server: Loading modules from /home/green/git/lustre-release/lustre
oleg111-server: detected 4 online CPUs by sysfs
oleg111-server: Force libcfs to create 2 CPU partitions
oleg111-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
Setup mgs, mdt, osts
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg111-server: mount.lustre: according to /etc/mtab lustre-mdt1/mdt1 is already mounted on /mnt/lustre-mds1
pdsh@oleg111-client: oleg111-server: ssh exited with exit code 17
Start of lustre-mdt1/mdt1 on mds1 failed 17
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
oleg111-server: mount.lustre: according to /etc/mtab lustre-ost1/ost1 is already mounted on /mnt/lustre-ost1
pdsh@oleg111-client: oleg111-server: ssh exited with exit code 17
seq.cli-lustre-OST0000-super.width=65536
Start of lustre-ost1/ost1 on ost1 failed 17
Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2
oleg111-server: mount.lustre: according to /etc/mtab lustre-ost2/ost2 is already mounted on /mnt/lustre-ost2
pdsh@oleg111-client: oleg111-server: ssh exited with exit code 17
seq.cli-lustre-OST0001-super.width=65536
Start of lustre-ost2/ost2 on ost2 failed 17
Starting client: oleg111-client.virtnet: -o user_xattr,flock oleg111-server@tcp:/lustre /mnt/lustre
Starting client oleg111-client.virtnet: -o user_xattr,flock oleg111-server@tcp:/lustre /mnt/lustre
Started clients oleg111-client.virtnet:
192.168.201.111@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b6ead000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b6ead000.idle_timeout=debug
disable quota as required
=== sanity-sec: finish setup 04:31:22 (1713429082) ===
Stopping clients: oleg111-client.virtnet /mnt/lustre (opts:)
Stopping client oleg111-client.virtnet /mnt/lustre opts:
On MGS 192.168.201.111, default.admin_nodemap = nodemap.default.admin_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.111, default.trusted_nodemap = nodemap.default.trusted_nodemap=0
waiting 10 secs for sync
On MGS 192.168.201.111, active = nodemap.active=0
waiting 10 secs for sync
Starting client oleg111-client.virtnet: -o user_xattr,flock oleg111-server@tcp:/lustre /mnt/lustre
Started clients oleg111-client.virtnet:
192.168.201.111@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
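For context, a minimal sketch of the lctl commands that typically set and verify the nodemap parameters seen in this log on the MGS; the exact commands are not part of the log output and are assumed here, only the nodemap names (default, c0, test25) and parameter names come from the log:

  # assumed commands, not captured in this log:
  # enable the nodemap feature (shows up above as nodemap.active=1)
  lctl nodemap_activate 1
  # create the nodemap used by the test (nodemap.test25.id=41 above)
  lctl nodemap_add test25
  # grant admin/trusted on the default nodemap
  # (nodemap.default.admin_nodemap=1, nodemap.default.trusted_nodemap=1 above)
  lctl nodemap_modify --name default --property admin --value 1
  lctl nodemap_modify --name default --property trusted --value 1
  # read the parameters back to confirm they synced to the servers
  lctl get_param nodemap.active nodemap.default.admin_nodemap nodemap.default.trusted_nodemap

The repeated "waiting 10 secs for sync" lines in the log correspond to the test framework pausing until these MGS-side settings have propagated to the other servers.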