== sanity-sec test 25: test save and reload nodemap config ========================================================== 05:42:55 (1713433375)
Stopping clients: oleg145-client.virtnet /mnt/lustre (opts:)
Stopping client oleg145-client.virtnet /mnt/lustre opts:
mdt.lustre-MDT0000.identity_upcall=NONE
mdt.lustre-MDT0001.identity_upcall=NONE
On MGS 192.168.201.145, active = nodemap.active=1 waiting 10 secs for sync
On MGS 192.168.201.145, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync
On MGS 192.168.201.145, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.201.145, c0.trusted_nodemap = nodemap.c0.trusted_nodemap=1 waiting 10 secs for sync
On MGS 192.168.201.145, test25.id = nodemap.test25.id=41 waiting 10 secs for sync
=== sanity-sec: start setup 05:43:56 (1713433436) ===
Checking servers environments
Checking clients oleg145-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg145-server'
oleg145-server: oleg145-server.virtnet: executing load_modules_local
oleg145-server: Loading modules from /home/green/git/lustre-release/lustre
oleg145-server: detected 4 online CPUs by sysfs
oleg145-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg145-server: mount.lustre: according to /etc/mtab /dev/mapper/mds1_flakey is already mounted on /mnt/lustre-mds1
pdsh@oleg145-client: oleg145-server: ssh exited with exit code 17
Start of /dev/mapper/mds1_flakey on mds1 failed 17
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg145-server: mount.lustre: according to /etc/mtab /dev/mapper/mds2_flakey is already mounted on /mnt/lustre-mds2
pdsh@oleg145-client: oleg145-server: ssh exited with exit code 17
Start of /dev/mapper/mds2_flakey on mds2 failed 17
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg145-server: mount.lustre: according to /etc/mtab /dev/mapper/ost1_flakey is already mounted on /mnt/lustre-ost1
pdsh@oleg145-client: oleg145-server: ssh exited with exit code 17
seq.cli-lustre-OST0000-super.width=65536
Start of /dev/mapper/ost1_flakey on ost1 failed 17
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
oleg145-server: mount.lustre: according to /etc/mtab /dev/mapper/ost2_flakey is already mounted on /mnt/lustre-ost2
pdsh@oleg145-client: oleg145-server: ssh exited with exit code 17
seq.cli-lustre-OST0001-super.width=65536
Start of /dev/mapper/ost2_flakey on ost2 failed 17
Starting client: oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre
Starting client oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre
Started clients oleg145-client.virtnet:
192.168.201.145@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012b4b6800.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012b4b6800.idle_timeout=debug
disable quota as required
osd-ldiskfs.track_declares_assert=1
=== sanity-sec: finish setup 05:44:21 (1713433461) ===
Stopping clients: oleg145-client.virtnet /mnt/lustre (opts:)
Stopping client oleg145-client.virtnet /mnt/lustre opts:
On MGS 192.168.201.145, default.admin_nodemap = nodemap.default.admin_nodemap=0 waiting 10 secs for sync
On MGS 192.168.201.145, default.trusted_nodemap = nodemap.default.trusted_nodemap=0 waiting 10 secs for sync
On MGS 192.168.201.145, active = nodemap.active=0 waiting 10 secs for sync
Starting client oleg145-client.virtnet: -o user_xattr,flock oleg145-server@tcp:/lustre /mnt/lustre
Started clients oleg145-client.virtnet:
192.168.201.145@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
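
For reference, the "On MGS ... waiting 10 secs for sync" lines above correspond to nodemap parameters being set on the MGS and synced to the servers. The following is a minimal sketch of the kind of lctl commands that produce such parameter changes, assuming the standard lctl nodemap interface; the exact steps taken by sanity-sec.sh test_25 are not shown in this log, so treat the command sequence and the MGS hostname variable as illustrative only.

    # Hypothetical reconstruction; real test helpers may differ.
    MGS=192.168.201.145

    # Activate the nodemap feature on the MGS.
    ssh root@$MGS lctl nodemap_activate 1

    # Give the default nodemap admin and trusted rights.
    ssh root@$MGS lctl nodemap_modify --name default --property admin --value 1
    ssh root@$MGS lctl nodemap_modify --name default --property trusted --value 1

    # Mark the client nodemap (c0) as trusted and create the nodemap under test.
    ssh root@$MGS lctl nodemap_modify --name c0 --property trusted --value 1
    ssh root@$MGS lctl nodemap_add test25

    # After each change the test waits for the config to sync, then reads the
    # parameters back, e.g.:
    ssh root@$MGS lctl get_param nodemap.active nodemap.test25.id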