== conf-sanity test 131: MDT backup restore with project ID ========================================================== 17:11:31 (1713301891)
oleg107-server: debugfs 1.46.2.wc5 (26-Mar-2022)
Checking servers environments
Checking clients oleg107-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg107-server'
oleg107-server: oleg107-server.virtnet: executing load_modules_local
oleg107-server: Loading modules from /home/green/git/lustre-release/lustre
oleg107-server: detected 4 online CPUs by sysfs
oleg107-server: Force libcfs to create 2 CPU partitions
oleg107-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg107-server: mount.lustre: according to /etc/mtab /dev/mapper/mds1_flakey is already mounted on /mnt/lustre-mds1
pdsh@oleg107-client: oleg107-server: ssh exited with exit code 17
Start of /dev/mapper/mds1_flakey on mds1 failed 17
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg107-server: mount.lustre: according to /etc/mtab /dev/mapper/mds2_flakey is already mounted on /mnt/lustre-mds2
pdsh@oleg107-client: oleg107-server: ssh exited with exit code 17
Start of /dev/mapper/mds2_flakey on mds2 failed 17
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg107-server: mount.lustre: according to /etc/mtab /dev/mapper/ost1_flakey is already mounted on /mnt/lustre-ost1
pdsh@oleg107-client: oleg107-server: ssh exited with exit code 17
seq.cli-lustre-OST0000-super.width=65536
Start of /dev/mapper/ost1_flakey on ost1 failed 17
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
oleg107-server: mount.lustre: according to /etc/mtab /dev/mapper/ost2_flakey is already mounted on /mnt/lustre-ost2
pdsh@oleg107-client: oleg107-server: ssh exited with exit code 17
seq.cli-lustre-OST0001-super.width=65536
Start of /dev/mapper/ost2_flakey on ost2 failed 17
mount lustre on /mnt/lustre.....
Starting client: oleg107-client.virtnet: -o user_xattr,flock oleg107-server@tcp:/lustre /mnt/lustre
mount.lustre: according to /etc/mtab oleg107-server@tcp:/lustre is already mounted on /mnt/lustre
Starting client oleg107-client.virtnet: -o user_xattr,flock oleg107-server@tcp:/lustre /mnt/lustre
Started clients oleg107-client.virtnet: 192.168.201.107@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800aa3af000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800aa3af000.idle_timeout=debug
disable quota as required
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d131.conf-sanity
total: 512 open/close in 2.04 seconds: 250.96 ops/second
striped dir -i1 -c2 -H all_char /mnt/lustre/d131.conf-sanity.inherit
total: 128 open/close in 0.54 seconds: 236.48 ops/second
Stopping clients: oleg107-client.virtnet /mnt/lustre (opts:)
Stopping client oleg107-client.virtnet /mnt/lustre opts:
Stopping clients: oleg107-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg107-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg107-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg107-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg107-server
file-level backup/restore on mds1:/dev/mapper/mds1_flakey
backup data
reformat new device
Format mds1: /dev/mapper/mds1_flakey
restore data
remove recovery logs
removed '/mnt/lustre-brpt/CATALOGS'
file-level backup/restore on mds2:/dev/mapper/mds2_flakey
backup data
reformat new device
Format mds2: /dev/mapper/mds2_flakey
restore data
remove recovery logs
removed '/mnt/lustre-brpt/CATALOGS'
Checking servers environments
Checking clients oleg107-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg107-server'
oleg107-server: oleg107-server.virtnet: executing load_modules_local
oleg107-server: Loading modules from /home/green/git/lustre-release/lustre
oleg107-server: detected 4 online CPUs by sysfs
oleg107-server: Force libcfs to create 2 CPU partitions
oleg107-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg107-server: oleg107-server.virtnet: executing set_default_debug -1 all
pdsh@oleg107-client: oleg107-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg107-server: oleg107-server.virtnet: executing set_default_debug -1 all
pdsh@oleg107-client: oleg107-server: ssh exited with exit code 1
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg107-server: oleg107-server.virtnet: executing set_default_debug -1 all
pdsh@oleg107-client: oleg107-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg107-server: oleg107-server.virtnet: executing set_default_debug -1 all
pdsh@oleg107-client: oleg107-server: ssh exited with exit code 1
Started lustre-OST0001
mount lustre on /mnt/lustre.....
Starting client: oleg107-client.virtnet: -o user_xattr,flock oleg107-server@tcp:/lustre /mnt/lustre
Starting client oleg107-client.virtnet: -o user_xattr,flock oleg107-server@tcp:/lustre /mnt/lustre
Started clients oleg107-client.virtnet: 192.168.201.107@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800aae1f000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800aae1f000.idle_timeout=debug
setting jobstats to procname_uid
Setting lustre.sys.jobid_var from disable to procname_uid
Waiting 90s for 'procname_uid'
Updated after 2s: want 'procname_uid' got 'procname_uid'
disable quota as required
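For reference, the "file-level backup/restore on mdsN" phase logged above (backup data, reformat new device, restore data, remove recovery logs, ending in `removed '/mnt/lustre-brpt/CATALOGS'`) matches the general file-level MDT backup procedure described in the Lustre Operations Manual. The sketch below is a hypothetical illustration of that sequence, not the test framework's actual helper: the device name, backup path, and mkfs options are assumptions taken from the log, and it defaults to a dry run (printing the commands) because the real steps require a Lustre server environment and are destructive.

```shell
#!/bin/bash
# Hypothetical sketch of a file-level MDT backup/restore cycle.
# Names mirror the log above; mkfs options are illustrative only.

DEV=${DEV:-/dev/mapper/mds1_flakey}     # MDT block device
BRPT=${BRPT:-/mnt/lustre-brpt}          # temporary ldiskfs mount point
BACKUP=${BACKUP:-/tmp/mds1-backup.tgz}  # backup archive location
DRY_RUN=${DRY_RUN:-1}                   # 1 = only print the commands

CMDS=()
run() {
    # Record each step; execute it only when DRY_RUN is disabled.
    CMDS+=("$*")
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

# 1. backup data: mount the MDT as plain ldiskfs, save files with xattrs
run mount -t ldiskfs "$DEV" "$BRPT"
run tar czf "$BACKUP" --xattrs --xattrs-include='trusted.*' -C "$BRPT" .
run umount "$BRPT"

# 2. reformat new device (real tests pass their own fsname/index/options)
run mkfs.lustre --mdt --reformat --fsname=lustre --index=0 "$DEV"

# 3. restore data onto the freshly formatted device
run mount -t ldiskfs "$DEV" "$BRPT"
run tar xzf "$BACKUP" --xattrs --xattrs-include='trusted.*' -C "$BRPT"

# 4. remove recovery logs so stale llog state is not replayed on mount
run rm -f "$BRPT/CATALOGS"
run umount "$BRPT"
```

Run with `DRY_RUN=0` (as root, against a real MDT device) to execute the steps; step 4 is what produces the `removed '/mnt/lustre-brpt/CATALOGS'` message seen in the log.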