== conf-sanity test 131: MDT backup restore with project ID ========================================================== 12:17:14 (1713284234)
oleg120-server: debugfs 1.46.2.wc5 (26-Mar-2022)
Checking servers environments
Checking clients oleg120-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory
loading modules on: 'oleg120-server'
oleg120-server: oleg120-server.virtnet: executing load_modules_local
oleg120-server: Loading modules from /home/green/git/lustre-release/lustre
oleg120-server: detected 4 online CPUs by sysfs
oleg120-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg120-server: mount.lustre: according to /etc/mtab /dev/mapper/mds1_flakey is already mounted on /mnt/lustre-mds1
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 17
Start of /dev/mapper/mds1_flakey on mds1 failed 17
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg120-server: mount.lustre: according to /etc/mtab /dev/mapper/mds2_flakey is already mounted on /mnt/lustre-mds2
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 17
Start of /dev/mapper/mds2_flakey on mds2 failed 17
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg120-server: mount.lustre: according to /etc/mtab /dev/mapper/ost1_flakey is already mounted on /mnt/lustre-ost1
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 17
seq.cli-lustre-OST0000-super.width=65536
Start of /dev/mapper/ost1_flakey on ost1 failed 17
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
oleg120-server: mount.lustre: according to /etc/mtab /dev/mapper/ost2_flakey is already mounted on /mnt/lustre-ost2
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 17
seq.cli-lustre-OST0001-super.width=65536
Start of /dev/mapper/ost2_flakey on ost2 failed 17
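(Note: exit code 17 above is mount.lustre reporting EEXIST, i.e. the target was left mounted by a previous run, so each "Start of ... failed 17" is effectively a no-op rather than a real failure. A minimal sketch of how a wrapper could skip already-mounted targets before calling mount; the device and mount-point names are taken from the log, but the helper itself is hypothetical and not part of the test framework:)

  # hypothetical helper, not from conf-sanity: skip a target that is already mounted
  start_target() {
      local dev=$1 mnt=$2
      # /proc/mounts lists "device mountpoint fstype opts ..."; a padded match avoids prefix hits
      if grep -q " ${mnt} " /proc/mounts; then
          echo "${dev} already mounted on ${mnt}, skipping"
          return 0
      fi
      mount -t lustre -o localrecov "${dev}" "${mnt}"
  }
  start_target /dev/mapper/mds1_flakey /mnt/lustre-mds1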
mount lustre on /mnt/lustre.....
Starting client: oleg120-client.virtnet: -o user_xattr,flock oleg120-server@tcp:/lustre /mnt/lustre
mount.lustre: according to /etc/mtab oleg120-server@tcp:/lustre is already mounted on /mnt/lustre
Starting client oleg120-client.virtnet: -o user_xattr,flock oleg120-server@tcp:/lustre /mnt/lustre
Started clients oleg120-client.virtnet: 192.168.201.120@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b625f000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b625f000.idle_timeout=debug
disable quota as required
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d131.conf-sanity
total: 512 open/close in 2.10 seconds: 243.58 ops/second
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d131.conf-sanity.inherit
total: 128 open/close in 0.52 seconds: 244.06 ops/second
Stopping clients: oleg120-client.virtnet /mnt/lustre (opts:)
Stopping client oleg120-client.virtnet /mnt/lustre opts:
Stopping clients: oleg120-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg120-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg120-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg120-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg120-server
file-level backup/restore on mds1:/dev/mapper/mds1_flakey
backup data
reformat new device
Format mds1: /dev/mapper/mds1_flakey
restore data
remove recovery logs
removed '/mnt/lustre-brpt/CATALOGS'
file-level backup/restore on mds2:/dev/mapper/mds2_flakey
backup data
reformat new device
Format mds2: /dev/mapper/mds2_flakey
restore data
remove recovery logs
removed '/mnt/lustre-brpt/CATALOGS'
Checking servers environments
Checking clients oleg120-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg120-server'
oleg120-server: oleg120-server.virtnet: executing load_modules_local
oleg120-server: Loading modules from /home/green/git/lustre-release/lustre
oleg120-server: detected 4 online CPUs by sysfs
oleg120-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-MDT0001
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-OST0001
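(Note: the "file-level backup/restore" blocks above — backup data, reformat new device, restore data, remove recovery logs — correspond to the standard ldiskfs MDT file-level backup procedure. A minimal sketch of that flow, assuming an ldiskfs backend; the mount point /mnt/lustre-brpt and the removed CATALOGS file match the log, while the archive path and commented mkfs options are illustrative, and the test itself drives these steps through its own framework helpers:)

  dev=/dev/mapper/mds1_flakey
  # backup data: mount the MDT as ldiskfs and archive it together with its extended attributes
  mount -t ldiskfs ${dev} /mnt/lustre-brpt
  tar czf /tmp/mdt1.tgz --xattrs --xattrs-include='*' --sparse -C /mnt/lustre-brpt .
  umount /mnt/lustre-brpt
  # reformat new device (same parameters as the original format)
  # mkfs.lustre --mdt --reformat ... ${dev}
  # restore data: unpack the archive, preserving the extended attributes
  mount -t ldiskfs ${dev} /mnt/lustre-brpt
  tar xzpf /tmp/mdt1.tgz --xattrs --xattrs-include='*' -C /mnt/lustre-brpt
  # remove recovery logs so they are regenerated on the next Lustre mount
  rm -f /mnt/lustre-brpt/CATALOGS
  umount /mnt/lustre-brpt

(Older documented variants of this procedure save and restore the EAs separately with getfattr/setfattr instead of relying on tar's --xattrs support.)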
mount lustre on /mnt/lustre.....
Starting client: oleg120-client.virtnet: -o user_xattr,flock oleg120-server@tcp:/lustre /mnt/lustre
Starting client oleg120-client.virtnet: -o user_xattr,flock oleg120-server@tcp:/lustre /mnt/lustre
Started clients oleg120-client.virtnet: 192.168.201.120@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b6f18000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b6f18000.idle_timeout=debug
setting jobstats to procname_uid
Setting lustre.sys.jobid_var from disable to procname_uid
Waiting 90s for 'procname_uid'
Updated after 2s: want 'procname_uid' got 'procname_uid'
disable quota as required
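(Note: the jobstats step above switches the cluster-wide jobid_var and then polls until the client reports the new value. A minimal sketch of the equivalent commands; the fsname "lustre", the server name, and the 90s limit come from the log, while the polling loop itself is illustrative:)

  # set jobid_var through the MGS so it propagates to all nodes
  ssh oleg120-server "lctl conf_param lustre.sys.jobid_var=procname_uid"
  # wait up to 90s for the local client to pick up the new value
  deadline=$((SECONDS + 90))
  until [ "$(lctl get_param -n jobid_var)" = procname_uid ]; do
      [ ${SECONDS} -ge ${deadline} ] && { echo "timed out waiting for procname_uid"; break; }
      sleep 1
  done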