-----============= acceptance-small: runtests ============----- Thu Apr 18 20:10:44 EDT 2024
Using GSS shared-key feature
=== runtests: start setup 20:10:47 (1713485447) ===
oleg229-client.virtnet: executing check_config_client /mnt/lustre
oleg229-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg229-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012ab0a800.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012ab0a800.idle_timeout=debug
disable quota as required
oleg229-server: oleg229-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
osd-ldiskfs.track_declares_assert=1
setting all flavor to null
found 4 null out of total 5 connections
restoring to default flavor...
2 existing rules
remove rule: lustre.srpc.flavor.default.cli2mdt=ski
remove rule: lustre.srpc.flavor.default.cli2ost=ski
Setting sptlrpc rule: lustre.srpc.flavor.default.cli2mdt=ski
Setting sptlrpc rule: lustre.srpc.flavor.default.cli2ost=ski
checking cli2mdt...found 1/1 ski connections
checking cli2ost...found 2/2 ski connections
GSS_SK now at default flavor: ski
=== runtests: finish setup 20:10:57 (1713485457) ===
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=1
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=1
Creating to objid 33 on ost lustre-OST0000...
Creating to objid 33 on ost lustre-OST0001...
total: 33 open/close in 0.13 seconds: 263.99 ops/second
total: 33 open/close in 0.13 seconds: 260.08 ops/second
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== runtests test 1: All Runtests ========================= 20:11:10 (1713485470)
usage before starting test
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        1760     1285928   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      3833116        1524     3605496   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116        1524     3605496   1% /mnt/lustre[OST:1]
filesystem_summary:      7666232        3048     7210992   1% /mnt/lustre
UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID      1024000         276     1023724   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID       262144         367      261777   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID       262144         367      261777   1% /mnt/lustre[OST:1]
filesystem_summary:       523830         276      523554   1% /mnt/lustre
touching /mnt/lustre at Thu Apr 18 20:11:15 EDT 2024 (@1713485475)
create an empty file /mnt/lustre/hosts.8815
copying /etc/hosts to /mnt/lustre/hosts.8815
comparing /etc/hosts and /mnt/lustre/hosts.8815
renaming /mnt/lustre/hosts.8815 to /mnt/lustre/hosts.8815.ren
copying /etc/hosts to /mnt/lustre/hosts.8815 again
truncating /mnt/lustre/hosts.8815
removing /mnt/lustre/hosts.8815
copying /etc/hosts to /mnt/lustre/hosts.8815.2
truncating /mnt/lustre/hosts.8815.2 to 123 bytes
creating /mnt/lustre/d1.runtests
copying 1000 files from /etc /bin to /mnt/lustre/d1.runtests/etc /bin at Thu Apr 18 20:11:21 EDT 2024
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
comparing 1000 newly copied files at Thu Apr 18 20:11:32 EDT 2024
running createmany -d /mnt/lustre/d1.runtests/d 1000
total: 1000 mkdir in 1.22 seconds: 822.62 ops/second
finished at Thu Apr 18 20:11:42 EDT 2024 (27)
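Note on the "Setting sptlrpc rule" messages in the setup phase above: they come from sptlrpc flavor rules pushed through the MGS so that client-to-MDT and client-to-OST traffic uses shared-key integrity (ski). A minimal sketch of setting and checking such rules by hand is shown below; the conf_param syntax and the srpc_info parameter name are assumptions from documented Lustre usage and may differ between releases, so treat this as illustrative rather than the exact commands the test framework runs.

    # Sketch only: on the MGS node, set the default client-to-MDT and
    # client-to-OST flavor to shared-key integrity for fsname "lustre".
    lctl conf_param lustre.srpc.flavor.default.cli2mdt=ski
    lctl conf_param lustre.srpc.flavor.default.cli2ost=ski

    # On a client, inspect the flavor negotiated on each import
    # (parameter name assumed; may vary by version).
    lctl get_param *.*.srpc_info | grep -i flavor
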
Stopping clients: oleg229-client.virtnet /mnt/lustre (opts:)
Stopping client oleg229-client.virtnet /mnt/lustre opts:
Stopping clients: oleg229-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg229-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg229-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg229-server
Checking servers environments
Checking clients oleg229-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg229-server'
oleg229-server: oleg229-server.virtnet: executing load_modules_local
oleg229-server: Loading modules from /home/green/git/lustre-release/lustre
oleg229-server: detected 4 online CPUs by sysfs
oleg229-server: Force libcfs to create 2 CPU partitions
oleg229-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
Starting gss daemon on mds: oleg229-server
Starting gss daemon on ost: oleg229-server
Loading basic SSK keys on all servers
oleg229-server: sptlrpc.gss.rsi_upcall=/home/green/git/lustre-release/lustre/utils/gss/l_getauth
Setup mgs, mdt, osts
Starting mds1: -o localrecov,skpath=/tmp/test-framework-keys /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg229-server: oleg229-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg229-client: oleg229-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting ost1: -o localrecov,skpath=/tmp/test-framework-keys /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg229-server: oleg229-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg229-client: oleg229-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov,skpath=/tmp/test-framework-keys /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg229-server: oleg229-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg229-client: oleg229-server: ssh exited with exit code 1
Started lustre-OST0001
GSS_SK: setting kernel keyring perms
Starting client: oleg229-client.virtnet: -o user_xattr,flock,skpath=/tmp/test-framework-keys oleg229-server@tcp:/lustre /mnt/lustre
Starting client oleg229-client.virtnet: -o user_xattr,flock,skpath=/tmp/test-framework-keys oleg229-server@tcp:/lustre /mnt/lustre
Started clients oleg229-client.virtnet: 192.168.202.129@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012d05b000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012d05b000.idle_timeout=debug
disable quota as required
Setting sptlrpc rule: lustre.srpc.flavor.default.cli2mdt=ski
Setting sptlrpc rule: lustre.srpc.flavor.default.cli2ost=ski
checking cli2mdt...found 1/1 ski connections
checking cli2ost...found 2/2 ski connections
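Note on the restart sequence above: the skpath mount option is how the shared-key material in /tmp/test-framework-keys is handed to both the server targets and the client. A minimal sketch of the equivalent manual mounts, reusing the device names, options, and NID exactly as they appear in the log (the hostnames and paths are specific to this test rig):

    # Server targets, run on oleg229-server; options taken from the log.
    mount -t lustre -o localrecov,skpath=/tmp/test-framework-keys \
          /dev/mapper/mds1_flakey /mnt/lustre-mds1
    mount -t lustre -o localrecov,skpath=/tmp/test-framework-keys \
          /dev/mapper/ost1_flakey /mnt/lustre-ost1

    # Client mount, run on oleg229-client; options as logged.
    mount -t lustre -o user_xattr,flock,skpath=/tmp/test-framework-keys \
          oleg229-server@tcp:/lustre /mnt/lustre
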
comparing 1000 previously copied files
running statmany -s /mnt/lustre/d1.runtests/d 1000 2000
using seed 1084410242
running for 2000 iterations
total: 2000 stats in 1 seconds: 2000.000000 stats/second
usage after creating all files
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        7112     1280576   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      3833116        4012     3603008   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116       11604     3595416   1% /mnt/lustre[OST:1]
filesystem_summary:      7666232       15616     7198424   1% /mnt/lustre
UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID      1024000        2329     1021671   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID       262144         713      261431   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID       262144         715      261429   1% /mnt/lustre[OST:1]
filesystem_summary:       525189        2329      522860   1% /mnt/lustre
Stopping clients: oleg229-client.virtnet /mnt/lustre (opts:)
Stopping client oleg229-client.virtnet /mnt/lustre opts:
Stopping clients: oleg229-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg229-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg229-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg229-server
Checking servers environments
Checking clients oleg229-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
libkmod: kmod_module_get_holders: could not open '/sys/module/intel_rapl/holders': No such file or directory
loading modules on: 'oleg229-server'
oleg229-server: oleg229-server.virtnet: executing load_modules_local
oleg229-server: Loading modules from /home/green/git/lustre-release/lustre
oleg229-server: detected 4 online CPUs by sysfs
oleg229-server: Force libcfs to create 2 CPU partitions
oleg229-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory
Starting gss daemon on mds: oleg229-server
Starting gss daemon on ost: oleg229-server
Loading basic SSK keys on all servers
oleg229-server: sptlrpc.gss.rsi_upcall=/home/green/git/lustre-release/lustre/utils/gss/l_getauth
Setup mgs, mdt, osts
Starting mds1: -o localrecov,skpath=/tmp/test-framework-keys /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg229-server: oleg229-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg229-client: oleg229-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting ost1: -o localrecov,skpath=/tmp/test-framework-keys /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg229-server: oleg229-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg229-client: oleg229-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov,skpath=/tmp/test-framework-keys /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg229-server: oleg229-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg229-client: oleg229-server: ssh exited with exit code 1
Started lustre-OST0001
GSS_SK: setting kernel keyring perms
Starting client: oleg229-client.virtnet: -o user_xattr,flock,skpath=/tmp/test-framework-keys oleg229-server@tcp:/lustre /mnt/lustre
Starting client oleg229-client.virtnet: -o user_xattr,flock,skpath=/tmp/test-framework-keys oleg229-server@tcp:/lustre /mnt/lustre
Started clients oleg229-client.virtnet: 192.168.202.129@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800a7afe800.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800a7afe800.idle_timeout=debug
disable quota as required
Setting sptlrpc rule: lustre.srpc.flavor.default.cli2mdt=ski
Setting sptlrpc rule: lustre.srpc.flavor.default.cli2ost=ski
checking cli2mdt...found 1/1 ski connections
checking cli2ost...found 2/2 ski connections
running unlinkmany -d /mnt/lustre/d1.runtests/d 1000
 - unlinked 0 (time 1713485641 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
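Note on the createmany/statmany/unlinkmany runs above: these are helper binaries shipped with the Lustre test suite (lustre/tests), used here to drive the directory-metadata workload. A minimal sketch of the create/stat/unlink cycle this test performs, reusing the arguments exactly as logged; the counts are just the values from this run:

    # Sketch of the metadata workload, assuming the lustre-tests helpers
    # are in PATH and /mnt/lustre/d1.runtests exists.
    createmany -d /mnt/lustre/d1.runtests/d 1000      # create 1000 directories
    statmany -s /mnt/lustre/d1.runtests/d 1000 2000   # 2000 stat iterations over them
    unlinkmany -d /mnt/lustre/d1.runtests/d 1000      # remove them again
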
removing /mnt/lustre/d1.runtests
renaming /mnt/lustre/hosts.8815.ren to /mnt/lustre/hosts.8815
truncating /mnt/lustre/hosts.8815
removing /mnt/lustre/hosts.8815
verifying /mnt/lustre/hosts.8815.2 is 123 bytes
done
Waiting for MDT destroys to complete
usage after removing all files
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        1900     1285788   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      3833116        1536     3605484   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116        1536     3605484   1% /mnt/lustre[OST:1]
filesystem_summary:      7666232        3072     7210968   1% /mnt/lustre
UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID      1024000         276     1023724   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID       262144         336      261808   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID       262144         336      261808   1% /mnt/lustre[OST:1]
filesystem_summary:       523892         276      523616   1% /mnt/lustre
Space was freed: now 3072kB, was 3048kB.
PASS 1 (195s)
debug_raw_pointers=0
debug_raw_pointers=0
== runtests test complete, duration 220 sec ============== 20:14:25 (1713485665)
=== runtests: start cleanup 20:14:26 (1713485666) ===
=== runtests: finish cleanup 20:14:26 (1713485666) ===
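Note on the final "Space was freed" line above: the test compares the filesystem_summary "Used" figure reported by lfs df before the test starts with the figure after all files are removed and MDT destroys complete. A minimal sketch of that kind of before/after comparison, assuming lfs df output in the format shown above (the used_kb helper name is illustrative, not part of the test framework):

    # Sketch only: read used kilobytes from the filesystem_summary line.
    used_kb() { lfs df "$1" | awk '/^filesystem_summary:/ { print $3 }'; }

    before=$(used_kb /mnt/lustre)
    # ... run the workload and remove all files ...
    after=$(used_kb /mnt/lustre)
    echo "Space was freed: now ${after}kB, was ${before}kB."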