== sanity-quota test 35: Usage is still accessible across reboot ========================================================== 09:47:06 (1713534426)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Creating test directory
fail_val=0
fail_loc=0
Write file...
lfs project -p 1000 /mnt/lustre/d35.sanity-quota/f35.sanity-quota
Wait for setattr on objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Save disk usage before restart
User 60000: 2052KB 1 inodes
Group 60000: 2052KB 1 inodes
Project 1000: 2052KB 1 inodes
Restart...
Stopping clients: oleg451-client.virtnet /mnt/lustre (opts:)
Stopping client oleg451-client.virtnet /mnt/lustre opts:
Stopping clients: oleg451-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg451-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg451-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg451-server
Checking servers environments
Checking clients oleg451-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg451-server'
oleg451-server: oleg451-server.virtnet: executing load_modules_local
oleg451-server: Loading modules from /home/green/git/lustre-release/lustre
oleg451-server: detected 4 online CPUs by sysfs
oleg451-server: Force libcfs to create 2 CPU partitions
oleg451-server: libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory
Setup mgs, mdt, osts
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg451-server: oleg451-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg451-client: oleg451-server: ssh exited with exit code 1
Started lustre-OST0001
Starting client: oleg451-client.virtnet: -o user_xattr,flock oleg451-server@tcp:/lustre /mnt/lustre
Starting client oleg451-client.virtnet: -o user_xattr,flock oleg451-server@tcp:/lustre /mnt/lustre
Started clients oleg451-client.virtnet:
192.168.204.151@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800a89b5000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800a89b5000.idle_timeout=debug
affected facets:
Verify disk usage after restart
Append to the same file...
Verify space usage is increased
Delete files...
Wait for unlink objects finished...
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
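
For reference, the per-ID usage lines saved before the restart (e.g. "User 60000: 2052KB 1 inodes") can be queried by hand with the lfs quota command; a minimal sketch, assuming the same mount point, UID/GID and project ID as in the log above:

    # query space and inode usage accounted to the user, group and project IDs
    lfs quota -u 60000 /mnt/lustre     # user 60000
    lfs quota -g 60000 /mnt/lustre     # group 60000
    lfs quota -p 1000  /mnt/lustre     # project 1000

    # the project ID is attached to the test file, as shown in the log
    lfs project -p 1000 /mnt/lustre/d35.sanity-quota/f35.sanity-quota

Running the same three queries before stopping the servers and again after the remount should report identical usage, which is what "Verify disk usage after restart" checks here.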