-----============= acceptance-small: replay-ost-single ============----- Wed Apr 17 17:01:40 EDT 2024
excepting tests:
oleg356-client.virtnet: executing check_config_client /mnt/lustre
oleg356-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg356-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b5e6c000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b5e6c000.idle_timeout=debug
disable quota as required
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
osd-ldiskfs.track_declares_assert=1
/mnt/lustre/d0.replay-ost-single
stripe_count: 1 stripe_size: 1048576 pattern: raid0 stripe_offset: 0

== replay-ost-single test 0a: target handle mismatch (bug 5317) ========================================================== 17:01:50 (1713387710)
Stopping client oleg356-client.virtnet /mnt/lustre (opts:-f)
fail_loc=0x80000211
Starting client: oleg356-client.virtnet: -o user_xattr,flock oleg356-server@tcp:/lustre /mnt/lustre
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        1772     1285916   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        1612     1286076   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116        1388     3605632   1% /mnt/lustre[OST:0]
filesystem_summary:      3833116        1388     3605632   1% /mnt/lustre
PASS 0a (12s)

== replay-ost-single test 0b: empty replay =============== 17:02:02 (1713387722)
Failing ost1 on oleg356-server
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
oleg356-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
PASS 0b (20s)

== replay-ost-single test 1: touch ======================= 17:02:22 (1713387742)
Failing ost1 on oleg356-server
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
oleg356-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
/mnt/lustre/d0.replay-ost-single/f1.replay-ost-single has type file OK
PASS 1 (19s)

== replay-ost-single test 2: |x| 10 open(O_CREAT)s ======= 17:02:41 (1713387761)
Failing ost1 on oleg356-server
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
oleg356-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
PASS 2 (20s)

== replay-ost-single test 3: Fail OST during write, with verification ========================================================== 17:03:01 (1713387781)
Failing ost1 on oleg356-server
1280+0 records in
1280+0 records out
5242880 bytes (5.2 MB) copied, 0.133337 s, 39.3 MB/s
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
oleg356-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
PASS 3 (19s)

== replay-ost-single test 4: Fail OST during read, with verification ========================================================== 17:03:20 (1713387800)
1280+0 records in
1280+0 records out
5242880 bytes (5.2 MB) copied, 0.124799 s, 42.0 MB/s
Failing ost1 on oleg356-server
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
oleg356-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
PASS 4 (20s)

== replay-ost-single test 5: Fail OST during iozone ====== 17:03:40 (1713387820)
iozone bg pid=17763
+ iozone -i 0 -i 1 -+d -r 4 -s 1048576 -f /mnt/lustre/d0.replay-ost-single/f5.replay-ost-single
tmppipe=/tmp/replay-ost-single.test_5.pipe
iozone pid=17766
        Iozone: Performance Test of File I/O
                Version $Revision: 3.483 $
                Compiled for 64 bit mode.
                Build: linux-AMD64

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
                     Vangel Bojaxhi, Ben England, Vikentsi Lapa,
                     Alexey Skidanov, Sudhir Kumar.

        Run began: Wed Apr 17 17:03:40 2024

        >>> I/O Diagnostic mode enabled. <<<
        Performance measurements are invalid in this mode.
        Record Size 4 kB
        File size set to 1048576 kB
        Command line used: iozone -i 0 -i 1 -+d -r 4 -s 1048576 -f /mnt/lustre/d0.replay-ost-single/f5.replay-ost-single
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
Failing ost1 on oleg356-server
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
oleg356-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
         1048576       4    23358    37764   700023   689638
iozone test complete.
iozone rc=0
PASS 5 (88s)

== replay-ost-single test 6: Fail OST before obd_destroy ========================================================== 17:05:08 (1713387908)
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg356-server mds-ost sync done.
Waiting for MDT destroys to complete
1280+0 records in
1280+0 records out
5242880 bytes (5.2 MB) copied, 0.200962 s, 26.1 MB/s
/mnt/lustre/d0.replay-ost-single/f6.replay-ost-single
lmm_stripe_count:  1
lmm_stripe_size:   1048576
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
        obdidx           objid           objid           group
             0             194            0xc2               0
fail_loc=0x80000119
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg356-server mds-ost sync done.
before_free: 7663440 after_dd_free: 7658320
took 0 seconds
Failing ost1 on oleg356-server
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
oleg356-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
affected facets: ost1
oleg356-server: oleg356-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475
oleg356-server: *.lustre-OST0000.recovery_status status: COMPLETE
Can't lstat /mnt/lustre/d0.replay-ost-single/f6.replay-ost-single: No such file or directory
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg356-server mds-ost sync done.
Waiting for MDT destroys to complete
free_before: 7663440 free_after: 7663440
PASS 6 (47s)

== replay-ost-single test 7: Fail OST before obd_destroy ========================================================== 17:05:55 (1713387955)
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg356-server mds-ost sync done.
Waiting for MDT destroys to complete
1280+0 records in
1280+0 records out
5242880 bytes (5.2 MB) copied, 0.207138 s, 25.3 MB/s
before: 7663440 after_dd: 7658320
took 2 seconds
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        1772     1285916   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        1612     1286076   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116        6524     3595328   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116        1388     3605632   1% /mnt/lustre[OST:1]
filesystem_summary:      7666232        7912     7200960   1% /mnt/lustre
Failing ost1 on oleg356-server
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
oleg356-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
affected facets: ost1
oleg356-server: oleg356-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475
oleg356-server: *.lustre-OST0000.recovery_status status: COMPLETE
Can't lstat /mnt/lustre/d0.replay-ost-single/f7.replay-ost-single: No such file or directory
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg356-server mds-ost sync done.
Waiting for MDT destroys to complete
before: 7663440 after: 7663440
PASS 7 (41s)

== replay-ost-single test 8a: Verify redo io: redo io when get -EINPROGRESS error ========================================================== 17:06:36 (1713387996)
1280+0 records in
1280+0 records out
5242880 bytes (5.2 MB) copied, 0.0508841 s, 103 MB/s
fail_loc=0x230
fail_loc=0
1280+0 records in
1280+0 records out
5242880 bytes (5.2 MB) copied, 25.2147 s, 208 kB/s
PASS 8a (27s)

== replay-ost-single test 8b: Verify redo io: redo io should success after recovery ========================================================== 17:07:03 (1713388023)
1280+0 records in
1280+0 records out
5242880 bytes (5.2 MB) copied, 0.0461226 s, 114 MB/s
fail_loc=0x230
Failing ost1 on oleg356-server
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
oleg356-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
fail_loc=0
1280+0 records in
1280+0 records out
5242880 bytes (5.2 MB) copied, 45.2192 s, 116 kB/s
PASS 8b (47s)

== replay-ost-single test 8c: Verify redo io: redo io should fail after eviction ========================================================== 17:07:50 (1713388070)
1280+0 records in
1280+0 records out
5242880 bytes (5.2 MB) copied, 0.0626652 s, 83.7 MB/s
fail_loc=0x230
dd: error writing '/mnt/lustre/d0.replay-ost-single/f8c.replay-ost-single': Input/output error
1+0 records in
0+0 records out
0 bytes (0 B) copied, 21.8821 s, 0.0 kB/s
fail_loc=0
/tmp/verify-7447 /mnt/lustre/d0.replay-ost-single/f8c.replay-ost-single differ: byte 1, line 1
PASS 8c (44s)

== replay-ost-single test 8d: Verify redo creation on -EINPROGRESS ========================================================== 17:08:34 (1713388114)
fail_loc=0x187
fail_loc=0
  File: '/mnt/lustre/d0.replay-ost-single/f8d.replay-ost-single'
  Size: 0             Blocks: 0          IO Block: 4194304 regular empty file
Device: 2c54f966h/743766374d    Inode: 144115205306056727  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-04-17 17:08:35.000000000 -0400
Modify: 2024-04-17 17:08:35.000000000 -0400
Change: 2024-04-17 17:08:35.000000000 -0400
 Birth: -
fail_loc=0x187
fail_loc=0
Succeed in opening file "/mnt/lustre/d0.replay-ost-single/f8d.replay-ost-single"(flags=O_RDWR)
  File: '/mnt/lustre/d0.replay-ost-single/f8d.replay-ost-single'
  Size: 0             Blocks: 0          IO Block: 4194304 regular empty file
Device: 2c54f966h/743766374d    Inode: 144115205306056728  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-04-17 17:08:56.000000000 -0400
Modify: 2024-04-17 17:08:56.000000000 -0400
Change: 2024-04-17 17:08:56.000000000 -0400
 Birth: -
PASS 8d (44s)

== replay-ost-single test 8e: Verify that ptlrpc resends request on -EINPROGRESS ========================================================== 17:09:18 (1713388158)
fail_loc=0x231
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        1772     1285916   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        1612     1286076   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116        1404     3605616   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116        1388     3605632   1% /mnt/lustre[OST:1]
filesystem_summary:      7666232        2792     7211248   1% /mnt/lustre
PASS 8e (23s)

== replay-ost-single test 9: Verify that no req deadline happened during recovery ========================================================== 17:09:41 (1713388181)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0159148 s, 65.9 MB/s
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        1772     1285916   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        1612     1286076   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116        1404     3605616   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116        1388     3605632   1% /mnt/lustre[OST:1]
filesystem_summary:      7666232        2792     7211248   1% /mnt/lustre
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00936623 s, 112 MB/s
fail_loc=0x00000714
fail_val=20
Failing ost1 on oleg356-server
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
oleg356-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
fail_loc=0
PASS 9 (63s)

== replay-ost-single test 10: conflicting PW & PR locks on a client ========================================================== 17:10:44 (1713388244)
10+0 records in
10+0 records out
5120 bytes (5.1 kB) copied, 0.00344791 s, 1.5 MB/s
fail_val=60
fail_loc=0x414
Failing ost1 on oleg356-server
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
fail_loc=0x32a
  File: '/mnt/lustre/d0.replay-ost-single/f10.replay-ost-single'
  Size: 5120          Blocks: 0          IO Block: 4194304 regular file
Device: 2c54f966h/743766374d    Inode: 144115205306056732  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-04-17 17:10:44.000000000 -0400
Modify: 2024-04-17 17:10:44.000000000 -0400
Change: 2024-04-17 17:10:44.000000000 -0400
 Birth: -
PASS 10 (62s)

== replay-ost-single test 12a: glimpse after OST failover to a missing object ========================================================== 17:11:46 (1713388306)
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        1772     1285916   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        1612     1286076   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116        2436     3604584   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116        1388     3605632   1% /mnt/lustre[OST:1]
filesystem_summary:      7666232        3824     7210216   1% /mnt/lustre
total: 500 open/close in 1.43 seconds: 349.01 ops/second
total: 500 open/close in 1.25 seconds: 400.57 ops/second
total: 500 open/close in 1.62 seconds: 309.45 ops/second
total: 500 open/close in 1.18 seconds: 423.04 ops/second
total: 500 open/close in 0.94 seconds: 529.18 ops/second
total: 500 open/close in 0.87 seconds: 574.70 ops/second
total: 500 open/close in 1.13 seconds: 442.30 ops/second
total: 500 open/close in 0.82 seconds: 610.48 ops/second
total: 500 open/close in 1.00 seconds: 498.70 ops/second
total: 500 open/close in 1.52 seconds: 328.25 ops/second
Failing ost1 on oleg356-server
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
starting wait for ls -l
PASS 12a (55s)

== replay-ost-single test 12b: write after OST failover to a missing object ========================================================== 17:12:41 (1713388361)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d12b.replay-ost-single
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        2244     1285444   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        2060     1285628   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116        2436     3604584   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116        1388     3605632   1% /mnt/lustre[OST:1]
filesystem_summary:      7666232        3824     7210216   1% /mnt/lustre
total: 500 open/close in 1.02 seconds: 492.58 ops/second
total: 500 open/close in 1.54 seconds: 324.35 ops/second
total: 500 open/close in 1.59 seconds: 315.30 ops/second
total: 500 open/close in 1.01 seconds: 496.11 ops/second
total: 500 open/close in 1.31 seconds: 381.27 ops/second
total: 500 open/close in 0.98 seconds: 511.31 ops/second
total: 500 open/close in 0.99 seconds: 504.20 ops/second
total: 500 open/close in 0.92 seconds: 541.62 ops/second
total: 500 open/close in 0.85 seconds: 587.86 ops/second
total: 500 open/close in 0.87 seconds: 572.44 ops/second
fail_loc=0x16e
fail_val=10
Failing ost1 on oleg356-server
Stopping /mnt/lustre-ost1 (opts:) on oleg356-server
reboot facets: ost1
Failover ost1 to oleg356-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
oleg356-server: oleg356-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg356-client: oleg356-server: ssh exited with exit code 1
Started lustre-OST0000
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00317079 s, 1.3 MB/s
PASS 12b (63s)

== replay-ost-single test complete, duration 724 sec ===== 17:13:44 (1713388424)
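
Note on the pattern exercised above: most subtests arm a fault-injection point (fail_loc), start I/O, fail and restart ost1, wait for the client's OSC import to return to FULL, and then verify the data. The sketch below re-creates that flow for a test-8b-like scenario. It is a minimal illustration only, assuming a lustre/tests checkout with test-framework.sh and a configured test config (e.g. cfg/local.sh); the file path, dd parameters, and the facet on which fail_loc is armed are assumptions, not a copy of replay-ost-single.sh.

    #!/bin/bash
    # Sketch: fail_loc -> OST failover -> verify, as seen in the log above.
    LUSTRE=${LUSTRE:-$(cd "$(dirname "$0")/.." && pwd)}
    . "$LUSTRE/tests/test-framework.sh"        # provides fail(), do_facet(), $DIR, $LCTL
    init_test_env "$@"
    . "${CONFIG:=$LUSTRE/tests/cfg/$NAME.sh}"

    testfile=$DIR/d0.sketch/f.sketch           # hypothetical path, for illustration only
    mkdir -p "$(dirname "$testfile")"

    # Arm the same fail_loc value the log shows for tests 8a/8b (0x230);
    # arming it on ost1 is an assumption about where the fault belongs.
    do_facet ost1 "$LCTL set_param fail_loc=0x230"

    dd if=/dev/urandom of="$testfile" bs=4096 count=1280 &
    dd_pid=$!

    fail ost1                                  # test-framework helper: stop, fail over, remount ost1
    wait "$dd_pid"                             # the blocked write should complete after recovery

    do_facet ost1 "$LCTL set_param fail_loc=0"
    ls -l "$testfile"                          # basic verification that the file survived

In practice a single subtest is usually re-run through the suite itself rather than by hand, e.g. ONLY=8b bash replay-ost-single.sh from lustre/tests, using the standard ONLY filter of the test framework.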