[ 0.000000] Initializing cgroup subsys cpuset [ 0.000000] Initializing cgroup subsys cpu [ 0.000000] Initializing cgroup subsys cpuacct [ 0.000000] Linux version 3.10.0-7.9-debug (green@centos7-base) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) ) #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 0.000000] Command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0 [ 0.000000] e820: BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffcdfff] usable [ 0.000000] BIOS-e820: [mem 0x00000000bffce000-0x00000000bfffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000013edfffff] usable [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] SMBIOS 2.8 present. [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-1.fc38 04/01/2014 [ 0.000000] Hypervisor detected: KVM [ 0.000000] e820: last_pfn = 0x13ee00 max_arch_pfn = 0x400000000 [ 0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC [ 0.000000] e820: last_pfn = 0xbffce max_arch_pfn = 0x400000000 [ 0.000000] found SMP MP-table at [mem 0x000f5b30-0x000f5b3f] mapped at [ffffffffff200b30] [ 0.000000] Using GB pages for direct mapping [ 0.000000] RAMDISK: [mem 0xbc2e2000-0xbffbffff] [ 0.000000] Early table checksum verification disabled [ 0.000000] ACPI: RSDP 00000000000f5950 00014 (v00 BOCHS ) [ 0.000000] ACPI: RSDT 00000000bffe1bb7 00034 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: FACP 00000000bffe1a53 00074 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: DSDT 00000000bffe0040 01A13 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: FACS 00000000bffe0000 00040 [ 0.000000] ACPI: APIC 00000000bffe1ac7 00090 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: HPET 00000000bffe1b57 00038 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: WAET 00000000bffe1b8f 00028 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] No NUMA configuration found [ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000013edfffff] [ 0.000000] NODE_DATA(0) allocated [mem 0x13e5e3000-0x13e609fff] [ 0.000000] Reserving 128MB of memory at 768MB for crashkernel (System RAM: 4077MB) [ 0.000000] kvm-clock: cpu 0, msr 1:3e592001, primary cpu clock [ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00 [ 0.000000] kvm-clock: using sched offset of 309152047 cycles [ 0.000000] Zone ranges: [ 0.000000] DMA [mem 0x00001000-0x00ffffff] [ 0.000000] DMA32 [mem 0x01000000-0xffffffff] [ 0.000000] Normal [mem 0x100000000-0x13edfffff] [ 0.000000] Movable zone start for each node [ 0.000000] Early memory node ranges [ 0.000000] node 0: [mem 0x00001000-0x0009efff] [ 0.000000] node 0: [mem 0x00100000-0xbffcdfff] [ 0.000000] node 0: [mem 0x100000000-0x13edfffff] [ 0.000000] Initmem setup node 0 [mem 0x00001000-0x13edfffff] [ 0.000000] ACPI: PM-Timer IO Port: 0x608 [ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled) [ 
0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled) [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0]) [ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) [ 0.000000] Using ACPI (MADT) for SMP configuration information [ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs [ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff] [ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff] [ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff] [ 0.000000] PM: Registered nosave memory: [mem 0xbffce000-0xbfffffff] [ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff] [ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff] [ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff] [ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff] [ 0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices [ 0.000000] Booting paravirtualized kernel on KVM [ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 [ 0.000000] percpu: Embedded 38 pages/cpu s115176 r8192 d32280 u524288 [ 0.000000] KVM setup async PF for cpu 0 [ 0.000000] kvm-stealtime: cpu 0, msr 13e2135c0 [ 0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes) [ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1027487 [ 0.000000] Policy zone: Normal [ 0.000000] Kernel command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0 [ 0.000000] audit: disabled (until reboot) [ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes) [ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100 [ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form [ 0.000000] Memory: 3820268k/5224448k available (8172k kernel code, 1049168k absent, 355012k reserved, 5773k data, 2532k init) [ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 [ 0.000000] Hierarchical RCU implementation. [ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=4. [ 0.000000] Offload RCU callbacks from all CPUs [ 0.000000] Offload RCU callbacks from CPUs: 0-3. [ 0.000000] NR_IRQS:327936 nr_irqs:456 0 [ 0.000000] Console: colour *CGA 80x25 [ 0.000000] console [ttyS1] enabled [ 0.000000] allocated 25165824 bytes of page_cgroup [ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups [ 0.000000] kmemleak: Kernel memory leak detector disabled [ 0.000000] tsc: Detected 2399.998 MHz processor [ 0.359963] Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998) [ 0.361538] pid_max: default: 32768 minimum: 301 [ 0.362391] Security Framework initialized [ 0.363246] SELinux: Initializing. 
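The root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 argument in the command line above tells dracut to attach a network block device and use it as the root filesystem, the fields apparently being server : export name : fstype : mount options : nbd-client options (the later "dracut-initqueue[272]: bs=4096, sz=32212254720 bytes" and "EXT4-fs (nbd0): mounted filesystem" messages are the result). A hedged shell sketch of roughly what that handler does; /dev/nbd0 and /sysroot match what the log reports, but the exact commands are assumed rather than extracted from dracut:

    # approximate expansion of root=nbd:<server>:<export>:<fstype>:<mountopts>:<nbd-client opts>
    modprobe nbd
    nbd-client 192.168.200.253 -N centos7 /dev/nbd0 -p -b4096   # connect the "centos7" export
    mount -t ext4 -o ro /dev/nbd0 /sysroot                      # mount read-only as the new root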
[ 0.365247] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes) [ 0.368111] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes) [ 0.369914] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes) [ 0.371131] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes) [ 0.372664] Initializing cgroup subsys memory [ 0.373497] Initializing cgroup subsys devices [ 0.374335] Initializing cgroup subsys freezer [ 0.375154] Initializing cgroup subsys net_cls [ 0.375979] Initializing cgroup subsys blkio [ 0.376723] Initializing cgroup subsys perf_event [ 0.377593] Initializing cgroup subsys hugetlb [ 0.378432] Initializing cgroup subsys pids [ 0.379459] Initializing cgroup subsys net_prio [ 0.380438] x86/cpu: User Mode Instruction Prevention (UMIP) activated [ 0.382581] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 [ 0.383539] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0 [ 0.384482] tlb_flushall_shift: 6 [ 0.385188] FEATURE SPEC_CTRL Present [ 0.385839] FEATURE IBPB_SUPPORT Present [ 0.386531] Spectre V2 : Enabling Indirect Branch Prediction Barrier [ 0.388074] Spectre V2 : Vulnerable [ 0.388664] Speculative Store Bypass: Vulnerable [ 0.390224] debug: unmapping init [mem 0xffffffff82019000-0xffffffff8201ffff] [ 0.395888] ACPI: Core revision 20130517 [ 0.397971] ACPI: All ACPI Tables successfully acquired [ 0.399099] ftrace: allocating 30294 entries in 119 pages [ 0.444926] Enabling x2apic [ 0.445508] Enabled x2apic [ 0.446341] Switched APIC routing to physical x2apic. [ 0.448662] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 [ 0.449897] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (fam: 06, model: 3e, stepping: 04) [ 0.451902] Performance Events: IvyBridge events, full-width counters, Intel PMU driver. [ 0.453516] ... version: 2 [ 0.454247] ... bit width: 48 [ 0.455000] ... generic registers: 4 [ 0.455712] ... value mask: 0000ffffffffffff [ 0.456705] ... max period: 00007fffffffffff [ 0.457690] ... fixed-purpose events: 3 [ 0.458421] ... 
event mask: 000000070000000f [ 0.459450] KVM setup paravirtual spinlock [ 0.462037] smpboot: Booting Node 0, Processors #1[ 0.463201] kvm-clock: cpu 1, msr 1:3e592041, secondary cpu clock [ 0.466319] KVM setup async PF for cpu 1 [ 0.467131] kvm-stealtime: cpu 1, msr 13e2935c0 #2[ 0.468866] kvm-clock: cpu 2, msr 1:3e592081, secondary cpu clock [ 0.471288] KVM setup async PF for cpu 2 [ 0.472005] kvm-clock: cpu 3, msr 1:3e5920c1, secondary cpu clock #3 OK [ 0.473350] kvm-stealtime: cpu 2, msr 13e3135c0 [ 0.474961] Brought up 4 CPUs [ 0.474976] KVM setup async PF for cpu 3 [ 0.474982] kvm-stealtime: cpu 3, msr 13e3935c0 [ 0.477595] smpboot: Max logical packages: 1 [ 0.478531] smpboot: Total of 4 processors activated (19199.98 BogoMIPS) [ 0.481614] devtmpfs: initialized [ 0.482362] x86/mm: Memory block size: 128MB [ 0.485926] EVM: security.selinux [ 0.486610] EVM: security.ima [ 0.487228] EVM: security.capability [ 0.489577] atomic64 test passed for x86-64 platform with CX8 and with SSE [ 0.491147] NET: Registered protocol family 16 [ 0.492154] cpuidle: using governor haltpoll [ 0.493319] ACPI: bus type PCI registered [ 0.494195] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 [ 0.495647] PCI: Using configuration type 1 for base access [ 0.496876] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on [ 0.503743] ACPI: Added _OSI(Module Device) [ 0.504607] ACPI: Added _OSI(Processor Device) [ 0.505537] ACPI: Added _OSI(3.0 _SCP Extensions) [ 0.506539] ACPI: Added _OSI(Processor Aggregator Device) [ 0.507687] ACPI: Added _OSI(Linux-Dell-Video) [ 0.511396] ACPI: Interpreter enabled [ 0.512261] ACPI: (supports S0 S3 S4 S5) [ 0.513078] ACPI: Using IOAPIC for interrupt routing [ 0.514040] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 0.515938] ACPI: Enabled 2 GPEs in block 00 to 0F [ 0.521462] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) [ 0.522773] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI] [ 0.524180] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM [ 0.525478] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
[ 0.528275] acpiphp: Slot [2] registered [ 0.529143] acpiphp: Slot [3] registered [ 0.529880] acpiphp: Slot [4] registered [ 0.530593] acpiphp: Slot [5] registered [ 0.531282] acpiphp: Slot [6] registered [ 0.531996] acpiphp: Slot [7] registered [ 0.532715] acpiphp: Slot [8] registered [ 0.533448] acpiphp: Slot [9] registered [ 0.534148] acpiphp: Slot [10] registered [ 0.534905] acpiphp: Slot [11] registered [ 0.535678] acpiphp: Slot [12] registered [ 0.536565] acpiphp: Slot [13] registered [ 0.537424] acpiphp: Slot [14] registered [ 0.538335] acpiphp: Slot [15] registered [ 0.539221] acpiphp: Slot [16] registered [ 0.540161] acpiphp: Slot [17] registered [ 0.541054] acpiphp: Slot [18] registered [ 0.541851] acpiphp: Slot [19] registered [ 0.542880] acpiphp: Slot [20] registered [ 0.543797] acpiphp: Slot [21] registered [ 0.544660] acpiphp: Slot [22] registered [ 0.545370] acpiphp: Slot [23] registered [ 0.546124] acpiphp: Slot [24] registered [ 0.546865] acpiphp: Slot [25] registered [ 0.547625] acpiphp: Slot [26] registered [ 0.548419] acpiphp: Slot [27] registered [ 0.549167] acpiphp: Slot [28] registered [ 0.549878] acpiphp: Slot [29] registered [ 0.550672] acpiphp: Slot [30] registered [ 0.551428] acpiphp: Slot [31] registered [ 0.552183] PCI host bridge to bus 0000:00 [ 0.552887] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] [ 0.554025] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] [ 0.555231] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] [ 0.556518] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] [ 0.557905] pci_bus 0000:00: root bus resource [mem 0x140000000-0x1bfffffff window] [ 0.559239] pci_bus 0000:00: root bus resource [bus 00-ff] [ 0.567660] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] [ 0.568917] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] [ 0.570018] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] [ 0.571823] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] [ 0.573587] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI [ 0.574826] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB [ 0.698327] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11) [ 0.699639] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11) [ 0.700986] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11) [ 0.702307] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11) [ 0.703526] ACPI: PCI Interrupt Link [LNKS] (IRQs *9) [ 0.705849] vgaarb: loaded [ 0.706519] SCSI subsystem initialized [ 0.707301] ACPI: bus type USB registered [ 0.708147] usbcore: registered new interface driver usbfs [ 0.709178] usbcore: registered new interface driver hub [ 0.710225] usbcore: registered new device driver usb [ 0.711444] PCI: Using ACPI for IRQ routing [ 0.712724] NetLabel: Initializing [ 0.713415] NetLabel: domain hash size = 128 [ 0.714268] NetLabel: protocols = UNLABELED CIPSOv4 [ 0.715210] NetLabel: unlabeled traffic allowed by default [ 0.716413] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 [ 0.717573] hpet0: 3 comparators, 64-bit 100.000000 MHz counter [ 0.721899] amd_nb: Cannot enumerate AMD northbridges [ 0.722903] Switched to clocksource kvm-clock [ 0.735422] pnp: PnP ACPI init [ 0.736106] ACPI: bus type PNP registered [ 0.737605] pnp: PnP ACPI: found 6 devices [ 0.738559] ACPI: bus type PNP unregistered [ 0.748779] NET: Registered protocol family 2 [ 0.749997] TCP established hash table entries: 32768 (order: 6, 262144 bytes) [ 0.751712] TCP bind hash table entries: 32768 
(order: 8, 1048576 bytes) [ 0.754113] TCP: Hash tables configured (established 32768 bind 32768) [ 0.755423] TCP: reno registered [ 0.756384] UDP hash table entries: 2048 (order: 5, 196608 bytes) [ 0.758180] UDP-Lite hash table entries: 2048 (order: 5, 196608 bytes) [ 0.760048] NET: Registered protocol family 1 [ 0.762332] RPC: Registered named UNIX socket transport module. [ 0.763557] RPC: Registered udp transport module. [ 0.764454] RPC: Registered tcp transport module. [ 0.765473] RPC: Registered tcp NFSv4.1 backchannel transport module. [ 0.766798] pci 0000:00:00.0: Limiting direct PCI/PCI transfers [ 0.767977] pci 0000:00:01.0: PIIX3: Enabling Passive Release [ 0.769180] pci 0000:00:01.0: Activating ISA DMA hang workarounds [ 0.771237] Unpacking initramfs... [ 1.972680] debug: unmapping init [mem 0xffff8800bc2e2000-0xffff8800bffbffff] [ 1.975225] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 1.976468] software IO TLB [mem 0xb82e2000-0xbc2e2000] (64MB) mapped at [ffff8800b82e2000-ffff8800bc2e1fff] [ 1.978395] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer [ 1.980140] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules [ 1.981233] RAPL PMU: hw unit of domain package 2^-0 Joules [ 1.982278] RAPL PMU: hw unit of domain dram 2^-0 Joules [ 1.984788] cryptomgr_test (51) used greatest stack depth: 14128 bytes left [ 1.985189] futex hash table entries: 1024 (order: 4, 65536 bytes) [ 1.985235] Initialise system trusted keyring [ 2.016354] HugeTLB registered 1 GB page size, pre-allocated 0 pages [ 2.017651] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 2.022858] zpool: loaded [ 2.023847] zbud: loaded [ 2.024820] VFS: Disk quotas dquot_6.6.0 [ 2.025658] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 2.028119] NFS: Registering the id_resolver key type [ 2.029196] Key type id_resolver registered [ 2.030016] Key type id_legacy registered [ 2.030770] nfs4filelayout_init: NFSv4 File Layout Driver Registering... [ 2.032521] Key type big_key registered [ 2.034485] cryptomgr_test (57) used greatest stack depth: 14048 bytes left [ 2.037043] cryptomgr_test (58) used greatest stack depth: 13968 bytes left [ 2.039232] cryptomgr_test (60) used greatest stack depth: 13664 bytes left [ 2.039718] NET: Registered protocol family 38 [ 2.039742] Key type asymmetric registered [ 2.039746] Asymmetric key parser 'x509' registered [ 2.039895] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) [ 2.040005] io scheduler noop registered [ 2.040011] io scheduler deadline registered (default) [ 2.040101] io scheduler cfq registered [ 2.040107] io scheduler mq-deadline registered [ 2.040112] io scheduler kyber registered [ 2.043489] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 2.043502] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 2.062548] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [ 2.064825] ACPI: Power Button [PWRF] [ 2.066606] GHES: HEST is not enabled! 
[ 2.140855] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10 [ 2.215169] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 11 [ 2.323489] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11 [ 2.378443] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10 [ 2.489480] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 2.516934] 00:03: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 2.547328] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.550677] Non-volatile memory driver v1.3 [ 2.551612] Linux agpgart interface v0.103 [ 2.553512] crash memory driver: version 1.1 [ 2.555826] nbd: registered device at major 43 [ 2.568688] virtio_blk virtio1: [vda] 60784 512-byte logical blocks (31.1 MB/29.6 MiB) [ 2.578938] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB) [ 2.588906] virtio_blk virtio3: [vdc] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB) [ 2.600693] virtio_blk virtio4: [vdd] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB) [ 2.612085] virtio_blk virtio5: [vde] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB) [ 2.623480] virtio_blk virtio6: [vdf] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB) [ 2.627730] rdac: device handler registered [ 2.628805] hp_sw: device handler registered [ 2.629658] emc: device handler registered [ 2.630618] libphy: Fixed MDIO Bus: probed [ 2.633654] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 2.636103] ehci-pci: EHCI PCI platform driver [ 2.637149] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 2.638392] ohci-pci: OHCI PCI platform driver [ 2.639347] uhci_hcd: USB Universal Host Controller Interface driver [ 2.640774] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 [ 2.643398] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 2.644504] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 2.645865] mousedev: PS/2 mouse device common for all mice [ 2.647640] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 [ 2.648550] rtc_cmos 00:05: RTC can wake from S4 [ 2.649843] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0 [ 2.650426] rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs [ 2.656284] hidraw: raw HID events driver (C) Jiri Kosina [ 2.656598] usbcore: registered new interface driver usbhid [ 2.656599] usbhid: USB HID core driver [ 2.656675] drop_monitor: Initializing network drop monitor service [ 2.656753] Netfilter messages via NETLINK v0.30. [ 2.656847] TCP: cubic registered [ 2.656855] Initializing XFRM netlink socket [ 2.657221] NET: Registered protocol family 10 [ 2.658388] NET: Registered protocol family 17 [ 2.658461] Key type dns_resolver registered [ 2.658965] mce: Using 10 MCE banks [ 2.659284] Loading compiled-in X.509 certificates [ 2.660579] Loaded X.509 cert 'Magrathea: Glacier signing key: e34d0e1b7fcf5b414cce75d36d8482945c781ed6' [ 2.660618] registered taskstats version 1 [ 2.663640] modprobe (71) used greatest stack depth: 13456 bytes left [ 2.665778] modprobe (73) used greatest stack depth: 13376 bytes left [ 2.666216] Key type trusted registered [ 2.670938] Key type encrypted registered [ 2.670987] IMA: No TPM chip found, activating TPM-bypass! (rc=-19) [ 2.672952] BERT: Boot Error Record Table support is disabled. Enable it by using bert_enable as kernel parameter. 
[ 2.673643] rtc_cmos 00:05: setting system clock to 2024-04-17 21:00:10 UTC (1713387610) [ 2.702749] debug: unmapping init [mem 0xffffffff81da0000-0xffffffff82018fff] [ 2.705528] Write protecting the kernel read-only data: 12288k [ 2.708103] debug: unmapping init [mem 0xffff8800017fe000-0xffff8800017fffff] [ 2.710814] debug: unmapping init [mem 0xffff880001b9b000-0xffff880001bfffff] [ 2.720493] random: systemd: uninitialized urandom read (16 bytes read) [ 2.724070] random: systemd: uninitialized urandom read (16 bytes read) [ 2.726428] random: systemd: uninitialized urandom read (16 bytes read) [ 2.730839] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) [ 2.737284] systemd[1]: Detected virtualization kvm. [ 2.739061] systemd[1]: Detected architecture x86-64. [ 2.740764] systemd[1]: Running in initial RAM disk. Welcome to CentOS Linux 7 (Core) dracut-033-572.el7 (Initramfs)! [ 2.745683] systemd[1]: No hostname configured. [ 2.747279] systemd[1]: Set hostname to . [ 2.749251] random: systemd: uninitialized urandom read (16 bytes read) [ 2.751371] systemd[1]: Initializing machine ID from random generator. [ 2.801274] dracut-rootfs-g (86) used greatest stack depth: 13264 bytes left [ 2.804132] random: systemd: uninitialized urandom read (16 bytes read) [ 2.805933] random: systemd: uninitialized urandom read (16 bytes read) [ 2.807688] random: systemd: uninitialized urandom read (16 bytes read) [ 2.809225] random: systemd: uninitialized urandom read (16 bytes read) [ 2.811764] random: systemd: uninitialized urandom read (16 bytes read) [ 2.813190] random: systemd: uninitialized urandom read (16 bytes read) [ 2.821419] systemd[1]: Reached target Local File Systems. [ OK ] Reached target Local File Systems. [ 2.825072] systemd[1]: Reached target Timers. [ OK ] Reached target Timers. [ 2.828322] systemd[1]: Reached target Swap. [ OK ] Reached target Swap. [ 2.831413] systemd[1]: Created slice Root Slice. [ OK ] Created slice Root Slice. [ 2.835012] systemd[1]: Created slice System Slice. [ OK ] Created slice System Slice. [ 2.837839] systemd[1]: Reached target Slices. [ OK ] Reached target Slices. [ 2.841092] systemd[1]: Listening on Journal Socket. [ OK ] Listening on Journal Socket. [ 2.845515] systemd[1]: Starting Journal Service... Starting Journal Service... [ 2.849329] systemd[1]: Starting Create list of required static device nodes for the current kernel... Starting Create list of required st... nodes for the current kernel... [ 2.857231] systemd[1]: Starting Setup Virtual Console... Starting Setup Virtual Console... [ 2.863641] systemd[1]: Starting Load Kernel Modules... Starting Load Kernel Modules... [ 2.869400] systemd[1]: Starting dracut cmdline hook... Starting dracut cmdline hook... [ 2.873552] systemd[1]: Listening on udev Control Socket. [ OK ] Listening on udev Control Socket. [ 2.877169] systemd[1]: Listening on udev Kernel Socket. [ OK ] Listening on udev Kernel Socket. [ 2.881274] systemd[1]: Reached target Sockets. [ OK ] Reached target Sockets. [ 2.885691] systemd[1]: Started Journal Service. [ OK ] Started Journal Service. [ OK ] Started Create list of required sta...ce nodes for the current kernel. [ OK ] Started Setup Virtual Console. [ OK ] Started Load Kernel Modules. Starting Apply Kernel Variables... Starting Create Static Device Nodes in /dev... [ OK ] Started Apply Kernel Variables. 
[ OK ] Started Create Static Device Nodes in /dev. [ 2.985975] tsc: Refined TSC clocksource calibration: 2399.955 MHz [ OK ] Started dracut cmdline hook. Starting dracut pre-udev hook...[ 3.087484] random: fast init done [ OK ] Started dracut pre-udev hook. Starting udev Kernel Device Manager... [ OK ] Started udev Kernel Device Manager. Starting dracut pre-trigger hook... [ OK ] Started dracut pre-trigger hook. Starting udev Coldplug all Devices... Mounting Configuration File System... [ OK ] Mounted Configuration File System. [ OK ] Started udev Coldplug all Devices. Starting dracut initqueue hook... Starting Show Plymouth Boot Screen... [ 3.386931] scsi host0: ata_piix [ 3.389122] scsi host1: ata_piix [ 3.391248] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc320 irq 14 [ 3.393724] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc328 irq 15 [ OK ] Reached target System Initialization. [ OK ] Started Show Plymouth Boot Screen. [ OK ] Started Forward Password Requests to Plymouth Directory Watch. [ OK ] Reached target Paths. [ OK ] Reached target Basic System. [ 3.474878] ip (320) used greatest stack depth: 13080 bytes left [ 3.517630] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready [ 3.519143] ip (343) used greatest stack depth: 12464 bytes left [ 3.522096] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 [ 3.633930] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 5.752985] dracut-initqueue[272]: RTNETLINK answers: File exists [ 5.918590] dracut-initqueue[272]: bs=4096, sz=32212254720 bytes [ OK ] Started dracut initqueue hook. [ OK ] Reached target Initrd Root File System. Starting Reload Configuration from the Real Root... Mounting /sysroot... [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. [ 6.477467] EXT4-fs (nbd0): mounted filesystem with ordered data mode. Opts: (null) [ OK ] Mounted /sysroot. [ OK ] Started Reload Configuration from the Real Root. [ OK ] Reached target Initrd File Systems. [ OK ] Reached target Initrd Default Target. Starting dracut pre-pivot and cleanup hook... [ OK ] Started dracut pre-pivot and cleanup hook. Starting Cleaning Up and Shutting Down Daemons... Starting Plymouth switch root service... [ OK ] Stopped target Timers. [ OK ] Stopped dracut pre-pivot and cleanup hook. [ OK ] Stopped target Initrd Default Target. [ OK ] Stopped target Basic System. [ OK ] Stopped target Slices. [ OK ] Stopped target Sockets. [ OK ] Stopped target Paths. [ OK ] Stopped target System Initialization. [ OK ] Stopped Apply Kernel Variables. [ OK ] Stopped target Local File Systems. [ OK ] Stopped Load Kernel Modules. [ OK ] Stopped target Swap. [ OK ] Stopped target Remote File Systems. [ OK ] Stopped target Remote File Systems (Pre). [ OK ] Stopped dracut initqueue hook. [ OK ] Stopped udev Coldplug all Devices. [ OK ] Stopped dracut pre-trigger hook. Stopping udev Kernel Device Manager... [ OK ] Stopped udev Kernel Device Manager. [ OK ] Started Cleaning Up and Shutting Down Daemons. [ OK ] Stopped dracut pre-udev hook. [ OK ] Stopped dracut cmdline hook. [ OK ] Stopped Create Static Device Nodes in /dev. [ OK ] Stopped Create list of required sta...ce nodes for the current kernel. [ OK ] Closed udev Control Socket. [ OK ] Closed udev Kernel Socket. Starting Cleanup udevd DB... [ OK ] Started Cleanup udevd DB. [ OK ] Started Plymouth switch root service. [ OK ] Reached target Switch Root. Starting Switch Root...
[ 6.853248] systemd-journald[100]: Received SIGTERM from PID 1 (systemd). [ 7.057146] SELinux: Disabled at runtime. [ 7.121998] ip_tables: (C) 2000-2006 Netfilter Core Team [ 7.125031] systemd[1]: Inserted module 'ip_tables' Welcome to CentOS Linux 7 (Core)! [ OK ] Stopped Switch Root. [ OK ] Stopped Journal Service. Starting Journal Service... [ OK ] Listening on Delayed Shutdown Socket. [ OK ] Created slice system-getty.slice. [ OK ] Stopped target Switch Root. [ OK ] Stopped target Initrd Root File System. [ OK ] Stopped target Initrd File Systems. [ OK ] Created slice system-serial\x2dgetty.slice. [ OK ] Set up automount Arbitrary Executab...ats File System Automount Point. Mounting POSIX Message Queue File System... [ OK ] Created slice system-selinux\x2dpol...grate\x2dlocal\x2dchanges.slice. [ OK ] Listening on udev Kernel Socket. [ OK ] Listening on /dev/initctl Compatibility Named Pipe. [ OK ] Reached target rpc_pipefs.target. [ OK ] Reached target Local Encrypted Volumes. Starting Read and set NIS domainname from /etc/sysconfig/network... Mounting Huge Pages File System... [ OK ] Created slice User and Session Slice. [ OK ] Reached target Slices. Starting Remount Root and Kernel File Systems... Starting Set Up Additional Binary Formats... Starting Load Kernel Modules... [ OK ] Listening on udev Control Socket. Starting udev Coldplug all Devices... Starting Create list of required st... nodes for the current kernel... Mounting Debug File System... [ OK ] Started Forward Password Requests to Wall Directory Watch. [ OK ] Mounted Huge Pages File System. [ OK ] Mounted POSIX Message Queue File System. [ OK ] Started Journal Service. [ OK ] Started Read and set NIS domainname from /etc/sysconfig/network. [ OK ] Started Load Kernel Modules. [ OK ] Started Create list of required sta...ce nodes for the current kernel. [ OK ] Mounted Debug File System. Mounting Arbitrary Executable File Formats File System... Starting Create Static Device Nodes in /dev... Starting Apply Kernel Variables... [ OK ] Mounted Arbitrary Executable File Formats File System. [ OK ] Started Apply Kernel Variables. [FAILED] Failed to start Remount Root and Kernel File Systems. See 'systemctl status systemd-remount-fs.service' for details. Starting Flush Journal to Persistent Storage... Starting Configure read-only root support... [ OK ] Started Set Up Additional Binary Formats. [ OK ] Started Create Static Device Nodes in /dev. Starting udev Kernel Device Manager... [ OK ] Reached target Local File Systems (Pre). Mounting /mnt... [ OK ] Mounted /mnt. [ 7.475761] systemd-journald[564]: Received request to flush runtime journal from PID 1 [ OK ] Started Flush Journal to Persistent Storage. [ OK ] Started udev Coldplug all Devices. [ OK ] Started udev Kernel Device Manager. [ 7.598052] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 [ 7.634843] input: PC Speaker as /devices/platform/pcspkr/input/input3 [ OK ] Found device /dev/ttyS1. [ OK ] Found device /dev/ttyS0. [ OK ] Found device /dev/disk/by-label/SWAP. Activating swap /dev/disk/by-label/SWAP... [ 7.666856] cryptd: max_cpu_qlen set to 1000 [ OK ] Found device /dev/vda. [ 7.689172] Adding 1048572k swap on /dev/vdb. Priority:-2 extents:1 across:1048572k FS [ 7.701408] AVX version of gcm_enc/dec engaged. [ 7.702772] AES CTR mode by8 optimization enabled Mounting /home/green/git/lustre-release... [ OK ] Activated swap /dev/disk/by-label/SWAP. [ OK ] Reached target Swap. 
[ 7.745186] squashfs: version 4.0 (2009/01/31) Phillip Lougher [ OK ] Mounted /home/green/git/lustre-release. [ 7.757297] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni) [ 7.759393] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni) [ 7.855261] EDAC MC: Ver: 3.0.0 [ 7.861603] EDAC sbridge: Ver: 1.1.2 [ 9.534207] mount.nfs (766) used greatest stack depth: 10704 bytes left [ OK ] Started Configure read-only root support. [ OK ] Reached target Local File Systems. Starting Preprocess NFS configuration... Starting Rebuild Journal Catalog... Starting Mark the need to relabel after reboot... Starting Create Volatile Files and Directories... Starting Tell Plymouth To Write Out Runtime Data... Starting Load/Save Random Seed... [ OK ] Started Preprocess NFS configuration. [ OK ] Started Mark the need to relabel after reboot. [FAILED] Failed to start Create Volatile Files and Directories. See 'systemctl status systemd-tmpfiles-setup.service' for details. [ OK ] Started Load/Save Random Seed. [FAILED] Failed to start Rebuild Journal Catalog. See 'systemctl status systemd-journal-catalog-update.service' for details. Starting Update is Completed... Starting Update UTMP about System Boot/Shutdown... [ OK ] Started Update is Completed. [ OK ] Started Tell Plymouth To Write Out Runtime Data. [ OK ] Started Update UTMP about System Boot/Shutdown. [ OK ] Reached target System Initialization. [ OK ] Started Flexible branding. [ OK ] Reached target Paths. [ OK ] Listening on RPCbind Server Activation Socket. [ OK ] Listening on D-Bus System Message Bus Socket. [ OK ] Reached target Sockets. [ OK ] Reached target Basic System. Starting Dump dmesg to /var/log/dmesg... [ OK ] Started D-Bus System Message Bus. Starting GSSAPI Proxy Daemon... Starting Login Service... Starting Network Manager... [ OK ] Started Daily Cleanup of Temporary Directories. [ OK ] Reached target Timers. [ OK ] Started Dump dmesg to /var/log/dmesg. [ OK ] Started Login Service. [ OK ] Started GSSAPI Proxy Daemon. [ OK ] Reached target NFS client services. [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. Starting Permit User Sessions... [ OK ] Started Permit User Sessions. [ OK ] Started Network Manager. [ OK ] Reached target Network. Starting /etc/rc.d/rc.local Compatibility... Starting OpenSSH server daemon... Starting Network Manager Wait Online... Starting Hostname Service... [ OK ] Started OpenSSH server daemon. [ OK ] Started /etc/rc.d/rc.local Compatibility. [ OK ] Started Hostname Service. Starting Network Manager Script Dispatcher Service... Starting Wait for Plymouth Boot Screen to Quit... Starting Terminate Plymouth Boot Screen... [ OK ] Started Network Manager Script Dispatcher Service. CentOS Linux 7 (Core) Kernel 3.10.0-7.9-debug on an x86_64 oleg428-server login: [ 22.109505] libcfs: loading out-of-tree module taints kernel.
[ 22.111576] libcfs: module verification failed: signature and/or required key missing - tainting kernel [ 22.128970] LNet: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1 [ 22.134624] alg: No test for adler32 (adler32-zlib) [ 22.906802] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_hostid [ 27.918838] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing load_modules_local [ 28.208432] Lustre: Lustre: Build Version: 2.15.4_18_gdb82e3b [ 28.364699] LNet: Added LNI 192.168.204.128@tcp [8/256/0/180] [ 28.365935] LNet: Accept secure, port 988 [ 29.909992] Key type lgssc registered [ 30.164830] Lustre: Echo OBD driver; http://www.lustre.org/ [ 30.566723] icp: module license 'CDDL' taints kernel. [ 30.568342] Disabling lock debugging due to kernel taint [ 33.069997] ZFS: Loaded module v0.8.6-1, ZFS pool version 5000, ZFS filesystem version 5 [ 34.986672] vdc: vdc1 vdc9 [ 37.205389] vde: vde1 vde9 [ 39.753571] vdf: vdf1 vdf9 [ 43.586158] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing load_modules_local [ 46.284069] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 46.343955] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space: rc = -61 [ 46.380533] Lustre: lustre-MDT0000: new disk, initializing [ 46.431489] random: crng init done [ 46.452306] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 46.471340] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 46.532312] mount.lustre (6665) used greatest stack depth: 10112 bytes left [ 47.391527] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 50.559700] Lustre: lustre-OST0000: new disk, initializing [ 50.561607] Lustre: srv-lustre-OST0000: No data found on store. Initialize space: rc = -61 [ 50.563891] Lustre: Skipped 1 previous similar message [ 50.577339] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 51.520927] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 54.838906] Lustre: lustre-OST0001: new disk, initializing [ 54.840984] Lustre: srv-lustre-OST0001: No data found on store. 
Initialize space: rc = -61 [ 54.858309] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180 [ 55.742328] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 60.834382] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 68.191683] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 73.939840] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing check_logdir /tmp/testlogs/ [ 74.895297] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing yml_node [ 75.794990] Lustre: DEBUG MARKER: Client: 2.15.4.18 [ 76.371451] Lustre: DEBUG MARKER: MDS: 2.15.4.18 [ 76.910909] Lustre: DEBUG MARKER: OSS: 2.15.4.18 [ 77.269857] Lustre: DEBUG MARKER: -----============= acceptance-small: replay-single ============----- Wed Apr 17 17:01:25 EDT 2024 [ 78.641852] Lustre: DEBUG MARKER: excepting tests: 110f 131b 59 [ 80.425604] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing check_config_client /mnt/lustre [ 85.602081] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 86.369717] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 87.040911] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 87.422388] Lustre: DEBUG MARKER: == replay-single test 0a: empty replay =================== 17:01:35 (1713387695) [ 88.119733] LustreError: 14672:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 88.339794] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 88.828613] Lustre: Failing over lustre-MDT0000 [ 88.859507] LustreError: 14865:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 88.926087] Lustre: server umount lustre-MDT0000 complete [ 99.853935] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713387700/real 1713387700] req@ffff880131f92140 x1796617153822336/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713387707 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:1.0' [ 99.861976] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 99.861977] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 99.868879] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 2 previous similar messages [ 104.861993] Lustre: 2850:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713387705/real 1713387705] req@ffff880131f92ac0 x1796617153822464/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713387712 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:1.0' [ 104.867940] Lustre: 2850:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 105.872473] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a52286 to 0x975df49111a526d8 [ 105.875864] Lustre: MGC192.168.204.128@tcp: Connection restored to (at 0@lo) [ 105.979681] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 106.758220] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 106.773008] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 
recovered and 0 were evicted. [ 106.850313] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 106.958012] Lustre: 2853:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713387707/real 1713387707] req@ffff88012f875f00 x1796617153822720/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713387714 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:1.0' [ 109.871658] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 110.238657] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 112.861304] Lustre: DEBUG MARKER: == replay-single test 0b: ensure object created after recover exists. (3284) ========================================================== 17:02:00 (1713387720) [ 113.371976] Lustre: Failing over lustre-OST0000 [ 113.375490] LustreError: 16959:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 113.379143] LustreError: 16959:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 2 previous similar messages [ 113.390381] Lustre: server umount lustre-OST0000 complete [ 115.974229] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 115.978369] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 115.984333] Lustre: Skipped 1 previous similar message [ 115.986846] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 116.787487] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 192.168.204.28@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 116.987139] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo) [ 121.791198] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 192.168.204.28@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 121.796153] LustreError: Skipped 1 previous similar message [ 125.206605] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 126.115007] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 126.523804] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 126.623849] Lustre: lustre-OST0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. 
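Each "executing wait_import_state_mount (FULL|IDLE)" marker in this log corresponds to the harness polling a client-side import until it reports FULL (or IDLE) again after the failover; the "in FULL state after 0 sec" lines are the result. A minimal sketch of that check, assuming the standard lctl get_param interface (the real helper lives in Lustre's test-framework.sh):

    # poll the MDC import named in the marker above until recovery completes (loop details assumed)
    until lctl get_param -n mdc.lustre-MDT0000-mdc-*.import | grep -qE 'state: *(FULL|IDLE)'; do
        sleep 1
    done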
[ 126.627963] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.204.128@tcp (at 0@lo) [ 126.632440] Lustre: lustre-OST0000: deleting orphan objects from 0x0:34 to 0x0:65 [ 129.151411] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 129.504762] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 132.127847] Lustre: DEBUG MARKER: == replay-single test 0c: check replay-barrier =========== 17:02:19 (1713387739) [ 132.842425] LustreError: 18951:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 133.076509] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 133.594257] Lustre: Failing over lustre-MDT0000 [ 133.632403] LustreError: 19164:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 133.704960] Lustre: server umount lustre-MDT0000 complete [ 142.222027] Lustre: 2853:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713387743/real 1713387743] req@ffff880131f93440 x1796617153831616/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713387750 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:2.0' [ 142.230057] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 142.230361] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 142.235950] Lustre: 2853:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 2 previous similar messages [ 147.230050] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713387748/real 1713387748] req@ffff880131f92140 x1796617153831808/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713387755 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:2.0' [ 147.236833] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 148.238486] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a526d8 to 0x975df49111a52fa6 [ 148.242208] Lustre: MGC192.168.204.128@tcp: Connection restored to 192.168.204.128@tcp (at 0@lo) [ 148.393620] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 148.410440] mount.lustre (19869) used greatest stack depth: 9936 bytes left [ 149.302241] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 149.852308] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 149.854478] Lustre: lustre-MDT0000: Denying connection for new client 6f723ed8-1524-4908-979b-261657ea1843 (at 192.168.204.28@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 154.863093] Lustre: lustre-MDT0000: Denying connection for new client 6f723ed8-1524-4908-979b-261657ea1843 (at 192.168.204.28@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:54 [ 159.367270] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo) [ 159.871049] Lustre: lustre-MDT0000: Denying connection for new client 6f723ed8-1524-4908-979b-261657ea1843 (at 192.168.204.28@tcp), waiting for 1 known 
clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:49 [ 164.879108] Lustre: lustre-MDT0000: Denying connection for new client 6f723ed8-1524-4908-979b-261657ea1843 (at 192.168.204.28@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:44 [ 169.887093] Lustre: lustre-MDT0000: Denying connection for new client 6f723ed8-1524-4908-979b-261657ea1843 (at 192.168.204.28@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:39 [ 179.903021] Lustre: lustre-MDT0000: Denying connection for new client 6f723ed8-1524-4908-979b-261657ea1843 (at 192.168.204.28@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:29 [ 179.907334] Lustre: Skipped 1 previous similar message [ 199.935288] Lustre: lustre-MDT0000: Denying connection for new client 6f723ed8-1524-4908-979b-261657ea1843 (at 192.168.204.28@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:09 [ 199.939385] Lustre: Skipped 3 previous similar messages [ 209.471006] Lustre: lustre-MDT0000: recovery is timed out, evict stale exports [ 209.474212] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 209.483060] Lustre: lustre-MDT0000: Recovery over after 1:00, of 1 clients 0 recovered and 1 was evicted. [ 209.500351] Lustre: lustre-OST0001: deleting orphan objects from 0x0:44 to 0x0:65 [ 209.500353] Lustre: lustre-OST0000: deleting orphan objects from 0x0:76 to 0x0:97 [ 212.546200] Lustre: DEBUG MARKER: == replay-single test 0d: expired recovery with no clients ========================================================== 17:03:40 (1713387820) [ 213.251474] LustreError: 21147:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 213.473474] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 213.971251] Lustre: Failing over lustre-MDT0000 [ 214.001626] LustreError: 21489:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 214.003916] LustreError: 21489:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 6 previous similar messages [ 214.079114] Lustre: server umount lustre-MDT0000 complete [ 221.454054] Lustre: 2853:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713387822/real 1713387822] req@ffff880094730980 x1796617153842560/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713387829 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:1.0' [ 221.461224] Lustre: 2853:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 221.461987] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 221.461988] Lustre: Skipped 1 previous similar message [ 225.876731] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 225.881177] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a52fa6 to 0x975df49111a533b2 [ 225.884507] Lustre: MGC192.168.204.128@tcp: Connection restored to (at 0@lo) [ 225.887503] Lustre: Skipped 1 previous similar message [ 226.016916] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 226.883035] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck 
all 8 [ 227.440064] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 232.447120] Lustre: lustre-MDT0000: Denying connection for new client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:54 [ 232.452496] Lustre: Skipped 2 previous similar messages [ 286.471015] Lustre: lustre-MDT0000: recovery is timed out, evict stale exports [ 286.474020] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 286.483130] Lustre: lustre-MDT0000: Recovery over after 1:00, of 1 clients 0 recovered and 1 was evicted. [ 286.501415] Lustre: lustre-OST0000: deleting orphan objects from 0x0:76 to 0x0:129 [ 286.501798] Lustre: lustre-OST0001: deleting orphan objects from 0x0:44 to 0x0:97 [ 290.179149] Lustre: DEBUG MARKER: == replay-single test 1: simple create =================== 17:04:57 (1713387897) [ 290.950681] LustreError: 23436:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 291.184618] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 291.683205] Lustre: Failing over lustre-MDT0000 [ 291.720550] LustreError: 23654:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 291.723992] LustreError: 23654:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 4 previous similar messages [ 291.793192] Lustre: server umount lustre-MDT0000 complete [ 300.110014] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713387900/real 1713387900] req@ffff880131f96880 x1796617153852480/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713387907 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:2.0' [ 300.117304] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 300.117971] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 300.117972] Lustre: Skipped 1 previous similar message [ 300.118007] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 306.128439] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a533b2 to 0x975df49111a537b7 [ 306.131324] Lustre: MGC192.168.204.128@tcp: Connection restored to (at 0@lo) [ 306.134287] Lustre: Skipped 2 previous similar messages [ 306.241042] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 307.070407] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 316.591146] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 316.609248] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
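The repeated "executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8" markers record the harness resetting the Lustre debug masks on every newly mounted target. A sketch of the equivalent lctl calls, assuming the trailing "all 8" are the subsystem mask and the debug buffer size in MB:

    lctl set_param debug='vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck'
    lctl set_param subsystem_debug=all
    lctl set_param debug_mb=8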
[ 316.623439] Lustre: lustre-OST0001: deleting orphan objects from 0x0:44 to 0x0:129 [ 316.623444] Lustre: lustre-OST0000: deleting orphan objects from 0x0:76 to 0x0:161 [ 317.928461] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 318.273582] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 320.877820] Lustre: DEBUG MARKER: == replay-single test 2a: touch ========================== 17:05:28 (1713387928) [ 321.592381] LustreError: 25958:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 321.827162] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 322.323370] Lustre: Failing over lustre-MDT0000 [ 322.354830] LustreError: 26179:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 322.356822] LustreError: 26179:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 3 previous similar messages [ 322.430518] Lustre: server umount lustre-MDT0000 complete [ 333.261933] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713387934/real 1713387934] req@ffff8800a7a017c0 x1796617153858560/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713387941 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:2.0' [ 333.270417] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 2 previous similar messages [ 333.270477] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 333.270662] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 333.270663] Lustre: Skipped 1 previous similar message [ 339.285572] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a537b7 to 0x975df49111a53c1e [ 339.288470] Lustre: MGC192.168.204.128@tcp: Connection restored to (at 0@lo) [ 339.290220] Lustre: Skipped 2 previous similar messages [ 339.631937] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 339.664880] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 339.684406] Lustre: lustre-OST0001: deleting orphan objects from 0x0:44 to 0x0:161 [ 339.684407] Lustre: lustre-OST0000: deleting orphan objects from 0x0:163 to 0x0:193 [ 340.306834] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 343.331608] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 343.729942] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 346.443620] Lustre: DEBUG MARKER: == replay-single test 2b: touch ========================== 17:05:54 (1713387954) [ 347.226628] LustreError: 28374:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 347.450205] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 347.965673] Lustre: Failing over lustre-MDT0000 [ 347.999458] LustreError: 28591:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 348.002028] LustreError: 28591:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 4 previous similar messages [ 348.077913] Lustre: server umount lustre-MDT0000 complete [ 356.414137] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 362.420819] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a53c1e to 0x975df49111a540af [ 363.468854] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 372.695704] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 372.736787] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 372.769635] Lustre: lustre-OST0000: deleting orphan objects from 0x0:195 to 0x0:225 [ 372.769637] Lustre: lustre-OST0001: deleting orphan objects from 0x0:44 to 0x0:193 [ 374.318257] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 374.742018] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 377.787539] Lustre: DEBUG MARKER: == replay-single test 2c: setstripe replay =============== 17:06:25 (1713387985) [ 378.682863] LustreError: 30931:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 378.914490] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 388.606075] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 388.609994] Lustre: Skipped 1 previous similar message [ 389.611049] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 395.617438] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a540af to 0x975df49111a54547 [ 395.736269] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 395.738050] Lustre: Skipped 2 previous similar messages [ 396.597911] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 405.735989] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo) [ 405.739317] Lustre: Skipped 4 previous similar messages [ 405.759725] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 405.804220] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 405.823339] Lustre: lustre-OST0000: deleting orphan objects from 0x0:227 to 0x0:257 [ 405.823341] Lustre: lustre-OST0001: deleting orphan objects from 0x0:195 to 0x0:225 [ 407.423015] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 407.846643] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 410.929722] Lustre: DEBUG MARKER: == replay-single test 2d: setdirstripe replay ============ 17:06:58 (1713388018) [ 411.799870] LustreError: 1065:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 412.075280] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 412.559522] Lustre: Failing over lustre-MDT0000 [ 412.561354] Lustre: Skipped 1 previous similar message [ 412.594498] LustreError: 1274:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 412.596814] LustreError: 1274:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 9 previous similar messages [ 412.679733] Lustre: server umount lustre-MDT0000 complete [ 412.680928] Lustre: Skipped 1 previous similar message [ 422.750046] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713388023/real 1713388023] req@ffff88009c8abdc0 x1796617153876480/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713388030 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:1.0' [ 422.755757] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 10 previous similar messages [ 422.757670] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 422.757957] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 422.765178] Lustre: Skipped 2 previous similar messages [ 428.767061] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a54547 to 0x975df49111a549bc [ 428.794167] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 192.168.204.28@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 428.800602] LustreError: Skipped 1 previous similar message [ 429.889756] Lustre: lustre-OST0001: deleting orphan objects from 0x0:195 to 0x0:257 [ 429.889759] Lustre: lustre-OST0000: deleting orphan objects from 0x0:227 to 0x0:289 [ 429.889798] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 433.298348] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 433.818787] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 437.177827] Lustre: DEBUG MARKER: == replay-single test 2e: O_CREAT|O_EXCL create replay === 17:07:24 (1713388044) [ 437.514187] Lustre: *** cfs_fail_loc=13b, val=315*** [ 437.516844] Lustre: *** cfs_fail_loc=13b, val=2147483648*** [ 437.519201] LustreError: 2004:0:(ldlm_lib.c:3225:target_send_reply_msg()) @@@ dropping reply req@ffff880094735580 x1796617149659904/t38654705666(0) o35->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:122/0 lens 392/456 e 0 to 0 dl 1713388062 ref 1 fl Interpret:/0/0 rc 0/0 job:'openfile.0' [ 439.507412] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 459.108407] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 467.863484] Lustre: 4581:0:(mdt_recovery.c:200:mdt_req_from_lrd()) @@@ restoring transno req@ffff880094736880 x1796617149659904/t38654705666(0) o35->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:152/0 lens 392/456 e 0 to 0 dl 1713388092 ref 1 fl Interpret:/2/0 rc 0/0 job:'openfile.0' [ 467.875218] Lustre: lustre-OST0000: deleting orphan objects from 0x0:227 to 0x0:321 [ 467.875867] Lustre: lustre-OST0001: deleting orphan objects from 0x0:195 to 0x0:289 [ 469.582903] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 469.944429] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 472.880160] Lustre: DEBUG MARKER: == replay-single test 3a: replay failed open(O_DIRECTORY) ========================================================== 17:08:00 (1713388080) [ 473.977227] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 493.134231] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 501.874327] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 501.877603] Lustre: Skipped 2 previous similar messages [ 501.895215] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 501.898456] Lustre: Skipped 2 previous similar messages [ 501.913799] Lustre: lustre-OST0000: deleting orphan objects from 0x0:227 to 0x0:353 [ 501.913820] Lustre: lustre-OST0001: deleting orphan objects from 0x0:195 to 0x0:321 [ 503.288844] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 503.716067] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 506.530598] Lustre: DEBUG MARKER: == replay-single test 3b: replay failed open -ENOMEM ===== 17:08:34 (1713388114) [ 507.244188] LustreError: 8679:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 507.247060] LustreError: 8679:0:(osd_handler.c:694:osd_ro()) Skipped 2 previous similar messages [ 507.468171] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 507.707770] Lustre: *** cfs_fail_loc=114, val=0*** [ 508.398455] Lustre: Failing over lustre-MDT0000 [ 508.399491] Lustre: Skipped 2 previous similar messages [ 508.425463] LustreError: 9001:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 508.427467] LustreError: 9001:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 13 previous similar messages [ 508.495630] Lustre: server umount lustre-MDT0000 complete [ 508.496970] Lustre: Skipped 2 previous similar messages [ 519.277981] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 519.283285] Lustre: Skipped 4 previous similar messages [ 519.285931] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 519.288804] LustreError: Skipped 2 previous similar messages [ 525.285941] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a5521a to 0x975df49111a55673 [ 525.291011] Lustre: Skipped 2 previous similar messages [ 525.443321] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 525.446490] Lustre: Skipped 3 previous similar messages [ 526.318345] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 535.923488] Lustre: lustre-OST0000: deleting orphan objects from 0x0:227 to 0x0:385 [ 535.924111] Lustre: lustre-OST0001: deleting orphan objects from 0x0:195 to 0x0:353 [ 536.427359] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo) [ 536.431331] Lustre: Skipped 12 previous similar messages [ 537.338646] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 537.699293] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 540.225518] Lustre: DEBUG MARKER: == replay-single test 3c: replay failed open -ENOMEM ===== 17:09:07 (1713388147) [ 541.168474] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 541.423857] Lustre: *** cfs_fail_loc=128, val=0*** [ 553.437962] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713388154/real 1713388154] req@ffff880094733dc0 x1796617153900800/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713388161 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:1.0' [ 553.448487] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) 
Skipped 23 previous similar messages [ 560.534625] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 569.943860] Lustre: lustre-OST0001: deleting orphan objects from 0x0:195 to 0x0:385 [ 569.946731] Lustre: lustre-OST0000: deleting orphan objects from 0x0:227 to 0x0:417 [ 571.202726] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 571.548004] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 574.313333] Lustre: DEBUG MARKER: == replay-single test 4a: |x| 10 open(O_CREAT)s ========== 17:09:42 (1713388182) [ 575.257775] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 588.725798] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 593.031033] Lustre: lustre-OST0000: deleting orphan objects from 0x0:423 to 0x0:449 [ 593.031072] Lustre: lustre-OST0001: deleting orphan objects from 0x0:391 to 0x0:417 [ 594.472732] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 594.868080] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 597.973511] Lustre: DEBUG MARKER: == replay-single test 4b: |x| rm 10 files ================ 17:10:05 (1713388205) [ 599.020637] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 617.058667] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 617.976625] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 627.037665] Lustre: lustre-OST0001: deleting orphan objects from 0x0:423 to 0x0:449 [ 627.039586] Lustre: lustre-OST0000: deleting orphan objects from 0x0:455 to 0x0:481 [ 628.307328] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 628.715425] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 631.314137] Lustre: DEBUG MARKER: == replay-single test 5: |x| 220 open(O_CREAT) =========== 17:10:39 (1713388239) [ 632.227983] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 651.195214] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 652.036214] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 661.269125] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 661.272964] Lustre: Skipped 4 previous similar messages [ 661.941535] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 661.945234] Lustre: Skipped 4 previous similar messages [ 661.964800] Lustre: lustre-OST0000: deleting orphan objects from 0x0:592 to 0x0:609 [ 661.964814] Lustre: lustre-OST0001: deleting orphan objects from 0x0:560 to 0x0:577 [ 663.485168] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 663.925763] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 671.490025] Lustre: DEBUG MARKER: == replay-single test 6a: mkdir + contained create ======= 17:11:19 (1713388279) [ 672.200285] LustreError: 22325:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 672.203301] LustreError: 22325:0:(osd_handler.c:694:osd_ro()) Skipped 4 previous similar messages [ 672.449675] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 672.972230] Lustre: Failing over lustre-MDT0000 [ 672.974218] Lustre: Skipped 4 previous similar messages [ 673.012845] LustreError: 22528:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 673.014900] LustreError: 22528:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 19 previous similar messages [ 673.081807] Lustre: server umount lustre-MDT0000 complete [ 673.083213] Lustre: Skipped 4 previous similar messages [ 683.214048] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 683.217866] Lustre: Skipped 9 previous similar messages [ 684.218990] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 684.222511] LustreError: Skipped 4 previous similar messages [ 690.224773] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a592ef to 0x975df49111a5f6e7 [ 690.228962] Lustre: Skipped 4 previous similar messages [ 690.359136] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 691.225988] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 700.336184] Lustre: lustre-OST0000: deleting orphan objects from 0x0:592 to 0x0:641 [ 700.338036] Lustre: lustre-OST0001: deleting orphan objects from 0x0:560 to 0x0:609 [ 701.796394] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 702.333025] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 707.073138] Lustre: DEBUG MARKER: == replay-single test 6b: |X| rmdir ====================== 17:11:54 (1713388314) [ 708.104456] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 723.592832] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 724.871756] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 734.366690] Lustre: lustre-OST0000: deleting orphan objects from 0x0:592 to 0x0:673 [ 734.366695] Lustre: lustre-OST0001: deleting orphan objects from 0x0:560 to 0x0:641 [ 736.215983] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 736.568731] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 739.239777] Lustre: DEBUG MARKER: == 
replay-single test 7: mkdir |X| contained create ====== 17:12:26 (1713388346) [ 740.219714] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 757.758049] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 758.925759] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 768.405635] Lustre: lustre-OST0001: deleting orphan objects from 0x0:560 to 0x0:673 [ 768.405649] Lustre: lustre-OST0000: deleting orphan objects from 0x0:592 to 0x0:705 [ 769.923954] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 770.360663] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 773.259475] Lustre: DEBUG MARKER: == replay-single test 8: creat open |X| close ============ 17:13:01 (1713388381) [ 774.565322] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 791.935779] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 791.937756] Lustre: Skipped 7 previous similar messages [ 791.954079] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 792.864241] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 802.417542] Lustre: lustre-OST0000: deleting orphan objects from 0x0:592 to 0x0:737 [ 802.417548] Lustre: lustre-OST0001: deleting orphan objects from 0x0:560 to 0x0:705 [ 802.935122] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo) [ 802.936689] Lustre: Skipped 22 previous similar messages [ 803.708840] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 804.116419] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 806.731922] Lustre: DEBUG MARKER: == replay-single test 9: |X| create (same inum/gen) ====== 17:13:34 (1713388414) [ 807.673976] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 819.950057] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713388420/real 1713388420] req@ffff880099da1300 x1796617153995328/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713388427 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:1.0' [ 819.969077] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 40 previous similar messages [ 826.116017] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 827.373119] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 836.427463] Lustre: lustre-OST0001: deleting orphan objects from 0x0:560 to 0x0:737 [ 836.427465] Lustre: lustre-OST0000: deleting orphan objects from 0x0:592 to 0x0:769 [ 837.956776] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 838.307043] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 840.923974] Lustre: DEBUG MARKER: == replay-single test 10: create |X| rename unlink ======= 17:14:08 (1713388448) [ 842.112024] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 860.261529] Lustre: lustre-MDT0000: in recovery but waiting for 
the first client to connect [ 861.479132] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 870.484312] Lustre: lustre-OST0000: deleting orphan objects from 0x0:592 to 0x0:801 [ 870.487335] Lustre: lustre-OST0001: deleting orphan objects from 0x0:560 to 0x0:769 [ 872.030480] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 872.428712] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 875.802690] Lustre: DEBUG MARKER: == replay-single test 11: create open write rename |X| create-old-name read ========================================================== 17:14:43 (1713388483) [ 877.179228] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 893.537666] Lustre: lustre-OST0001: deleting orphan objects from 0x0:771 to 0x0:801 [ 893.539408] Lustre: lustre-OST0000: deleting orphan objects from 0x0:803 to 0x0:833 [ 894.577837] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 898.179631] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 898.713641] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 901.972526] Lustre: DEBUG MARKER: == replay-single test 12: open, unlink |X| close ========= 17:15:09 (1713388509) [ 903.264488] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 917.271793] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 921.519818] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 921.524683] Lustre: Skipped 7 previous similar messages [ 921.555206] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 921.560705] Lustre: Skipped 7 previous similar messages [ 921.585447] Lustre: lustre-OST0001: deleting orphan objects from 0x0:771 to 0x0:833 [ 921.586347] Lustre: lustre-OST0000: deleting orphan objects from 0x0:803 to 0x0:865 [ 923.409688] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 923.972467] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 927.447695] Lustre: DEBUG MARKER: == replay-single test 13: open chmod 0 |x| write close === 17:15:35 (1713388535) [ 928.399101] LustreError: 10814:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 928.402025] LustreError: 10814:0:(osd_handler.c:694:osd_ro()) Skipped 7 previous similar messages [ 928.621097] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 929.199374] Lustre: Failing over lustre-MDT0000 [ 929.201599] Lustre: Skipped 7 previous similar messages [ 929.250158] LustreError: 11050:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 929.253639] LustreError: 11050:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 39 previous similar messages [ 929.346485] Lustre: server umount lustre-MDT0000 complete [ 929.347626] Lustre: Skipped 7 previous similar messages [ 943.422605] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 943.425488] Lustre: Skipped 2 previous similar messages [ 944.600967] Lustre: lustre-OST0001: deleting orphan objects from 0x0:835 to 0x0:865 [ 944.600968] Lustre: lustre-OST0000: deleting orphan objects from 0x0:803 to 0x0:897 [ 944.685107] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 948.585828] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 949.156346] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 952.500545] Lustre: DEBUG MARKER: == replay-single test 14: open(O_CREAT), unlink |X| close ========================================================== 17:16:00 (1713388560) [ 953.875779] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 966.398064] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 966.405137] Lustre: Skipped 15 previous similar messages [ 966.406009] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 966.406012] LustreError: Skipped 8 previous similar messages [ 972.419757] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a61d13 to 0x975df49111a621e3 [ 972.426397] Lustre: Skipped 8 previous similar messages [ 973.986032] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 974.662304] Lustre: lustre-OST0000: deleting orphan objects from 0x0:899 to 0x0:929 [ 974.662383] Lustre: lustre-OST0001: deleting orphan objects from 0x0:835 to 0x0:897 [ 977.564615] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 977.910998] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 981.201790] 
Lustre: DEBUG MARKER: == replay-single test 15: open(O_CREAT), unlink |X| touch new, close ========================================================== 17:16:28 (1713388588) [ 982.613991] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 997.845151] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1000.687096] Lustre: lustre-OST0000: deleting orphan objects from 0x0:931 to 0x0:961 [ 1000.687195] Lustre: lustre-OST0001: deleting orphan objects from 0x0:899 to 0x0:929 [ 1002.572509] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1003.199370] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1006.734979] Lustre: DEBUG MARKER: == replay-single test 16: |X| open(O_CREAT), unlink, touch new, unlink new ========================================================== 17:16:54 (1713388614) [ 1007.979700] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1025.233942] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1034.732553] Lustre: lustre-OST0001: deleting orphan objects from 0x0:899 to 0x0:961 [ 1034.732560] Lustre: lustre-OST0000: deleting orphan objects from 0x0:963 to 0x0:993 [ 1036.627392] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1037.225393] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1040.739472] Lustre: DEBUG MARKER: == replay-single test 17: |X| open(O_CREAT), |replay| close ========================================================== 17:17:28 (1713388648) [ 1042.192675] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1057.737231] Lustre: lustre-OST0000: deleting orphan objects from 0x0:995 to 0x0:1025 [ 1057.737233] Lustre: lustre-OST0001: deleting orphan objects from 0x0:899 to 0x0:993 [ 1058.436840] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1062.419604] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1062.834525] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1066.404228] Lustre: DEBUG MARKER: == replay-single test 18: open(O_CREAT), unlink, touch new, close, touch, unlink ========================================================== 17:17:53 (1713388673) [ 1067.674791] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1086.305790] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1086.307754] Lustre: Skipped 4 previous similar messages [ 1087.455822] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1096.779538] Lustre: lustre-OST0001: deleting orphan objects from 0x0:995 to 0x0:1025 [ 1096.779542] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1027 to 0x0:1057 [ 1098.518550] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1099.036367] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1101.956214] 
Lustre: DEBUG MARKER: == replay-single test 19: mcreate, open, write, rename === 17:18:29 (1713388709) [ 1103.059496] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1119.812082] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1027 to 0x0:1057 [ 1119.813151] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1059 to 0x0:1089 [ 1120.646112] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1124.123464] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1124.681995] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1127.933120] Lustre: DEBUG MARKER: == replay-single test 20a: |X| open(O_CREAT), unlink, replay, close (test mds_cleanup_orphans) ========================================================== 17:18:55 (1713388735) [ 1129.329433] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1149.457705] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1158.839795] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1059 to 0x0:1121 [ 1158.839797] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1059 to 0x0:1089 [ 1160.792009] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1161.381872] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1164.709729] Lustre: DEBUG MARKER: == replay-single test 20b: write, unlink, eviction, replay (test mds_cleanup_orphans) ========================================================== 17:19:32 (1713388772) [ 1165.862084] Lustre: 31472:0:(genops.c:1710:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 02946759-878f-452d-9e9a-309fb22e4b75 at adminstrative request [ 1186.941061] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1192.849007] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1059 to 0x0:1121 [ 1192.849017] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1123 to 0x0:1153 [ 1194.837476] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1195.423978] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1197.680960] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 [ 1206.022848] Lustre: DEBUG MARKER: before 6144, after 6144 [ 1208.177731] Lustre: DEBUG MARKER: == replay-single test 20c: check that client eviction does not affect file content ========================================================== 17:20:15 (1713388815) [ 1208.561681] Lustre: 2030:0:(genops.c:1710:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 02946759-878f-452d-9e9a-309fb22e4b75 at adminstrative request [ 1213.215694] Lustre: DEBUG MARKER: == replay-single test 21: |X| open(O_CREAT), unlink touch new, replay, close (test mds_cleanup_orphans) ========================================================== 17:20:20 (1713388820) [ 1214.424191] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1230.863980] Lustre: lustre-MDT0000: Not available for connect from 192.168.204.28@tcp (not set up) [ 
1232.249426] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1156 to 0x0:1185 [ 1232.249723] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1123 to 0x0:1153 [ 1232.274780] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1235.957071] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1236.441841] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1239.945279] Lustre: DEBUG MARKER: == replay-single test 22: open(O_CREAT), |X| unlink, replay, close (test mds_cleanup_orphans) ========================================================== 17:20:47 (1713388847) [ 1241.390225] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1261.403469] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1269.944048] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1187 to 0x0:1217 [ 1269.944079] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1155 to 0x0:1185 [ 1271.953376] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1272.550357] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1275.944860] Lustre: DEBUG MARKER: == replay-single test 23: open(O_CREAT), |X| unlink touch new, replay, close (test mds_cleanup_orphans) ========================================================== 17:21:23 (1713388883) [ 1277.277482] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1294.686148] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1303.960766] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1187 to 0x0:1217 [ 1303.960774] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1219 to 0x0:1249 [ 1305.935361] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1306.521483] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1309.927926] Lustre: DEBUG MARKER: == replay-single test 24: open(O_CREAT), replay, unlink, close (test mds_cleanup_orphans) ========================================================== 17:21:57 (1713388917) [ 1311.315846] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1327.319631] Lustre: MGC192.168.204.128@tcp: Connection restored to (at 0@lo) [ 1327.322870] Lustre: Skipped 43 previous similar messages [ 1327.507119] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1327.510942] Lustre: Skipped 16 previous similar messages [ 1328.847110] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1337.969641] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1251 to 0x0:1281 [ 1337.969646] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1219 to 0x0:1249 [ 1339.761400] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1340.261008] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1343.457912] Lustre: DEBUG MARKER: 
== replay-single test 25: open(O_CREAT), unlink, replay, close (test mds_cleanup_orphans) ========================================================== 17:22:31 (1713388951) [ 1344.819777] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1354.509998] Lustre: 2853:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713388955/real 1713388955] req@ffff88009aaeda40 x1796617154100096/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713388962 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:1.0' [ 1354.527451] Lustre: 2853:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 78 previous similar messages [ 1360.753412] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1360.756424] Lustre: Skipped 7 previous similar messages [ 1361.005408] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1251 to 0x0:1313 [ 1361.005474] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1251 to 0x0:1281 [ 1362.093500] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1366.063128] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1366.632338] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1370.023087] Lustre: DEBUG MARKER: == replay-single test 26: |X| open(O_CREAT), unlink two, close one, replay, close one (test mds_cleanup_orphans) ========================================================== 17:22:57 (1713388977) [ 1371.255162] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1391.227613] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1400.067620] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1315 to 0x0:1345 [ 1400.067662] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1283 to 0x0:1313 [ 1401.983329] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1402.536038] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1405.985075] Lustre: DEBUG MARKER: == replay-single test 27: |X| open(O_CREAT), unlink two, replay, close two (test mds_cleanup_orphans) ========================================================== 17:23:33 (1713389013) [ 1407.357283] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1426.372683] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1434.040611] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 1434.045103] Lustre: Skipped 15 previous similar messages [ 1434.081319] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 1434.088008] Lustre: Skipped 15 previous similar messages [ 1434.112061] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1347 to 0x0:1377 [ 1434.112063] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1315 to 0x0:1345 [ 1435.967258] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1436.537419] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1440.042738] Lustre: DEBUG MARKER: == replay-single test 28: open(O_CREAT), |X| unlink two, close one, replay, close one (test mds_cleanup_orphans) ========================================================== 17:24:07 (1713389047) [ 1441.144144] LustreError: 20691:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 1441.148813] LustreError: 20691:0:(osd_handler.c:694:osd_ro()) Skipped 14 previous similar messages [ 1441.474399] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1442.223436] Lustre: Failing over lustre-MDT0000 [ 1442.225436] Lustre: Skipped 15 previous similar messages [ 1442.282970] LustreError: 20913:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 1442.285668] LustreError: 20913:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 92 previous similar messages [ 1442.353195] Lustre: server umount lustre-MDT0000 complete [ 1442.354320] Lustre: Skipped 15 previous similar messages [ 1460.226333] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1468.129060] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1379 to 0x0:1409 [ 1468.129063] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1347 to 0x0:1377 [ 1469.817455] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1470.157728] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1472.905178] Lustre: DEBUG MARKER: == replay-single test 29: open(O_CREAT), |X| unlink two, replay, close two (test mds_cleanup_orphans) ========================================================== 17:24:40 (1713389080) [ 1474.298593] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1485.262202] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1485.269372] Lustre: Skipped 26 previous similar messages [ 1485.270039] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1485.270041] LustreError: Skipped 15 previous similar messages [ 1491.281823] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a66bef to 0x975df49111a670e2 [ 1491.287107] Lustre: Skipped 15 previous similar messages [ 1492.793001] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1502.197507] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1411 to 0x0:1441 [ 1504.101078] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1504.664051] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1508.115997] Lustre: DEBUG MARKER: == 
replay-single test 30: open(O_CREAT) two, unlink two, replay, close two (test mds_cleanup_orphans) ========================================================== 17:25:15 (1713389115) [ 1509.534203] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1527.020512] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1536.187295] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1444 to 0x0:1473 [ 1536.187304] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1379 to 0x0:1409 [ 1538.042672] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1538.522477] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1542.039310] Lustre: DEBUG MARKER: == replay-single test 31: open(O_CREAT) two, unlink one, |X| unlink one, close two (test mds_cleanup_orphans) ========================================================== 17:25:49 (1713389149) [ 1543.458022] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1560.807044] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1570.236126] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1475 to 0x0:1505 [ 1570.236140] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1411 to 0x0:1441 [ 1571.968620] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1572.294515] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1575.798294] Lustre: DEBUG MARKER: == replay-single test 32: close() notices client eviction; close() after client eviction ========================================================== 17:26:23 (1713389183) [ 1576.184406] Lustre: 30909:0:(genops.c:1710:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 02946759-878f-452d-9e9a-309fb22e4b75 at adminstrative request [ 1580.810482] Lustre: DEBUG MARKER: == replay-single test 33a: fid seq shouldn't be reused after abort recovery ========================================================== 17:26:28 (1713389188) [ 1581.848484] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1585.564117] LustreError: 32234:0:(mdt_handler.c:7428:mdt_iocontrol()) lustre-MDT0000: Aborting recovery for device [ 1585.567856] LustreError: 32234:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1585.570170] Lustre: 32407:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1585.572501] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 1585.592993] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1447 to 0x0:1473 [ 1585.595020] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1512 to 0x0:1537 [ 1586.821640] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1592.580261] Lustre: DEBUG MARKER: == replay-single test 33b: test fid seq allocation ======= 17:26:40 (1713389200) [ 1593.938697] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1597.614208] Lustre: *** cfs_fail_loc=1311, val=0*** [ 1597.622217] LustreError: 1993:0:(mdt_handler.c:7428:mdt_iocontrol()) lustre-MDT0000: Aborting recovery for device [ 1597.624418] LustreError: 
1993:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1597.626320] Lustre: 2220:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1597.628782] Lustre: 2220:0:(ldlm_lib.c:2289:target_recovery_overseer()) Skipped 2 previous similar messages [ 1597.630905] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 1597.651993] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1548 to 0x0:1569 [ 1597.654127] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1484 to 0x0:1505 [ 1598.895977] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1602.282254] Lustre: *** cfs_fail_loc=1311, val=0*** [ 1604.578228] Lustre: DEBUG MARKER: == replay-single test 34: abort recovery before client does replay (test mds_cleanup_orphans) ========================================================== 17:26:52 (1713389212) [ 1605.992028] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1609.779406] LustreError: 4275:0:(mdt_handler.c:7428:mdt_iocontrol()) lustre-MDT0000: Aborting recovery for device [ 1609.782429] LustreError: 4275:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1609.785313] Lustre: 4523:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1609.788229] Lustre: 4523:0:(ldlm_lib.c:2289:target_recovery_overseer()) Skipped 2 previous similar messages [ 1609.790628] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 1609.816636] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1512 to 0x0:1537 [ 1609.816649] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1575 to 0x0:1601 [ 1611.144624] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1616.680191] Lustre: DEBUG MARKER: == replay-single test 35: test recovery from llog for unlink op ========================================================== 17:27:04 (1713389224) [ 1617.040242] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 1617.041904] LustreError: 4286:0:(ldlm_lib.c:3225:target_send_reply_msg()) @@@ dropping reply req@ffff88012bf1cc00 x1796617150108416/t201863462916(0) o36->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:535/0 lens 504/456 e 0 to 0 dl 1713389230 ref 1 fl Interpret:/0/0 rc 0/0 job:'rm.0' [ 1622.609337] LustreError: 6286:0:(mdt_handler.c:7428:mdt_iocontrol()) lustre-MDT0000: Aborting recovery for device [ 1622.613119] LustreError: 6286:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1622.615571] Lustre: 6431:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1622.617898] Lustre: 6431:0:(ldlm_lib.c:2289:target_recovery_overseer()) Skipped 2 previous similar messages [ 1622.619616] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 1622.639868] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1575 to 0x0:1633 [ 1622.639877] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1539 to 0x0:1569 [ 1623.868288] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1629.120038] Lustre: DEBUG MARKER: == replay-single test 36: don't resend cancel ============ 17:27:16 (1713389236) [ 1630.485144] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 
1647.127349] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1575 to 0x0:1665 [ 1647.127880] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1571 to 0x0:1601 [ 1647.140449] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1650.090492] Lustre: DEBUG MARKER: == replay-single test 37: abort recovery before client does replay (test mds_cleanup_orphans for directories) ========================================================== 17:27:37 (1713389257) [ 1651.226430] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1655.247691] LustreError: 10761:0:(mdt_handler.c:7428:mdt_iocontrol()) lustre-MDT0000: Aborting recovery for device [ 1655.249821] LustreError: 10761:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1655.251911] Lustre: 10900:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1655.257031] Lustre: 10900:0:(ldlm_lib.c:2289:target_recovery_overseer()) Skipped 2 previous similar messages [ 1655.261281] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 1655.296257] Lustre: lustre-OST0000: deleting orphan objects from 0x0:1575 to 0x0:1697 [ 1655.296259] Lustre: lustre-OST0001: deleting orphan objects from 0x0:1571 to 0x0:1633 [ 1656.625182] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1662.357190] Lustre: DEBUG MARKER: == replay-single test 38: test recovery from unlink llog (test llog_gen_rec) ========================================================== 17:27:49 (1713389269) [ 1670.088528] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1690.684818] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1696.172498] Lustre: lustre-OST0001: deleting orphan objects from 0x0:2034 to 0x0:2049 [ 1696.172501] Lustre: lustre-OST0000: deleting orphan objects from 0x0:2098 to 0x0:2113 [ 1698.226517] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1698.830917] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1707.149969] Lustre: DEBUG MARKER: == replay-single test 39: test recovery from unlink llog (test llog_gen_rec) ========================================================== 17:28:34 (1713389314) [ 1712.954174] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1734.961061] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1741.172544] Lustre: lustre-OST0001: deleting orphan objects from 0x0:2450 to 0x0:2465 [ 1741.172547] Lustre: lustre-OST0000: deleting orphan objects from 0x0:2514 to 0x0:2529 [ 1743.072828] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1743.636605] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1751.731140] Lustre: DEBUG MARKER: == replay-single test 40: cause recovery in ptlrpc, ensure IO continues ========================================================== 17:29:19 (1713389359) [ 1752.108823] Lustre: DEBUG MARKER: SKIP: replay-single test_40 layout_lock needs MDS connection for IO [ 1752.650691] Lustre: 
DEBUG MARKER: == replay-single test 41: read from a valid osc while other oscs are invalid ========================================================== 17:29:20 (1713389360) [ 1753.367617] Lustre: setting import lustre-OST0001_UUID INACTIVE by administrator request [ 1753.710241] Lustre: lustre-OST0001: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting [ 1753.714251] LustreError: 167-0: lustre-OST0001-osc-MDT0000: This client was evicted by lustre-OST0001; in progress operations using this service will fail. [ 1753.721234] Lustre: lustre-OST0001: deleting orphan objects from 0x0:2450 to 0x0:2497 [ 1755.725449] Lustre: DEBUG MARKER: == replay-single test 42: recovery after ost failure ===== 17:29:23 (1713389363) [ 1761.410736] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 1764.678829] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1765.218321] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 192.168.204.28@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1769.686567] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1774.694641] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1774.705924] LustreError: Skipped 1 previous similar message [ 1778.099293] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1778.792726] Lustre: lustre-OST0000: deleting orphan objects from 0x0:2931 to 0x0:2977 [ 1823.280861] Lustre: DEBUG MARKER: == replay-single test 43: mds osc import failure during recovery; don't LBUG ========================================================== 17:30:30 (1713389430) [ 1824.731849] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1841.544840] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1843.374445] Lustre: *** cfs_fail_loc=204, val=2147483648*** [ 1843.374477] Lustre: lustre-OST0000: deleting orphan objects from 0x0:2931 to 0x0:3009 [ 1845.562362] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1846.131059] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1850.374017] LustreError: 24783:0:(osp_precreate.c:967:osp_precreate_cleanup_orphans()) lustre-OST0001-osc-MDT0000: cannot cleanup orphans: rc = -11 [ 1850.374335] Lustre: lustre-OST0001: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting [ 1851.379385] Lustre: lustre-OST0001: deleting orphan objects from 0x0:2898 to 0x0:2913 [ 1859.683111] Lustre: DEBUG MARKER: == replay-single test 44a: race in target handle connect ========================================================== 17:31:07 (1713389467) [ 1861.365132] LustreError: 24745:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 701 sleeping [ 1866.367970] LustreError: 24745:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 701 awake: rc=0 [ 1866.371073] Lustre: lustre-MDT0000: Client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp) reconnecting [ 1866.909670] 
LustreError: 24745:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 701 sleeping [ 1871.913011] LustreError: 24745:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 701 awake: rc=0 [ 1871.917193] Lustre: lustre-MDT0000: Client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp) reconnecting [ 1872.610484] LustreError: 24745:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 701 sleeping [ 1877.614093] LustreError: 24745:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 701 awake: rc=0 [ 1877.618050] Lustre: lustre-MDT0000: Client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp) reconnecting [ 1878.294646] LustreError: 24745:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 701 sleeping [ 1883.298059] LustreError: 24745:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 701 awake: rc=0 [ 1884.000050] LustreError: 24745:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 701 sleeping [ 1889.005006] LustreError: 24745:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 701 awake: rc=0 [ 1889.008944] Lustre: lustre-MDT0000: Client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp) reconnecting [ 1889.013473] Lustre: Skipped 1 previous similar message [ 1895.371593] LustreError: 24745:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 701 sleeping [ 1895.374925] LustreError: 24745:0:(libcfs_fail.h:169:cfs_race()) Skipped 1 previous similar message [ 1900.377990] LustreError: 24745:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 701 awake: rc=0 [ 1900.383092] LustreError: 24745:0:(libcfs_fail.h:178:cfs_race()) Skipped 1 previous similar message [ 1906.058204] Lustre: lustre-MDT0000: Client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp) reconnecting [ 1906.065010] Lustre: Skipped 2 previous similar messages [ 1912.421185] LustreError: 24745:0:(libcfs_fail.h:169:cfs_race()) cfs_race id 701 sleeping [ 1912.424788] LustreError: 24745:0:(libcfs_fail.h:169:cfs_race()) Skipped 2 previous similar messages [ 1917.427994] LustreError: 24745:0:(libcfs_fail.h:178:cfs_race()) cfs_fail_race id 701 awake: rc=0 [ 1917.431844] LustreError: 24745:0:(libcfs_fail.h:178:cfs_race()) Skipped 2 previous similar messages [ 1920.210012] Lustre: DEBUG MARKER: == replay-single test 44b: race in target handle connect ========================================================== 17:32:07 (1713389527) [ 1920.861954] LustreError: 24745:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1926.861662] Lustre: lustre-MDT0000: Export ffff88009f3ad000 already connecting from 192.168.204.28@tcp [ 1931.871186] Lustre: lustre-MDT0000: Export ffff88009f3ad000 already connecting from 192.168.204.28@tcp [ 1936.879671] Lustre: lustre-MDT0000: Export ffff88009f3ad000 already connecting from 192.168.204.28@tcp [ 1941.887707] Lustre: lustre-MDT0000: Export ffff88009f3ad000 already connecting from 192.168.204.28@tcp [ 1946.895300] Lustre: lustre-MDT0000: Export ffff88009f3ad000 already connecting from 192.168.204.28@tcp [ 1956.911327] Lustre: lustre-MDT0000: Export ffff88009f3ad000 already connecting from 192.168.204.28@tcp [ 1956.915612] Lustre: Skipped 1 previous similar message [ 1960.867033] LustreError: 24745:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 704 awake [ 1960.871394] Lustre: 24745:0:(service.c:2333:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/20s); client may timeout req@ffff8800a6d23900 x1796617151114560/t0(0) o38->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:0/0 lens 520/416 e 0 to 0 dl 1713389548 ref 1 fl Complete:H/0/0 
rc 0/0 job:'lctl.0' [ 1961.919318] Lustre: lustre-MDT0000: Client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp) reconnecting [ 1961.924078] Lustre: Skipped 3 previous similar messages [ 1962.530115] LustreError: 24745:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1973.233991] LustreError: 24745:0:(fail.c:144:__cfs_fail_timeout_set()) cfs_fail_timeout interrupted [ 1974.779983] Lustre: DEBUG MARKER: == replay-single test 44c: race in target handle connect ========================================================== 17:33:02 (1713389582) [ 1976.212684] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1979.872442] Lustre: MGC192.168.204.128@tcp: Connection restored to (at 0@lo) [ 1979.875645] Lustre: Skipped 51 previous similar messages [ 1980.011415] Lustre: *** cfs_fail_loc=712, val=0*** [ 1980.014477] LustreError: 19911:0:(service.c:1226:ptlrpc_check_req()) @@@ Invalid replay without recovery req@ffff88012bf1d0c0 x1796617154438016/t0(0) o400->lustre-MDT0000-mdtlov_UUID@0@lo:0/0 lens 224/0 e 0 to 0 dl 0 ref 1 fl New:/c0/ffffffff rc 0/-1 job:'ptlrpcd_rcv.0' [ 1980.026603] LustreError: 167-0: lustre-OST0000-osc-MDT0000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. [ 1980.077522] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1980.081190] Lustre: Skipped 17 previous similar messages [ 1980.106836] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1980.106991] LustreError: 30991:0:(mdt_handler.c:7428:mdt_iocontrol()) lustre-MDT0000: Aborting recovery for device [ 1980.106995] LustreError: 30991:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1980.120063] Lustre: Skipped 26 previous similar messages [ 1980.122518] Lustre: 31123:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1980.127563] Lustre: 31123:0:(ldlm_lib.c:2289:target_recovery_overseer()) Skipped 2 previous similar messages [ 1980.132431] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 1980.171981] Lustre: lustre-OST0000: deleting orphan objects from 0x0:2931 to 0x0:3041 [ 1980.172036] Lustre: lustre-OST0001: deleting orphan objects from 0x0:2898 to 0x0:2945 [ 1981.475915] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 1997.054027] Lustre: 2850:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713389597/real 1713389597] req@ffff8801334c9300 x1796617154442368/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713389604 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:1.0' [ 1997.067223] Lustre: 2850:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 72 previous similar messages [ 2004.486550] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2012.629588] Lustre: lustre-OST0000: deleting orphan objects from 0x0:2931 to 0x0:3073 [ 2012.629606] Lustre: lustre-OST0001: deleting orphan objects from 0x0:2898 to 0x0:2977 [ 2014.346234] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2014.691558] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2018.087059] Lustre: DEBUG 
MARKER: == replay-single test 45: Handle failed close ============ 17:33:45 (1713389625) [ 2021.172191] Lustre: DEBUG MARKER: == replay-single test 46: Don't leak file handle after open resend (3325) ========================================================== 17:33:48 (1713389628) [ 2021.394100] Lustre: *** cfs_fail_loc=122, val=2147483648*** [ 2021.395658] LustreError: 32686:0:(ldlm_lib.c:3225:target_send_reply_msg()) @@@ dropping reply req@ffff8800a1fd0980 x1796617151137280/t0(0) o700->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:185/0 lens 264/248 e 0 to 0 dl 1713389635 ref 1 fl Interpret:/0/0 rc 0/0 job:'touch.0' [ 2028.395047] Lustre: lustre-MDT0000: Client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp) reconnecting [ 2028.400322] Lustre: Skipped 4 previous similar messages [ 2047.557204] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 2047.561965] Lustre: Skipped 10 previous similar messages [ 2047.586622] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. [ 2047.591225] Lustre: Skipped 10 previous similar messages [ 2047.611801] Lustre: lustre-OST0001: deleting orphan objects from 0x0:2979 to 0x0:3009 [ 2047.611805] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3075 to 0x0:3105 [ 2047.823268] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2051.325349] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2051.877368] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2055.447371] Lustre: DEBUG MARKER: == replay-single test 47: MDS->OSC failure during precreate cleanup (2824) ========================================================== 17:34:23 (1713389663) [ 2056.299418] Lustre: Failing over lustre-OST0000 [ 2056.301610] Lustre: Skipped 16 previous similar messages [ 2056.308719] LustreError: 4449:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 2056.312925] LustreError: 4449:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 72 previous similar messages [ 2056.328587] Lustre: server umount lustre-OST0000 complete [ 2056.331157] Lustre: Skipped 16 previous similar messages [ 2056.433196] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 192.168.204.28@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2056.438367] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2056.445895] LustreError: Skipped 2 previous similar messages [ 2066.448027] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 192.168.204.28@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 2066.456488] LustreError: Skipped 3 previous similar messages [ 2070.267115] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2070.878674] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3116 to 0x0:3137 [ 2074.175658] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2074.745911] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2139.780591] Lustre: DEBUG MARKER: == replay-single test 48: MDS->OSC failure during precreate cleanup (2824) ========================================================== 17:35:47 (1713389747) [ 2140.900880] LustreError: 6806:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 2140.905789] LustreError: 6806:0:(osd_handler.c:694:osd_ro()) Skipped 13 previous similar messages [ 2141.230395] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2151.102182] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2151.109242] Lustre: Skipped 30 previous similar messages [ 2151.110031] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2151.110033] LustreError: Skipped 14 previous similar messages [ 2157.121972] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111a9c997 to 0x975df49111a9d870 [ 2157.127999] Lustre: Skipped 14 previous similar messages [ 2158.705864] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2159.765535] Lustre: *** cfs_fail_loc=216, val=0*** [ 2159.765543] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3030 to 0x0:3073 [ 2159.773436] LustreError: 7826:0:(osp_precreate.c:967:osp_precreate_cleanup_orphans()) lustre-OST0000-osc-MDT0000: cannot cleanup orphans: rc = -30 [ 2160.779413] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3148 to 0x0:3169 [ 2223.448083] Lustre: DEBUG MARKER: == replay-single test 50: Double OSC recovery, don't LASSERT (3812) ========================================================== 17:37:11 (1713389831) [ 2224.120506] Lustre: lustre-OST0000: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting [ 2224.125473] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3180 to 0x0:3201 [ 2224.462288] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3180 to 0x0:3233 [ 2231.597144] Lustre: DEBUG MARKER: == replay-single test 52: time out lock replay (3764) ==== 17:37:19 (1713389839) [ 2251.876538] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2258.731985] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 2258.734501] LustreError: 10922:0:(ldlm_lib.c:3225:target_send_reply_msg()) @@@ dropping reply req@ffff88009a90c280 x1796617151185024/t0(0) o101->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:422/0 lens 328/344 e 0 to 0 dl 1713389872 ref 1 fl Complete:/40/0 rc 0/0 job:'ldlm_lock_repla.0' [ 2265.731532] Lustre: lustre-MDT0000: Client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp) reconnected, waiting for 1 clients in recovery for 0:58 [ 2265.775937] Lustre: 
lustre-OST0000: deleting orphan objects from 0x0:3235 to 0x0:3265 [ 2265.775940] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3084 to 0x0:3105 [ 2267.682626] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2268.209320] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2271.834706] Lustre: DEBUG MARKER: == replay-single test 53a: |X| close request while two MDC requests in flight ========================================================== 17:37:59 (1713389879) [ 2273.246825] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 2274.841077] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2291.146490] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2297.296023] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3107 to 0x0:3137 [ 2297.296037] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3235 to 0x0:3297 [ 2299.225940] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2299.787776] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2303.273765] Lustre: DEBUG MARKER: == replay-single test 53b: |X| open request while two MDC requests in flight ========================================================== 17:38:30 (1713389910) [ 2303.672142] Lustre: *** cfs_fail_loc=107, val=2147483648*** [ 2306.286221] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2325.291291] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2327.333536] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3107 to 0x0:3169 [ 2327.339525] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3299 to 0x0:3329 [ 2329.221552] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2329.799260] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2333.281329] Lustre: DEBUG MARKER: == replay-single test 53c: |X| open request and close request while two MDC requests in flight ========================================================== 17:39:00 (1713389940) [ 2333.679991] Lustre: *** cfs_fail_loc=107, val=2147483648*** [ 2336.056401] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2354.537321] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2357.735775] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3299 to 0x0:3361 [ 2357.735796] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3171 to 0x0:3201 [ 2362.123648] Lustre: DEBUG MARKER: == replay-single test 53d: close reply while two MDC requests in flight ========================================================== 17:39:29 (1713389969) [ 2363.506440] Lustre: *** cfs_fail_loc=13b, val=315*** [ 2363.508742] Lustre: *** cfs_fail_loc=13b, val=2147483648*** [ 2363.511555] LustreError: 19202:0:(ldlm_lib.c:3225:target_send_reply_msg()) @@@ dropping reply req@ffff880131d66880 x1796617151208000/t261993005073(0) o35->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:527/0 lens 392/456 e 0 to 0 dl 1713389977 ref 1 fl Interpret:/0/0 rc 0/0 job:'multiop.0' [ 
2382.722243] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2387.533881] Lustre: 21426:0:(mdt_recovery.c:200:mdt_req_from_lrd()) @@@ restoring transno req@ffff88013372df00 x1796617151208000/t261993005073(0) o35->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:551/0 lens 392/456 e 0 to 0 dl 1713390001 ref 1 fl Interpret:/2/0 rc 0/0 job:'multiop.0' [ 2387.549031] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3171 to 0x0:3233 [ 2387.549173] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3363 to 0x0:3393 [ 2389.476333] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2390.038864] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2393.443975] Lustre: DEBUG MARKER: == replay-single test 53e: |X| open reply while two MDC requests in flight ========================================================== 17:40:01 (1713390001) [ 2393.834203] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 2393.836751] LustreError: 22688:0:(ldlm_lib.c:3225:target_send_reply_msg()) @@@ dropping reply req@ffff8800a6d21c80 x1796617151214464/t266287972368(0) o36->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:557/0 lens 504/448 e 0 to 0 dl 1713390007 ref 1 fl Interpret:/0/0 rc 0/0 job:'mcreate.0' [ 2396.456989] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2416.900526] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2417.877268] Lustre: 24189:0:(mdt_recovery.c:200:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a6d250c0 x1796617151214464/t266287972368(0) o36->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:581/0 lens 504/448 e 0 to 0 dl 1713390031 ref 1 fl Interpret:/2/0 rc 0/0 job:'mcreate.0' [ 2417.891347] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3395 to 0x0:3425 [ 2417.896965] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3171 to 0x0:3265 [ 2420.811692] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2421.340525] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2424.803408] Lustre: DEBUG MARKER: == replay-single test 53f: |X| open reply and close reply while two MDC requests in flight ========================================================== 17:40:32 (1713390032) [ 2425.186495] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 2425.189956] LustreError: 24189:0:(ldlm_lib.c:3225:target_send_reply_msg()) @@@ dropping reply req@ffff88013372df00 x1796617151221248/t270582939664(0) o36->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:589/0 lens 504/448 e 0 to 0 dl 1713390039 ref 1 fl Interpret:/0/0 rc 0/0 job:'mcreate.0' [ 2426.497373] Lustre: *** cfs_fail_loc=13b, val=315*** [ 2427.738118] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2446.080070] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2449.222962] Lustre: 26934:0:(mdt_recovery.c:200:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012c40af80 x1796617151221376/t270582939665(0) o35->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:613/0 lens 392/456 e 0 to 0 dl 1713390063 ref 1 fl Interpret:/2/0 rc 
0/0 job:'multiop.0' [ 2449.235168] Lustre: 26934:0:(mdt_recovery.c:200:mdt_req_from_lrd()) Skipped 1 previous similar message [ 2449.237948] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3427 to 0x0:3457 [ 2449.237955] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3171 to 0x0:3297 [ 2453.728054] Lustre: DEBUG MARKER: == replay-single test 53g: |X| drop open reply and close request while close and open are both in flight ========================================================== 17:41:01 (1713390061) [ 2454.116100] LustreError: 26931:0:(ldlm_lib.c:3225:target_send_reply_msg()) @@@ dropping reply req@ffff8801337ec740 x1796617151227328/t274877906960(0) o36->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:617/0 lens 504/448 e 0 to 0 dl 1713390067 ref 1 fl Interpret:/0/0 rc 0/0 job:'mcreate.0' [ 2454.127556] LustreError: 26931:0:(ldlm_lib.c:3225:target_send_reply_msg()) Skipped 1 previous similar message [ 2455.397996] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 2455.400652] Lustre: Skipped 1 previous similar message [ 2456.924341] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2475.253829] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2478.154079] Lustre: 29467:0:(mdt_recovery.c:200:mdt_req_from_lrd()) @@@ restoring transno req@ffff880131f95f00 x1796617151227328/t274877906960(0) o36->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:641/0 lens 504/448 e 0 to 0 dl 1713390091 ref 1 fl Interpret:/2/0 rc 0/0 job:'mcreate.0' [ 2478.174071] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3427 to 0x0:3489 [ 2478.174096] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3299 to 0x0:3329 [ 2482.365130] Lustre: DEBUG MARKER: == replay-single test 53h: open request and close reply while two MDC requests in flight ========================================================== 17:41:29 (1713390089) [ 2482.772237] Lustre: *** cfs_fail_loc=107, val=2147483648*** [ 2484.101971] Lustre: *** cfs_fail_loc=13b, val=315*** [ 2484.105636] Lustre: *** cfs_fail_loc=13b, val=2147483648*** [ 2484.110433] Lustre: Skipped 2 previous similar messages [ 2486.407132] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2504.451138] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2506.796786] Lustre: 31849:0:(mdt_recovery.c:200:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012b055580 x1796617151233600/t279172874256(0) o35->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:670/0 lens 392/456 e 0 to 0 dl 1713390120 ref 1 fl Interpret:/2/0 rc 0/0 job:'multiop.0' [ 2506.811461] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3491 to 0x0:3521 [ 2506.811489] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3299 to 0x0:3361 [ 2511.314341] Lustre: DEBUG MARKER: == replay-single test 55: let MDS_CHECK_RESENT return the original return code instead of 0 ========================================================== 17:41:58 (1713390118) [ 2511.662141] Lustre: *** cfs_fail_loc=12b, val=2147483991*** [ 2511.665446] LustreError: 31846:0:(ldlm_lib.c:3225:target_send_reply_msg()) @@@ dropping reply req@ffff8800a6ec4c00 x1796617151238400/t283467841550(0) o101->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:675/0 lens 664/600 e 0 to 0 dl 1713390125 ref 1 fl Interpret:/0/0 rc 301/0 job:'touch.0' [ 2511.676755] LustreError: 
31846:0:(ldlm_lib.c:3225:target_send_reply_msg()) Skipped 1 previous similar message [ 2518.661392] Lustre: lustre-MDT0000: Client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp) reconnecting [ 2518.664652] Lustre: Skipped 1 previous similar message [ 2518.667183] Lustre: 31847:0:(mdt_recovery.c:200:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a6ec2ac0 x1796617151238400/t283467841550(0) o101->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:682/0 lens 664/3424 e 0 to 0 dl 1713390132 ref 1 fl Interpret:/2/0 rc 0/0 job:'touch.0' [ 2520.602133] Lustre: DEBUG MARKER: == replay-single test 56: don't replay a symlink open request (3440) ========================================================== 17:42:08 (1713390128) [ 2521.955870] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2538.680478] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2547.734126] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3491 to 0x0:3553 [ 2547.734137] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3363 to 0x0:3393 [ 2549.675045] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2550.239184] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2563.469025] Lustre: DEBUG MARKER: == replay-single test 57: test recovery from llog for setattr op ========================================================== 17:42:51 (1713390171) [ 2565.162006] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2580.407355] Lustre: MGC192.168.204.128@tcp: Connection restored to (at 0@lo) [ 2580.410542] Lustre: Skipped 45 previous similar messages [ 2580.600532] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2580.604288] Lustre: Skipped 14 previous similar messages [ 2580.633107] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 2580.636827] Lustre: Skipped 16 previous similar messages [ 2580.774532] Lustre: lustre-OST0000: deleting orphan objects from 0x0:3555 to 0x0:3585 [ 2581.903582] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2585.776591] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2586.313091] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2588.588616] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 [ 2593.704317] Lustre: DEBUG MARKER: == replay-single test 58a: test recovery from llog for setattr op (test llog_gen_rec) ========================================================== 17:43:21 (1713390201) [ 2606.544691] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2618.622003] Lustre: 2851:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713390219/real 1713390219] req@ffff880130fb3dc0 x1796617154557760/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713390226 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:0.0' [ 2618.634945] Lustre: 2851:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 82 previous similar messages [ 2626.128774] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing 
set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2634.896321] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3363 to 0x0:3425 [ 2634.896322] Lustre: lustre-OST0000: deleting orphan objects from 0x0:6086 to 0x0:6113 [ 2636.394709] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2636.863195] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2652.221811] Lustre: DEBUG MARKER: == replay-single test 58b: test replay of setxattr op ==== 17:44:19 (1713390259) [ 2653.304846] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2670.319023] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 2670.320867] Lustre: Skipped 14 previous similar messages [ 2670.342124] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 2670.343976] Lustre: Skipped 14 previous similar messages [ 2670.355567] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3427 to 0x0:3457 [ 2670.355568] Lustre: lustre-OST0000: deleting orphan objects from 0x0:6086 to 0x0:6145 [ 2670.731976] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2674.347698] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2674.916295] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2678.036626] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount FULL mgc.*.mgs_server_uuid [ 2678.419524] Lustre: DEBUG MARKER: mgc.*.mgs_server_uuid in FULL state after 0 sec [ 2680.313420] Lustre: DEBUG MARKER: == replay-single test 58c: resend/reconstruct setxattr op ========================================================== 17:44:48 (1713390288) [ 2685.851613] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 2693.355249] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 2693.356361] Lustre: Skipped 1 previous similar message [ 2693.357260] LustreError: 11789:0:(ldlm_lib.c:3225:target_send_reply_msg()) @@@ dropping reply req@ffff88009fb69850 x1796617152707584/t300647710728(0) o36->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:102/0 lens 66040/440 e 0 to 0 dl 1713390307 ref 1 fl Interpret:/0/0 rc 0/0 job:'setfattr.0' [ 2700.355397] Lustre: 11791:0:(mdt_recovery.c:200:mdt_req_from_lrd()) @@@ restoring transno req@ffff880093862600 x1796617152707584/t300647710728(0) o36->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:109/0 lens 66040/440 e 0 to 0 dl 1713390314 ref 1 fl Interpret:/2/0 rc 0/0 job:'setfattr.0' [ 2702.916349] Lustre: DEBUG MARKER: SKIP: replay-single test_59 skipping ALWAYS excluded test 59 [ 2703.261828] Lustre: DEBUG MARKER: == replay-single test 60: test llog post recovery init vs llog unlink ========================================================== 17:45:11 (1713390311) [ 2705.793212] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2706.698710] Lustre: Failing over lustre-MDT0000 [ 2706.700038] Lustre: Skipped 14 previous similar messages [ 2706.748415] LustreError: 15176:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 2706.750648] LustreError: 15176:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 75 previous similar messages [ 2706.813469] Lustre: server umount lustre-MDT0000 
complete [ 2706.814734] Lustre: Skipped 14 previous similar messages [ 2725.551045] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2731.628733] Lustre: lustre-OST0000: deleting orphan objects from 0x0:6246 to 0x0:6273 [ 2731.629301] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3559 to 0x0:3585 [ 2733.249477] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2733.797783] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2737.564706] Lustre: DEBUG MARKER: == replay-single test 61a: test race llog recovery vs llog cleanup ========================================================== 17:45:45 (1713390345) [ 2741.822634] LustreError: 18098:0:(osd_handler.c:694:osd_ro()) lustre-OST0000: *** setting device osd-zfs read-only *** [ 2741.827675] LustreError: 18098:0:(osd_handler.c:694:osd_ro()) Skipped 12 previous similar messages [ 2742.132455] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 2746.417905] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 192.168.204.28@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2746.425531] LustreError: Skipped 1 previous similar message [ 2746.618843] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_destroy to node 0@lo failed: rc = -107 [ 2749.206666] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2749.215567] LustreError: Skipped 1 previous similar message [ 2754.214610] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2754.221519] LustreError: Skipped 1 previous similar message [ 2758.509365] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2758.625345] Lustre: lustre-OST0000: deleting orphan objects from 0x0:6674 to 0x0:6689 [ 2771.467210] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 192.168.204.28@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 2771.474635] LustreError: Skipped 1 previous similar message [ 2772.182469] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2772.189586] Lustre: Skipped 31 previous similar messages [ 2783.946188] Lustre: lustre-OST0000: deleting orphan objects from 0x0:6674 to 0x0:6721 [ 2783.991871] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2787.986661] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2788.569793] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2822.606606] Lustre: DEBUG MARKER: == replay-single test 61b: test race mds llog sync vs llog cleanup ========================================================== 17:47:10 (1713390430) [ 2834.710036] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2834.715660] LustreError: Skipped 14 previous similar messages [ 2840.718574] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111ae5be0 to 0x975df49111af606f [ 2840.723735] Lustre: Skipped 14 previous similar messages [ 2842.211340] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2850.608643] Lustre: lustre-OST0000: deleting orphan objects from 0x0:6674 to 0x0:6753 [ 2850.608645] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3986 to 0x0:4001 [ 2871.358980] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2879.629206] Lustre: lustre-OST0001: deleting orphan objects from 0x0:3986 to 0x0:4033 [ 2879.629707] Lustre: lustre-OST0000: deleting orphan objects from 0x0:6674 to 0x0:6785 [ 2881.578148] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2882.153433] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2885.667176] Lustre: DEBUG MARKER: == replay-single test 61c: test race mds llog sync vs llog cleanup ========================================================== 17:48:13 (1713390493) [ 2899.633281] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 192.168.204.28@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 2899.641032] LustreError: Skipped 5 previous similar messages [ 2899.670495] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2910.711710] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2911.497001] Lustre: lustre-OST0000: deleting orphan objects from 0x0:6787 to 0x0:6817 [ 2914.744498] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2915.300828] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2919.096161] Lustre: DEBUG MARKER: == replay-single test 61d: error in llog_setup should cleanup the llog context correctly ========================================================== 17:48:46 (1713390526) [ 2923.032115] Lustre: *** cfs_fail_loc=605, val=0*** [ 2923.034501] LustreError: 29662:0:(llog_obd.c:207:llog_setup()) MGS: ctxt 0 lop_setup=ffffffffa05542c0 failed: rc = -95 [ 2923.039573] LustreError: 29662:0:(obd_config.c:774:class_setup()) setup MGS failed (-95) [ 2923.043425] LustreError: 29662:0:(obd_mount.c:200:lustre_start_simple()) MGS setup error -95 [ 2923.047280] LustreError: 29662:0:(obd_mount_server.c:131:server_deregister_mount()) MGS not registered [ 2923.051494] LustreError: 15e-a: Failed to start MGS 'MGS' (-95). Is the 'mgs' module loaded? [ 2923.055340] LustreError: 29662:0:(obd_mount_server.c:1644:server_put_super()) no obd lustre-MDT0000 [ 2923.070800] LustreError: 29662:0:(super25.c:183:lustre_fill_super()) llite: Unable to mount : rc = -95 [ 2926.364008] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2928.945417] Lustre: DEBUG MARKER: == replay-single test 62: don't mis-drop resent replay === 17:48:56 (1713390536) [ 2929.014402] Lustre: lustre-OST0001: deleting orphan objects from 0x0:4035 to 0x0:4065 [ 2929.014446] Lustre: lustre-OST0000: deleting orphan objects from 0x0:6787 to 0x0:6849 [ 2930.384946] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2949.597986] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 2958.704031] Lustre: *** cfs_fail_loc=707, val=0*** [ 2965.717102] Lustre: lustre-MDT0000: Client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp) reconnected, waiting for 1 clients in recovery for 0:58 [ 2965.902788] Lustre: lustre-OST0001: deleting orphan objects from 0x0:4079 to 0x0:4097 [ 2965.902799] Lustre: lustre-OST0000: deleting orphan objects from 0x0:6862 to 0x0:6881 [ 2967.799419] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2968.287856] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2972.243464] Lustre: DEBUG MARKER: == replay-single test 65a: AT: verify early replies ====== 17:49:39 (1713390579) [ 2995.562198] LustreError: 480:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a sleeping for 6000ms [ 3001.566988] LustreError: 480:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a awake [ 3013.191115] Lustre: DEBUG MARKER: == replay-single test 65b: AT: verify early replies on packed reply / bulk ========================================================== 17:50:20 (1713390620) [ 
3036.627298] LustreError: 14139:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 224 sleeping for 6000ms [ 3042.630993] LustreError: 14139:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 224 awake [ 3045.738319] Lustre: DEBUG MARKER: == replay-single test 66a: AT: verify MDT service time adjusts with no early replies ========================================================== 17:50:53 (1713390653) [ 3068.755993] LustreError: 1007:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a sleeping for 5000ms [ 3073.760019] LustreError: 1007:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a awake [ 3074.567125] LustreError: 32270:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a sleeping for 10000ms [ 3084.571990] LustreError: 32270:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a awake [ 3096.416001] Lustre: DEBUG MARKER: == replay-single test 66b: AT: verify net latency adjusts ========================================================== 17:51:43 (1713390703) [ 3145.046527] Lustre: DEBUG MARKER: == replay-single test 67a: AT: verify slow request processing doesn't induce reconnects ========================================================== 17:52:32 (1713390752) [ 3168.049251] LustreError: 32270:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a sleeping for 400ms [ 3168.453004] LustreError: 32270:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a awake [ 3176.500988] LustreError: 32270:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a awake [ 3176.507296] LustreError: 32270:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 17 previous similar messages [ 3176.516713] LustreError: 32272:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a sleeping for 400ms [ 3176.521212] LustreError: 32272:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 18 previous similar messages [ 3192.635991] LustreError: 32270:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a awake [ 3192.640602] LustreError: 32270:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 44 previous similar messages [ 3192.649307] LustreError: 32270:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 50a sleeping for 400ms [ 3192.658642] LustreError: 32270:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 44 previous similar messages [ 3200.026774] Lustre: DEBUG MARKER: == replay-single test 67b: AT: verify instant slowdown doesn't induce reconnects ========================================================== 17:53:27 (1713390807) [ 3223.880344] Lustre: DEBUG MARKER: phase 2 [ 3226.979606] Lustre: DEBUG MARKER: == replay-single test 68: AT: verify slowing locks ======= 17:53:54 (1713390834) [ 3297.034388] Lustre: DEBUG MARKER: == replay-single test 70a: check multi client t-f ======== 17:55:04 (1713390904) [ 3297.536841] Lustre: DEBUG MARKER: SKIP: replay-single test_70a Need two or more clients, have 1 [ 3298.114087] Lustre: DEBUG MARKER: == replay-single test 70b: dbench 1mdts recovery; 1 clients ========================================================== 17:55:05 (1713390905) [ 3299.827837] Lustre: DEBUG MARKER: Started rundbench load pid=11998 ... 
[ 3302.268098] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3303.801698] Lustre: DEBUG MARKER: test_70b fail mds1 1 times [ 3311.918117] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713390912/real 1713390912] req@ffff880136069c80 x1796617155235968/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713390919 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:2.0' [ 3311.934346] Lustre: 2852:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 35 previous similar messages [ 3322.927369] Lustre: MGC192.168.204.128@tcp: Connection restored to (at 0@lo) [ 3322.930797] Lustre: Skipped 26 previous similar messages [ 3323.114272] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3323.117433] Lustre: Skipped 10 previous similar messages [ 3323.141310] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3323.144430] Lustre: Skipped 10 previous similar messages [ 3324.397257] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3325.289637] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 3325.294165] Lustre: Skipped 8 previous similar messages [ 3325.629439] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. [ 3325.633935] Lustre: Skipped 8 previous similar messages [ 3325.657223] Lustre: lustre-OST0000: deleting orphan objects from 0x0:6996 to 0x0:7041 [ 3325.657275] Lustre: lustre-OST0001: deleting orphan objects from 0x0:4191 to 0x0:4225 [ 3328.337787] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3328.909982] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3332.629569] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3334.175837] Lustre: DEBUG MARKER: test_70b fail mds1 2 times [ 3334.897103] Lustre: Failing over lustre-MDT0000 [ 3334.899423] Lustre: Skipped 8 previous similar messages [ 3334.931605] LustreError: 15523:0:(ldlm_lockd.c:1427:ldlm_handle_enqueue0()) ### lock on destroyed export ffff88009cb57800 ns: mdt-lustre-MDT0000_UUID lock: ffff8800a0594900/0x975df49111b09f43 lrc: 3/0,0 mode: CW/CW res: [0x20001b1b3:0xf96:0x0].0x0 bits 0x5/0x0 rrc: 2 type: IBT gid 0 flags: 0x50306400000000 nid: 192.168.204.28@tcp remote: 0x3ab18491c5b9fdf1 expref: 4 pid: 15523 timeout: 0 lvb_type: 0 [ 3335.311463] Lustre: lustre-MDT0000: Not available for connect from 192.168.204.28@tcp (stopping) [ 3339.126836] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3339.130313] Lustre: Skipped 1 previous similar message [ 3340.319820] Lustre: lustre-MDT0000: Not available for connect from 192.168.204.28@tcp (stopping) [ 3340.958775] LustreError: 17203:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 3340.963336] LustreError: 17203:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 43 previous similar messages [ 3341.051291] Lustre: server umount lustre-MDT0000 complete [ 3341.053828] Lustre: Skipped 9 previous similar messages [ 3358.677685] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3366.672932] Lustre: lustre-OST0001: deleting orphan objects from 0x0:4286 to 0x0:4321 [ 
3366.672948] Lustre: lustre-OST0000: deleting orphan objects from 0x0:7102 to 0x0:7137 [ 3368.640443] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3369.215759] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3372.556524] LustreError: 20267:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 3372.562470] LustreError: 20267:0:(osd_handler.c:694:osd_ro()) Skipped 3 previous similar messages [ 3372.827668] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3374.359793] Lustre: DEBUG MARKER: test_70b fail mds1 3 times [ 3375.090793] Lustre: lustre-MDT0000: Not available for connect from 192.168.204.28@tcp (stopping) [ 3375.091177] LustreError: 2850:0:(client.c:1256:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff88009c8a8e40 x1796617155361088/t0(0) o6->lustre-OST0001-osc-MDT0000@0@lo:28/4 lens 544/432 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'osp-syn-1-0.0' [ 3382.318092] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3382.324897] Lustre: Skipped 13 previous similar messages [ 3394.882435] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3403.719772] Lustre: lustre-OST0000: deleting orphan objects from 0x0:7197 to 0x0:7233 [ 3403.719791] Lustre: lustre-OST0001: deleting orphan objects from 0x0:4381 to 0x0:4417 [ 3405.652932] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3406.207723] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3410.007353] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3411.526928] Lustre: DEBUG MARKER: test_70b fail mds1 4 times [ 3412.206671] LustreError: 22689:0:(ldlm_lockd.c:1427:ldlm_handle_enqueue0()) ### lock on destroyed export ffff880094721800 ns: mdt-lustre-MDT0000_UUID lock: ffff880094498240/0x975df49111b1e47d lrc: 3/0,0 mode: CW/CW res: [0x20001b1b3:0x107a:0x0].0x0 bits 0x5/0x0 rrc: 2 type: IBT gid 0 flags: 0x50200000000000 nid: 192.168.204.28@tcp remote: 0x3ab18491c5ba6221 expref: 4 pid: 22689 timeout: 0 lvb_type: 0 [ 3413.375659] Lustre: lustre-MDT0000: Not available for connect from 192.168.204.28@tcp (stopping) [ 3434.140396] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3434.753520] Lustre: lustre-OST0000: deleting orphan objects from 0x0:7288 to 0x0:7329 [ 3434.753758] Lustre: lustre-OST0001: deleting orphan objects from 0x0:4472 to 0x0:4513 [ 3437.557788] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3437.972307] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3441.491571] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3442.884631] Lustre: DEBUG MARKER: test_70b fail mds1 5 times [ 3452.766065] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3452.771898] LustreError: Skipped 7 previous similar messages [ 3458.774618] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server 
handle changed from 0x975df49111b1e4b5 to 0x975df49111b2aaf3 [ 3458.777627] Lustre: Skipped 7 previous similar messages [ 3459.751608] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3471.884753] Lustre: lustre-OST0001: deleting orphan objects from 0x0:4580 to 0x0:4609 [ 3471.884774] Lustre: lustre-OST0000: deleting orphan objects from 0x0:7396 to 0x0:7425 [ 3473.831979] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3474.395443] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3478.074498] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3479.608054] Lustre: DEBUG MARKER: test_70b fail mds1 6 times [ 3497.452483] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3508.970650] Lustre: lustre-OST0001: deleting orphan objects from 0x0:4666 to 0x0:4705 [ 3508.971372] Lustre: lustre-OST0000: deleting orphan objects from 0x0:7482 to 0x0:7521 [ 3510.830725] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3511.355047] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3515.190643] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3516.748657] Lustre: DEBUG MARKER: test_70b fail mds1 7 times [ 3536.662397] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3545.909924] Lustre: lustre-OST0001: deleting orphan objects from 0x0:4757 to 0x0:4801 [ 3545.909933] Lustre: lustre-OST0000: deleting orphan objects from 0x0:7573 to 0x0:7617 [ 3547.824743] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3548.396570] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3552.148859] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3553.684613] Lustre: DEBUG MARKER: test_70b fail mds1 8 times [ 3569.870421] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3582.906959] Lustre: lustre-OST0000: deleting orphan objects from 0x0:7662 to 0x0:7681 [ 3582.906962] Lustre: lustre-OST0001: deleting orphan objects from 0x0:4847 to 0x0:4865 [ 3584.901637] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3585.485015] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3589.244911] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3590.786047] Lustre: DEBUG MARKER: test_70b fail mds1 9 times [ 3591.483554] Lustre: lustre-MDT0000: Not available for connect from 192.168.204.28@tcp (stopping) [ 3591.487512] Lustre: Skipped 2 previous similar messages [ 3609.096769] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3619.804422] Lustre: lustre-OST0000: deleting orphan objects from 0x0:7733 to 0x0:7777 [ 3619.806113] Lustre: lustre-OST0001: deleting orphan objects from 0x0:4917 to 0x0:4961 [ 3621.787456] Lustre: 
DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3622.369233] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3661.904915] Lustre: DEBUG MARKER: == replay-single test 70c: tar 1mdts recovery ============ 18:01:09 (1713391269) [ 3783.523343] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3794.095682] Lustre: DEBUG MARKER: test_70c fail mds1 1 times [ 3813.559041] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3830.779395] Lustre: lustre-OST0001: deleting orphan objects from 0x0:6967 to 0x0:7009 [ 3830.779496] Lustre: lustre-OST0000: deleting orphan objects from 0x0:9782 to 0x0:9825 [ 3832.448390] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3833.022140] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3955.802652] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3966.339571] Lustre: DEBUG MARKER: test_70c fail mds1 2 times [ 3967.015872] Lustre: Failing over lustre-MDT0000 [ 3967.018261] Lustre: Skipped 8 previous similar messages [ 3967.097776] LustreError: 12219:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 3967.102985] LustreError: 12219:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 41 previous similar messages [ 3967.197985] Lustre: server umount lustre-MDT0000 complete [ 3967.199265] Lustre: Skipped 8 previous similar messages [ 3974.462039] Lustre: 2850:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713391575/real 1713391575] req@ffff8801322bc280 x1796617157190016/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713391582 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:0.0' [ 3974.474532] Lustre: 2850:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 42 previous similar messages [ 3985.471297] Lustre: MGC192.168.204.128@tcp: Connection restored to (at 0@lo) [ 3985.475223] Lustre: Skipped 27 previous similar messages [ 3985.654362] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3985.660781] Lustre: Skipped 9 previous similar messages [ 3985.684662] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3985.687338] Lustre: Skipped 9 previous similar messages [ 3986.945111] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 3999.091193] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 3999.093789] Lustre: Skipped 9 previous similar messages [ 4002.822980] Lustre: lustre-MDT0000: Recovery over after 0:04, of 1 clients 1 recovered and 0 were evicted. 
[ 4002.827212] Lustre: Skipped 9 previous similar messages [ 4002.848004] Lustre: lustre-OST0001: deleting orphan objects from 0x0:8538 to 0x0:8577 [ 4002.848525] Lustre: lustre-OST0000: deleting orphan objects from 0x0:11354 to 0x0:11393 [ 4004.603185] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4005.153903] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4065.980707] Lustre: DEBUG MARKER: == replay-single test 70d: mkdir/rmdir striped dir 1mdts recovery ========================================================== 18:07:53 (1713391673) [ 4066.467930] Lustre: DEBUG MARKER: SKIP: replay-single test_70d needs >= 2 MDTs [ 4067.049617] Lustre: DEBUG MARKER: == replay-single test 70e: rename cross-MDT with random fails ========================================================== 18:07:54 (1713391674) [ 4067.535443] Lustre: DEBUG MARKER: SKIP: replay-single test_70e needs >= 2 MDTs [ 4068.121557] Lustre: DEBUG MARKER: == replay-single test 70f: OSS O_DIRECT recovery with 1 clients ========================================================== 18:07:55 (1713391675) [ 4072.453770] LustreError: 19814:0:(osd_handler.c:694:osd_ro()) lustre-OST0000: *** setting device osd-zfs read-only *** [ 4072.458430] LustreError: 19814:0:(osd_handler.c:694:osd_ro()) Skipped 8 previous similar messages [ 4072.791108] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4074.299535] Lustre: DEBUG MARKER: test_70f failing OST 1 times [ 4076.790435] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4076.800556] Lustre: Skipped 14 previous similar messages [ 4076.804322] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4076.811539] LustreError: Skipped 4 previous similar messages [ 4081.798627] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4081.808800] LustreError: Skipped 1 previous similar message [ 4089.007188] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4089.235048] Lustre: lustre-OST0000: deleting orphan objects from 0x0:11814 to 0x0:11841 [ 4092.882666] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4093.321880] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4100.966340] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4102.462836] Lustre: DEBUG MARKER: test_70f failing OST 2 times [ 4104.193426] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 192.168.204.28@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 4104.201103] LustreError: Skipped 2 previous similar messages [ 4104.262332] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 4117.136176] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4117.777316] Lustre: lustre-OST0000: deleting orphan objects from 0x0:11814 to 0x0:11873 [ 4120.901416] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4121.462444] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4129.164013] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4130.707058] Lustre: DEBUG MARKER: test_70f failing OST 3 times [ 4132.806513] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 4132.811423] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4132.818835] LustreError: Skipped 5 previous similar messages [ 4145.378921] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4146.105014] Lustre: lustre-OST0000: deleting orphan objects from 0x0:11814 to 0x0:11905 [ 4149.256776] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4149.842840] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4157.529228] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4159.064205] Lustre: DEBUG MARKER: test_70f failing OST 4 times [ 4161.126400] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 4169.110473] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 4169.117431] LustreError: Skipped 7 previous similar messages [ 4172.708021] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4173.894800] Lustre: lustre-OST0000: deleting orphan objects from 0x0:11814 to 0x0:11937 [ 4175.960287] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4176.406804] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4184.188870] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4185.710475] Lustre: DEBUG MARKER: test_70f failing OST 5 times [ 4200.172953] Lustre: lustre-OST0000: deleting orphan objects from 0x0:11814 to 0x0:11969 [ 4200.238959] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4204.131070] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4204.708831] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4210.411031] Lustre: DEBUG MARKER: == replay-single test 71a: mkdir/rmdir striped dir with 2 mdts recovery ========================================================== 18:10:17 (1713391817) [ 4210.929797] Lustre: DEBUG MARKER: SKIP: replay-single test_71a needs >= 2 MDTs [ 4211.529807] Lustre: DEBUG MARKER: == replay-single test 73a: open(O_CREAT), unlink, replay, reconnect before open replay, close ========================================================== 18:10:19 (1713391819) [ 4212.936581] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4226.086053] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4226.092022] LustreError: Skipped 6 previous similar messages [ 4232.094984] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111d24032 to 0x975df49111d9c58d [ 4232.102465] Lustre: Skipped 6 previous similar messages [ 4233.567118] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4238.402579] Lustre: *** cfs_fail_loc=302, val=2147483648*** [ 4245.415712] Lustre: lustre-MDT0000: Client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp) reconnected, waiting for 1 clients in recovery for 0:58 [ 4245.481061] Lustre: lustre-OST0000: deleting orphan objects from 0x0:11971 to 0x0:12001 [ 4245.481063] Lustre: lustre-OST0001: deleting orphan objects from 0x0:8999 to 0x0:9025 [ 4247.348018] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4247.902198] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4251.389320] Lustre: DEBUG MARKER: == replay-single test 73b: open(O_CREAT), unlink, replay, reconnect at open_replay reply, close ========================================================== 18:10:58 (1713391858) [ 4252.777221] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4271.447239] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 4271.449823] LustreError: 23953:0:(ldlm_lib.c:3225:target_send_reply_msg()) @@@ dropping reply req@ffff880091c876c0 x1796617172698752/t373662154755(373662154755) 
o101->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:170/0 lens 592/600 e 0 to 0 dl 1713391885 ref 1 fl Interpret:/4/0 rc 301/0 job:'multiop.0' [ 4271.766584] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4278.455791] Lustre: lustre-MDT0000: Client 02946759-878f-452d-9e9a-309fb22e4b75 (at 192.168.204.28@tcp) reconnected, waiting for 1 clients in recovery for 0:58 [ 4278.464087] Lustre: 23953:0:(mdt_recovery.c:200:mdt_req_from_lrd()) @@@ restoring transno req@ffff8801372d4c00 x1796617172698752/t373662154755(373662154755) o101->02946759-878f-452d-9e9a-309fb22e4b75@192.168.204.28@tcp:177/0 lens 592/3424 e 0 to 0 dl 1713391892 ref 1 fl Interpret:/6/0 rc 0/0 job:'multiop.0' [ 4278.513989] Lustre: lustre-OST0000: deleting orphan objects from 0x0:11971 to 0x0:12033 [ 4278.514003] Lustre: lustre-OST0001: deleting orphan objects from 0x0:9027 to 0x0:9057 [ 4280.390765] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4280.931211] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4284.403484] Lustre: DEBUG MARKER: == replay-single test 74: Ensure applications don't fail waiting for OST recovery ========================================================== 18:11:31 (1713391891) [ 4286.486925] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4286.493921] LustreError: Skipped 6 previous similar messages [ 4304.716487] Lustre: lustre-OST0001: deleting orphan objects from 0x0:9027 to 0x0:9089 [ 4306.008608] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4309.325914] Lustre: lustre-OST0000: Denying connection for new client bad16f88-6ff2-4240-91ba-b7bc11a89c9e (at 192.168.204.28@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 4309.334314] Lustre: Skipped 10 previous similar messages [ 4309.583017] Lustre: lustre-OST0000: deleting orphan objects from 0x0:11971 to 0x0:12065 [ 4309.685297] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4315.367803] Lustre: DEBUG MARKER: == replay-single test 80a: DNE: create remote dir, drop update rep from MDT0, fail MDT0 ========================================================== 18:12:02 (1713391922) [ 4315.848791] Lustre: DEBUG MARKER: SKIP: replay-single test_80a needs >= 2 MDTs [ 4316.445474] Lustre: DEBUG MARKER: == replay-single test 80b: DNE: create remote dir, drop update rep from MDT0, fail MDT1 ========================================================== 18:12:04 (1713391924) [ 4316.948665] Lustre: DEBUG MARKER: SKIP: replay-single test_80b needs >= 2 MDTs [ 4317.525792] Lustre: DEBUG MARKER: == replay-single test 80c: DNE: create remote dir, drop update rep from MDT1, fail MDT[0,1] ========================================================== 18:12:05 (1713391925) [ 4318.044153] Lustre: DEBUG MARKER: SKIP: replay-single test_80c needs >= 2 MDTs [ 4318.627337] Lustre: DEBUG MARKER: == replay-single test 80d: DNE: create remote dir, drop update rep from MDT1, fail 2 MDTs ========================================================== 18:12:06 (1713391926) [ 4319.141349] Lustre: 
DEBUG MARKER: SKIP: replay-single test_80d needs >= 2 MDTs [ 4319.730845] Lustre: DEBUG MARKER: == replay-single test 80e: DNE: create remote dir, drop MDT1 rep, fail MDT0 ========================================================== 18:12:07 (1713391927) [ 4320.229967] Lustre: DEBUG MARKER: SKIP: replay-single test_80e needs >= 2 MDTs [ 4320.750329] Lustre: DEBUG MARKER: == replay-single test 80f: DNE: create remote dir, drop MDT1 rep, fail MDT1 ========================================================== 18:12:08 (1713391928) [ 4321.223690] Lustre: DEBUG MARKER: SKIP: replay-single test_80f needs >= 2 MDTs [ 4321.806770] Lustre: DEBUG MARKER: == replay-single test 80g: DNE: create remote dir, drop MDT1 rep, fail MDT0, then MDT1 ========================================================== 18:12:09 (1713391929) [ 4322.291378] Lustre: DEBUG MARKER: SKIP: replay-single test_80g needs >= 2 MDTs [ 4322.883515] Lustre: DEBUG MARKER: == replay-single test 80h: DNE: create remote dir, drop MDT1 rep, fail 2 MDTs ========================================================== 18:12:10 (1713391930) [ 4323.383934] Lustre: DEBUG MARKER: SKIP: replay-single test_80h needs >= 2 MDTs [ 4323.964068] Lustre: DEBUG MARKER: == replay-single test 81a: DNE: unlink remote dir, drop MDT0 update rep, fail MDT1 ========================================================== 18:12:11 (1713391931) [ 4324.458319] Lustre: DEBUG MARKER: SKIP: replay-single test_81a needs >= 2 MDTs [ 4325.053850] Lustre: DEBUG MARKER: == replay-single test 81b: DNE: unlink remote dir, drop MDT0 update reply, fail MDT0 ========================================================== 18:12:12 (1713391932) [ 4325.545829] Lustre: DEBUG MARKER: SKIP: replay-single test_81b needs >= 2 MDTs [ 4326.129766] Lustre: DEBUG MARKER: == replay-single test 81c: DNE: unlink remote dir, drop MDT0 update reply, fail MDT0,MDT1 ========================================================== 18:12:13 (1713391933) [ 4326.619448] Lustre: DEBUG MARKER: SKIP: replay-single test_81c needs >= 2 MDTs [ 4327.203400] Lustre: DEBUG MARKER: == replay-single test 81d: DNE: unlink remote dir, drop MDT0 update reply, fail 2 MDTs ========================================================== 18:12:14 (1713391934) [ 4327.703957] Lustre: DEBUG MARKER: SKIP: replay-single test_81d needs >= 2 MDTs [ 4328.292354] Lustre: DEBUG MARKER: == replay-single test 81e: DNE: unlink remote dir, drop MDT1 req reply, fail MDT0 ========================================================== 18:12:15 (1713391935) [ 4328.793518] Lustre: DEBUG MARKER: SKIP: replay-single test_81e needs >= 2 MDTs [ 4329.385354] Lustre: DEBUG MARKER: == replay-single test 81f: DNE: unlink remote dir, drop MDT1 req reply, fail MDT1 ========================================================== 18:12:16 (1713391936) [ 4329.885330] Lustre: DEBUG MARKER: SKIP: replay-single test_81f needs >= 2 MDTs [ 4330.472956] Lustre: DEBUG MARKER: == replay-single test 81g: DNE: unlink remote dir, drop req reply, fail M0, then M1 ========================================================== 18:12:18 (1713391938) [ 4330.970150] Lustre: DEBUG MARKER: SKIP: replay-single test_81g needs >= 2 MDTs [ 4331.563268] Lustre: DEBUG MARKER: == replay-single test 81h: DNE: unlink remote dir, drop request reply, fail 2 MDTs ========================================================== 18:12:19 (1713391939) [ 4332.079701] Lustre: DEBUG MARKER: SKIP: replay-single test_81h needs >= 2 MDTs [ 4332.699108] Lustre: DEBUG MARKER: == replay-single test 84a: stale open during export disconnect 
========================================================== 18:12:20 (1713391940) [ 4333.360577] Lustre: 30795:0:(genops.c:1710:obd_export_evict_by_uuid()) lustre-MDT0000: evicting bad16f88-6ff2-4240-91ba-b7bc11a89c9e at administrative request [ 4338.132107] Lustre: DEBUG MARKER: == replay-single test 85a: check the cancellation of unused locks during recovery(IBITS) ========================================================== 18:12:25 (1713391945) [ 4355.107468] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4355.179008] Lustre: lustre-OST0000: deleting orphan objects from 0x0:12117 to 0x0:12161 [ 4355.179260] Lustre: lustre-OST0001: deleting orphan objects from 0x0:9141 to 0x0:9185 [ 4359.003385] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4359.551464] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4363.101535] Lustre: DEBUG MARKER: == replay-single test 85b: check the cancellation of unused locks during recovery(EXTENT) ========================================================== 18:12:50 (1713391970) [ 4368.790330] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 4381.786874] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4382.440288] Lustre: lustre-OST0000: deleting orphan objects from 0x0:12262 to 0x0:12289 [ 4385.688139] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4386.242177] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4390.009829] Lustre: DEBUG MARKER: == replay-single test 86: umount server after clear nid_stats should not hit LBUG ========================================================== 18:13:17 (1713391997) [ 4393.932176] Lustre: lustre-OST0000: deleting orphan objects from 0x0:12262 to 0x0:12321 [ 4393.932181] Lustre: lustre-OST0001: deleting orphan objects from 0x0:9141 to 0x0:9217 [ 4395.173208] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4398.202030] Lustre: DEBUG MARKER: == replay-single test 87a: write replay ================== 18:13:25 (1713392005) [ 4399.891982] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4403.862396] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 4414.773098] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4415.561140] Lustre: lustre-OST0000: deleting orphan objects from 0x0:12323 to 0x0:12353 [ 4418.671246] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4419.213999] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4422.800175] Lustre: DEBUG MARKER: == replay-single test 87b: write replay with changed data (checksum resend) ========================================================== 18:13:50 (1713392030) [ 4424.471644] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4428.502719] LustreError: 137-5: 
lustre-OST0000_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4428.509926] LustreError: Skipped 14 previous similar messages [ 4440.311979] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4440.763376] LustreError: 168-f: lustre-OST0000: BAD WRITE CHECKSUM: from 12345-192.168.204.28@tcp inode [0x20002caf1:0x5:0x0] object 0x0:12354 extent [0-1048575]: client csum cfc14393, server csum 72ae669e [ 4440.866972] Lustre: lustre-OST0000: deleting orphan objects from 0x0:12355 to 0x0:12385 [ 4444.225146] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4444.797099] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4448.188357] Lustre: DEBUG MARKER: == replay-single test 88: MDS should not assign same objid to different files ========================================================== 18:14:15 (1713392055) [ 4449.569382] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4450.968029] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4454.589664] LustreError: 11723:0:(ldlm_lockd.c:2521:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713392062 with bad export cookie 10907142776468153707 [ 4489.545884] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4496.396099] Lustre: lustre-OST0001: deleting orphan objects from 0x0:9258 to 0x0:9281 [ 4503.239178] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4503.288044] Lustre: lustre-OST0000: deleting orphan objects from 0x0:12355 to 0x0:12417 [ 4509.125591] Lustre: DEBUG MARKER: == replay-single test 89: no disk space leak on late ost connection ========================================================== 18:15:16 (1713392116) [ 4521.230014] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_disconnect to node 0@lo failed: rc = -107 [ 4536.461381] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4539.313063] Lustre: lustre-OST0001: deleting orphan objects from 0x0:9258 to 0x0:9313 [ 4542.128736] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4542.927718] Lustre: lustre-OST0000: Denying connection for new client 6c1b5c46-4c62-45cb-8458-3df56e45cae3 (at 192.168.204.28@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 1:08 [ 4562.959467] Lustre: lustre-OST0000: Denying connection for new client 6c1b5c46-4c62-45cb-8458-3df56e45cae3 (at 192.168.204.28@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 0:48 [ 4562.968114] Lustre: Skipped 3 previous similar messages [ 4598.015364] Lustre: lustre-OST0000: Denying connection for new client 6c1b5c46-4c62-45cb-8458-3df56e45cae3 (at 192.168.204.28@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 0:13 [ 4598.024032] Lustre: Skipped 6 previous similar messages [ 4611.471030] Lustre: lustre-OST0000: recovery is timed out, evict stale exports [ 
4611.475343] Lustre: lustre-OST0000: disconnecting 1 stale clients [ 4611.494593] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.204.128@tcp (at 0@lo) [ 4611.494624] Lustre: lustre-OST0000: Recovery over after 1:10, of 2 clients 1 recovered and 1 was evicted. [ 4611.494625] Lustre: Skipped 15 previous similar messages [ 4611.497526] Lustre: lustre-OST0000: deleting orphan objects from 0x0:12428 to 0x0:12449 [ 4611.508495] Lustre: Skipped 30 previous similar messages [ 4614.488112] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 68 sec [ 4634.574078] Lustre: DEBUG MARKER: free_before: 7518208 free_after: 7518208 [ 4636.794767] Lustre: DEBUG MARKER: == replay-single test 90: lfs find identifies the missing striped file segments ========================================================== 18:17:24 (1713392244) [ 4638.258734] Lustre: Failing over lustre-OST0001 [ 4638.261647] Lustre: Skipped 18 previous similar messages [ 4638.268091] LustreError: 19675:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 4638.272493] LustreError: 19675:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 90 previous similar messages [ 4638.288717] Lustre: server umount lustre-OST0001 complete [ 4638.291290] Lustre: Skipped 18 previous similar messages [ 4640.342555] LustreError: 11-0: lustre-OST0001-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 4651.164763] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180 [ 4651.169872] Lustre: Skipped 18 previous similar messages [ 4651.174680] Lustre: lustre-OST0001: in recovery but waiting for the first client to connect [ 4651.179108] Lustre: Skipped 16 previous similar messages [ 4652.453076] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4653.009022] Lustre: lustre-OST0001: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 4653.013264] Lustre: Skipped 16 previous similar messages [ 4653.061909] Lustre: lustre-OST0001: deleting orphan objects from 0x0:9316 to 0x0:9345 [ 4656.636418] Lustre: DEBUG MARKER: == replay-single test 93a: replay + reconnect ============ 18:17:44 (1713392264) [ 4671.849883] LustreError: 22323:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 715 sleeping for 40000ms [ 4671.856388] LustreError: 22323:0:(fail.c:138:__cfs_fail_timeout_set()) Skipped 21 previous similar messages [ 4671.962966] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4672.832012] Lustre: *** cfs_fail_loc=715, val=0*** [ 4678.162239] Lustre: lustre-OST0000: Client 6c1b5c46-4c62-45cb-8458-3df56e45cae3 (at 192.168.204.28@tcp) reconnected, waiting for 2 clients in recovery for 0:59 [ 4678.848018] Lustre: 2849:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713392279/real 1713392279] req@ffff880135aa4740 x1796617157752832/t0(0) o400->lustre-OST0000-osc-MDT0000@0@lo:28/4 lens 224/224 e 0 to 1 dl 1713392286 ref 1 fl Rpc:XQr/c0/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' [ 4678.860558] Lustre: 2849:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 34 previous similar messages [ 4679.172022] Lustre: *** cfs_fail_loc=715, val=0*** [ 4679.174362] Lustre: Skipped 1 previous similar message [ 4680.174030] Lustre: *** cfs_fail_loc=715, val=0*** [ 4685.170499] Lustre: lustre-OST0000: 
Client 6c1b5c46-4c62-45cb-8458-3df56e45cae3 (at 192.168.204.28@tcp) reconnected, waiting for 2 clients in recovery for 0:52 [ 4685.176802] Lustre: Skipped 1 previous similar message [ 4686.182061] Lustre: *** cfs_fail_loc=715, val=0*** [ 4692.181509] Lustre: lustre-OST0000: Client 6c1b5c46-4c62-45cb-8458-3df56e45cae3 (at 192.168.204.28@tcp) reconnected, waiting for 2 clients in recovery for 0:45 [ 4692.188250] Lustre: Skipped 1 previous similar message [ 4693.194005] Lustre: *** cfs_fail_loc=715, val=0*** [ 4693.196296] Lustre: Skipped 1 previous similar message [ 4706.195438] Lustre: lustre-OST0000: Client 6c1b5c46-4c62-45cb-8458-3df56e45cae3 (at 192.168.204.28@tcp) reconnected, waiting for 2 clients in recovery for 0:31 [ 4706.201607] Lustre: Skipped 3 previous similar messages [ 4707.205998] Lustre: *** cfs_fail_loc=715, val=0*** [ 4707.208162] Lustre: Skipped 3 previous similar messages [ 4711.863995] LustreError: 22323:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 715 awake [ 4711.868707] LustreError: 22323:0:(fail.c:149:__cfs_fail_timeout_set()) Skipped 21 previous similar messages [ 4711.898849] Lustre: lustre-OST0000: deleting orphan objects from 0x0:12453 to 0x0:12481 [ 4713.799874] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4714.342171] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4717.809906] Lustre: DEBUG MARKER: == replay-single test 93b: replay + reconnect on mds ===== 18:18:45 (1713392325) [ 4728.774042] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4728.780677] Lustre: Skipped 22 previous similar messages [ 4736.261761] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4745.267446] LustreError: 24715:0:(fail.c:138:__cfs_fail_timeout_set()) cfs_fail_timeout id 715 sleeping for 80000ms [ 4746.270087] Lustre: *** cfs_fail_loc=715, val=0*** [ 4746.272310] Lustre: Skipped 1 previous similar message [ 4752.267583] Lustre: lustre-MDT0000: Client 6c1b5c46-4c62-45cb-8458-3df56e45cae3 (at 192.168.204.28@tcp) reconnected, waiting for 1 clients in recovery for 0:58 [ 4752.273655] Lustre: Skipped 1 previous similar message [ 4781.285991] Lustre: *** cfs_fail_loc=715, val=0*** [ 4781.288081] Lustre: Skipped 4 previous similar messages [ 4787.285566] Lustre: lustre-MDT0000: Client 6c1b5c46-4c62-45cb-8458-3df56e45cae3 (at 192.168.204.28@tcp) reconnected, waiting for 1 clients in recovery for 0:23 [ 4787.291644] Lustre: Skipped 4 previous similar messages [ 4815.301443] Lustre: lustre-MDT0000: Recovery already passed deadline 0:04. If you do not want to wait more, you may force target eviction via 'lctl --device lustre-MDT0000 abort_recovery'. [ 4822.310480] Lustre: lustre-MDT0000: Recovery already passed deadline 0:11. If you do not want to wait more, you may force target eviction via 'lctl --device lustre-MDT0000 abort_recovery'. 
[ 4825.273111] LustreError: 24715:0:(fail.c:149:__cfs_fail_timeout_set()) cfs_fail_timeout id 715 awake [ 4825.292024] Lustre: 24715:0:(ldlm_lib.c:2830:target_recovery_thread()) too long recovery - read logs [ 4825.296696] LustreError: dumping log to /tmp/lustre-log.1713392433.24715 [ 4825.344280] Lustre: lustre-OST0000: deleting orphan objects from 0x0:12492 to 0x0:12513 [ 4825.344292] Lustre: lustre-OST0001: deleting orphan objects from 0x0:9356 to 0x0:9377 [ 4827.186263] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4827.725204] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4831.183737] Lustre: DEBUG MARKER: == replay-single test 100a: DNE: create striped dir, drop update rep from MDT1, fail MDT1 ========================================================== 18:20:38 (1713392438) [ 4831.678614] Lustre: DEBUG MARKER: SKIP: replay-single test_100a needs >= 2 MDTs [ 4832.263423] Lustre: DEBUG MARKER: == replay-single test 100b: DNE: create striped dir, fail MDT0 ========================================================== 18:20:39 (1713392439) [ 4832.772045] Lustre: DEBUG MARKER: SKIP: replay-single test_100b needs >= 2 MDTs [ 4833.359734] Lustre: DEBUG MARKER: == replay-single test 100c: DNE: create striped dir, fail MDT0 ========================================================== 18:20:40 (1713392440) [ 4833.861736] Lustre: DEBUG MARKER: SKIP: replay-single test_100c needs >= 2 MDTs [ 4834.465072] Lustre: DEBUG MARKER: == replay-single test 101: Shouldn't reassign precreated objs to other files after recovery ========================================================== 18:20:42 (1713392442) [ 4835.546127] LustreError: 26644:0:(osd_handler.c:694:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 4835.550781] LustreError: 26644:0:(osd_handler.c:694:osd_ro()) Skipped 10 previous similar messages [ 4835.860101] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4846.733675] LustreError: 166-1: MGC192.168.204.128@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4846.739601] LustreError: Skipped 7 previous similar messages [ 4846.742769] Lustre: Evicted from MGS (at 192.168.204.128@tcp) after server handle changed from 0x975df49111da5816 to 0x975df49111db0055 [ 4846.748511] Lustre: Skipped 7 previous similar messages [ 4846.952651] LustreError: 28084:0:(mdt_handler.c:7428:mdt_iocontrol()) lustre-MDT0000: Aborting recovery for device [ 4846.955746] LustreError: 28084:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 4846.958775] Lustre: 28307:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4846.963727] Lustre: 28307:0:(ldlm_lib.c:2289:target_recovery_overseer()) Skipped 2 previous similar messages [ 4846.967841] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 4847.000098] Lustre: lustre-OST0001: deleting orphan objects from 0x0:9379 to 0x0:9921 [ 4847.000240] Lustre: lustre-OST0000: deleting orphan objects from 0x0:12492 to 0x0:13057 [ 4848.091815] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4872.482105] Lustre: DEBUG MARKER: == replay-single test 102a: check resend (request lost) with multiple modify RPCs in flight ========================================================== 18:21:20 (1713392480) [ 
4872.985026] Lustre: *** cfs_fail_loc=159, val=0*** [ 4872.988926] Lustre: Skipped 3 previous similar messages [ 4879.985941] Lustre: lustre-MDT0000: Client 6c1b5c46-4c62-45cb-8458-3df56e45cae3 (at 192.168.204.28@tcp) reconnecting [ 4879.990918] Lustre: Skipped 2 previous similar messages [ 4882.353589] Lustre: DEBUG MARKER: == replay-single test 102b: check resend (reply lost) with multiple modify RPCs in flight ========================================================== 18:21:29 (1713392489) [ 4882.860007] Lustre: *** cfs_fail_loc=15a, val=0*** [ 4882.864123] Lustre: Skipped 3 previous similar messages [ 4889.862291] Lustre: 29116:0:(mdt_recovery.c:200:mdt_req_from_lrd()) @@@ restoring transno req@ffff880094492f80 x1796617174213056/t408021897208(0) o36->6c1b5c46-4c62-45cb-8458-3df56e45cae3@192.168.204.28@tcp:33/0 lens 488/3152 e 0 to 0 dl 1713392503 ref 1 fl Interpret:/2/0 rc 0/0 job:'chmod.0' [ 4889.874995] Lustre: 29116:0:(mdt_recovery.c:200:mdt_req_from_lrd()) Skipped 6 previous similar messages [ 4892.171792] Lustre: DEBUG MARKER: == replay-single test 102c: check replay w/o reconstruction with multiple mod RPCs in flight ========================================================== 18:21:39 (1713392499) [ 4893.690516] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4911.550154] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4918.915179] Lustre: lustre-OST0000: deleting orphan objects from 0x0:13569 to 0x0:13601 [ 4918.915376] Lustre: lustre-OST0001: deleting orphan objects from 0x0:10432 to 0x0:10465 [ 4920.817685] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4921.377851] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4924.887764] Lustre: DEBUG MARKER: == replay-single test 102d: check replay [ 4945.692336] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4949.402079] Lustre: 2593:0:(mdt_recovery.c:200:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a77bc740 x1796617174237440/t412316860462(0) o36->6c1b5c46-4c62-45cb-8458-3df56e45cae3@192.168.204.28@tcp:93/0 lens 488/3152 e 0 to 0 dl 1713392563 ref 1 fl Interpret:/2/0 rc 0/0 job:'chmod.0' [ 4949.409297] Lustre: lustre-OST0000: deleting orphan objects from 0x0:13605 to 0x0:13633 [ 4949.410486] Lustre: lustre-OST0001: deleting orphan objects from 0x0:10470 to 0x0:10497 [ 4949.413339] Lustre: 2593:0:(mdt_recovery.c:200:mdt_req_from_lrd()) Skipped 6 previous similar messages [ 4951.262049] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4951.807928] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4955.359406] Lustre: DEBUG MARKER: == replay-single test 103: Check otr_next_id overflow ==== 18:22:42 (1713392562) [ 4975.040284] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 4983.425188] Lustre: lustre-OST0001: deleting orphan objects from 0x0:10513 to 0x0:10529 [ 4983.425198] Lustre: lustre-OST0000: deleting orphan objects from 0x0:13649 to 0x0:13665 [ 4985.146265] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) 
mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4985.705220] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4989.235804] Lustre: DEBUG MARKER: == replay-single test 110a: DNE: create striped dir, fail MDT1 ========================================================== 18:23:16 (1713392596) [ 4989.759799] Lustre: DEBUG MARKER: SKIP: replay-single test_110a needs >= 2 MDTs [ 4990.373743] Lustre: DEBUG MARKER: == replay-single test 110b: DNE: create striped dir, fail MDT1 and client ========================================================== 18:23:17 (1713392597) [ 4990.884573] Lustre: DEBUG MARKER: SKIP: replay-single test_110b needs >= 2 MDTs [ 4991.498247] Lustre: DEBUG MARKER: == replay-single test 110c: DNE: create striped dir, fail MDT2 ========================================================== 18:23:19 (1713392599) [ 4992.024729] Lustre: DEBUG MARKER: SKIP: replay-single test_110c needs >= 2 MDTs [ 4992.632666] Lustre: DEBUG MARKER: == replay-single test 110d: DNE: create striped dir, fail MDT2 and client ========================================================== 18:23:20 (1713392600) [ 4993.154118] Lustre: DEBUG MARKER: SKIP: replay-single test_110d needs >= 2 MDTs [ 4993.762653] Lustre: DEBUG MARKER: == replay-single test 110e: DNE: create striped dir, uncommit on MDT2, fail client/MDT1/MDT2 ========================================================== 18:23:21 (1713392601) [ 4994.297032] Lustre: DEBUG MARKER: SKIP: replay-single test_110e needs >= 2 MDTs [ 4994.883939] Lustre: DEBUG MARKER: SKIP: replay-single test_110f skipping excluded test 110f [ 4995.465329] Lustre: DEBUG MARKER: == replay-single test 110g: DNE: create striped dir, uncommit on MDT1, fail client/MDT1/MDT2 ========================================================== 18:23:23 (1713392603) [ 4995.983445] Lustre: DEBUG MARKER: SKIP: replay-single test_110g needs >= 2 MDTs [ 4996.599127] Lustre: DEBUG MARKER: == replay-single test 111a: DNE: unlink striped dir, fail MDT1 ========================================================== 18:23:24 (1713392604) [ 4997.137391] Lustre: DEBUG MARKER: SKIP: replay-single test_111a needs >= 2 MDTs [ 4997.768621] Lustre: DEBUG MARKER: == replay-single test 111b: DNE: unlink striped dir, fail MDT2 ========================================================== 18:23:25 (1713392605) [ 4998.303913] Lustre: DEBUG MARKER: SKIP: replay-single test_111b needs >= 2 MDTs [ 4998.886395] Lustre: DEBUG MARKER: == replay-single test 111c: DNE: unlink striped dir, uncommit on MDT1, fail client/MDT1/MDT2 ========================================================== 18:23:26 (1713392606) [ 4999.404413] Lustre: DEBUG MARKER: SKIP: replay-single test_111c needs >= 2 MDTs [ 5000.018288] Lustre: DEBUG MARKER: == replay-single test 111d: DNE: unlink striped dir, uncommit on MDT2, fail client/MDT1/MDT2 ========================================================== 18:23:27 (1713392607) [ 5000.545481] Lustre: DEBUG MARKER: SKIP: replay-single test_111d needs >= 2 MDTs [ 5001.178076] Lustre: DEBUG MARKER: == replay-single test 111e: DNE: unlink striped dir, uncommit on MDT2, fail MDT1/MDT2 ========================================================== 18:23:28 (1713392608) [ 5001.708565] Lustre: DEBUG MARKER: SKIP: replay-single test_111e needs >= 2 MDTs [ 5002.339720] Lustre: DEBUG MARKER: == replay-single test 111f: DNE: unlink striped dir, uncommit on MDT1, fail MDT1/MDT2 ========================================================== 18:23:29 (1713392609) [ 5002.876151] Lustre: DEBUG MARKER: 
SKIP: replay-single test_111f needs >= 2 MDTs [ 5003.497779] Lustre: DEBUG MARKER: == replay-single test 111g: DNE: unlink striped dir, fail MDT1/MDT2 ========================================================== 18:23:31 (1713392611) [ 5004.018048] Lustre: DEBUG MARKER: SKIP: replay-single test_111g needs >= 2 MDTs [ 5004.631466] Lustre: DEBUG MARKER: == replay-single test 112a: DNE: cross MDT rename, fail MDT1 ========================================================== 18:23:32 (1713392612) [ 5005.152584] Lustre: DEBUG MARKER: SKIP: replay-single test_112a needs >= 4 MDTs [ 5005.766781] Lustre: DEBUG MARKER: == replay-single test 112b: DNE: cross MDT rename, fail MDT2 ========================================================== 18:23:33 (1713392613) [ 5006.301338] Lustre: DEBUG MARKER: SKIP: replay-single test_112b needs >= 4 MDTs [ 5006.919019] Lustre: DEBUG MARKER: == replay-single test 112c: DNE: cross MDT rename, fail MDT3 ========================================================== 18:23:34 (1713392614) [ 5007.437299] Lustre: DEBUG MARKER: SKIP: replay-single test_112c needs >= 4 MDTs [ 5008.074146] Lustre: DEBUG MARKER: == replay-single test 112d: DNE: cross MDT rename, fail MDT4 ========================================================== 18:23:35 (1713392615) [ 5008.629871] Lustre: DEBUG MARKER: SKIP: replay-single test_112d needs >= 4 MDTs [ 5009.277687] Lustre: DEBUG MARKER: == replay-single test 112e: DNE: cross MDT rename, fail MDT1 and MDT2 ========================================================== 18:23:36 (1713392616) [ 5009.812911] Lustre: DEBUG MARKER: SKIP: replay-single test_112e needs >= 4 MDTs [ 5010.444232] Lustre: DEBUG MARKER: == replay-single test 112f: DNE: cross MDT rename, fail MDT1 and MDT3 ========================================================== 18:23:37 (1713392617) [ 5010.968343] Lustre: DEBUG MARKER: SKIP: replay-single test_112f needs >= 4 MDTs [ 5011.575618] Lustre: DEBUG MARKER: == replay-single test 112g: DNE: cross MDT rename, fail MDT1 and MDT4 ========================================================== 18:23:39 (1713392619) [ 5012.101988] Lustre: DEBUG MARKER: SKIP: replay-single test_112g needs >= 4 MDTs [ 5012.710921] Lustre: DEBUG MARKER: == replay-single test 112h: DNE: cross MDT rename, fail MDT2 and MDT3 ========================================================== 18:23:40 (1713392620) [ 5013.231767] Lustre: DEBUG MARKER: SKIP: replay-single test_112h needs >= 4 MDTs [ 5013.844373] Lustre: DEBUG MARKER: == replay-single test 112i: DNE: cross MDT rename, fail MDT2 and MDT4 ========================================================== 18:23:41 (1713392621) [ 5014.362441] Lustre: DEBUG MARKER: SKIP: replay-single test_112i needs >= 4 MDTs [ 5014.973342] Lustre: DEBUG MARKER: == replay-single test 112j: DNE: cross MDT rename, fail MDT3 and MDT4 ========================================================== 18:23:42 (1713392622) [ 5015.500225] Lustre: DEBUG MARKER: SKIP: replay-single test_112j needs >= 4 MDTs [ 5016.108782] Lustre: DEBUG MARKER: == replay-single test 112k: DNE: cross MDT rename, fail MDT1,MDT2,MDT3 ========================================================== 18:23:43 (1713392623) [ 5016.624225] Lustre: DEBUG MARKER: SKIP: replay-single test_112k needs >= 4 MDTs [ 5017.222545] Lustre: DEBUG MARKER: == replay-single test 112l: DNE: cross MDT rename, fail MDT1,MDT2,MDT4 ========================================================== 18:23:44 (1713392624) [ 5017.636225] Lustre: DEBUG MARKER: SKIP: replay-single test_112l needs >= 4 MDTs [ 5018.177031] 
Lustre: DEBUG MARKER: == replay-single test 112m: DNE: cross MDT rename, fail MDT1,MDT3,MDT4 ========================================================== 18:23:45 (1713392625) [ 5018.683487] Lustre: DEBUG MARKER: SKIP: replay-single test_112m needs >= 4 MDTs [ 5019.282226] Lustre: DEBUG MARKER: == replay-single test 112n: DNE: cross MDT rename, fail MDT2,MDT3,MDT4 ========================================================== 18:23:46 (1713392626) [ 5019.786407] Lustre: DEBUG MARKER: SKIP: replay-single test_112n needs >= 4 MDTs [ 5020.379797] Lustre: DEBUG MARKER: == replay-single test 115: failover for create/unlink striped directory ========================================================== 18:23:47 (1713392627) [ 5020.856256] Lustre: DEBUG MARKER: SKIP: replay-single test_115 needs >= 2 MDTs [ 5021.426642] Lustre: DEBUG MARKER: == replay-single test 116a: large update log master MDT recovery ========================================================== 18:23:49 (1713392629) [ 5021.921668] Lustre: DEBUG MARKER: SKIP: replay-single test_116a needs >= 2 MDTs [ 5022.499282] Lustre: DEBUG MARKER: == replay-single test 116b: large update log slave MDT recovery ========================================================== 18:23:50 (1713392630) [ 5023.020027] Lustre: DEBUG MARKER: SKIP: replay-single test_116b needs >= 2 MDTs [ 5023.638033] Lustre: DEBUG MARKER: == replay-single test 117: DNE: cross MDT unlink, fail MDT1 and MDT2 ========================================================== 18:23:51 (1713392631) [ 5024.159577] Lustre: DEBUG MARKER: SKIP: replay-single test_117 needs >= 4 MDTs [ 5024.779643] Lustre: DEBUG MARKER: == replay-single test 118: invalidate osp update will not cause update log corruption ========================================================== 18:23:52 (1713392632) [ 5025.306692] Lustre: DEBUG MARKER: SKIP: replay-single test_118 needs >= 2 MDTs [ 5025.930710] Lustre: DEBUG MARKER: == replay-single test 119: timeout of normal replay does not cause DNE replay fails ========================================================== 18:23:53 (1713392633) [ 5026.470008] Lustre: DEBUG MARKER: SKIP: replay-single test_119 needs >= 2 MDTs [ 5027.094115] Lustre: DEBUG MARKER: == replay-single test 120: DNE fail abort should stop both normal and DNE replay ========================================================== 18:23:54 (1713392634) [ 5027.620912] Lustre: DEBUG MARKER: SKIP: replay-single test_120 needs >= 2 MDTs [ 5028.272606] Lustre: DEBUG MARKER: == replay-single test 121: lock replay timed out and race ========================================================== 18:23:55 (1713392635) [ 5033.072298] Lustre: *** cfs_fail_loc=721, val=0*** [ 5034.169214] Lustre: *** cfs_fail_loc=721, val=0*** [ 5034.170474] Lustre: Skipped 32 previous similar messages [ 5034.268023] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 5036.239565] Lustre: *** cfs_fail_loc=721, val=0*** [ 5036.242133] Lustre: Skipped 40 previous similar messages [ 5040.492113] Lustre: *** cfs_fail_loc=721, val=0*** [ 5040.493140] Lustre: Skipped 2 previous similar messages [ 5040.494039] Lustre: lustre-MDT0000: Client 6c1b5c46-4c62-45cb-8458-3df56e45cae3 (at 192.168.204.28@tcp) reconnected, waiting for 1 clients in recovery for 0:58 [ 5040.496484] Lustre: Skipped 3 previous similar messages [ 5040.500143] Lustre: *** cfs_fail_loc=721, val=0*** [ 5040.501131] Lustre: Skipped 14 previous similar messages [ 5040.502196] Lustre: 
11976:0:(tgt_handler.c:687:process_req_last_xid()) @@@ unexpected xid=6620388784240 != exp_last_xid=662038878453f, rc = -71 req@ffff8800944e6880 x1796617174270528/t0(0) o101->6c1b5c46-4c62-45cb-8458-3df56e45cae3@192.168.204.28@tcp:0/0 lens 328/0 e 0 to 0 dl 1713392647 ref 1 fl Interpret:/40/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' [ 5040.507325] Lustre: 11976:0:(service.c:2333:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (6/1s); client may timeout req@ffff8800944e6880 x1796617174270528/t0(0) o101->6c1b5c46-4c62-45cb-8458-3df56e45cae3@192.168.204.28@tcp:0/0 lens 328/224 e 0 to 0 dl 1713392647 ref 1 fl Complete:/40/0 rc -71/-71 job:'ldlm_lock_repla.0' [ 5041.520947] Lustre: lustre-OST0001: deleting orphan objects from 0x0:10513 to 0x0:10561 [ 5041.520948] Lustre: lustre-OST0000: deleting orphan objects from 0x0:13667 to 0x0:13697 [ 5043.342910] Lustre: DEBUG MARKER: == replay-single test 130a: DoM file create (setstripe) replay ========================================================== 18:24:11 (1713392651) [ 5044.233179] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5060.318166] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 5061.570702] Lustre: lustre-OST0000: deleting orphan objects from 0x0:13667 to 0x0:13729 [ 5061.570741] Lustre: lustre-OST0001: deleting orphan objects from 0x0:10513 to 0x0:10593 [ 5063.893810] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5064.392846] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5067.947973] Lustre: DEBUG MARKER: == replay-single test 130b: DoM file create (inherited) replay ========================================================== 18:24:35 (1713392675) [ 5069.238389] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5089.655421] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 5095.609007] Lustre: lustre-OST0000: deleting orphan objects from 0x0:13667 to 0x0:13761 [ 5097.420861] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5097.958225] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5101.543069] Lustre: DEBUG MARKER: == replay-single test 131a: DoM file write lock replay === 18:25:09 (1713392709) [ 5102.940620] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5119.514455] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 5129.612643] Lustre: lustre-OST0001: deleting orphan objects from 0x0:10513 to 0x0:10625 [ 5129.612645] Lustre: lustre-OST0000: deleting orphan objects from 0x0:13667 to 0x0:13793 [ 5130.743145] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5131.063272] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5134.554585] Lustre: DEBUG MARKER: SKIP: replay-single test_131b skipping excluded test 131b [ 5135.031895] Lustre: DEBUG MARKER: == replay-single test 132a: PFL new component instantiate replay ========================================================== 18:25:42 (1713392742) [ 5135.936271] 
Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5152.647954] Lustre: lustre-OST0000: deleting orphan objects from 0x0:13796 to 0x0:13825 [ 5152.647964] Lustre: lustre-OST0001: deleting orphan objects from 0x0:10627 to 0x0:10657 [ 5153.177040] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 5156.341246] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5156.711749] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5159.362776] Lustre: DEBUG MARKER: == replay-single test 133: check resend of ongoing requests for lwp during failover ========================================================== 18:26:07 (1713392767) [ 5159.709014] Lustre: DEBUG MARKER: SKIP: replay-single test_133 needs >= 2 MDTs [ 5160.124599] Lustre: DEBUG MARKER: == replay-single test 134: replay creation of a file created in a pool ========================================================== 18:26:07 (1713392767) [ 5166.271997] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5186.449554] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 5191.708251] Lustre: lustre-OST0000: deleting orphan objects from 0x0:13796 to 0x0:13857 [ 5191.708278] Lustre: lustre-OST0001: deleting orphan objects from 0x0:10660 to 0x0:10689 [ 5193.682885] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5194.313382] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5204.136115] Lustre: DEBUG MARKER: == replay-single test 135: Server failure in lock replay phase ========================================================== 18:26:51 (1713392811) [ 5205.522564] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 5206.673998] LustreError: 137-5: lustre-OST0000_UUID: not available for connect from 192.168.204.28@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5206.682988] LustreError: Skipped 38 previous similar messages [ 5206.742398] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 5218.680174] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing load_module ../libcfs/libcfs/libcfs [ 5222.119354] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 5222.713042] Lustre: *** cfs_fail_loc=32d, val=0*** [ 5229.716587] Lustre: lustre-OST0000: Client 6c1b5c46-4c62-45cb-8458-3df56e45cae3 (at 192.168.204.28@tcp) reconnected, waiting for 2 clients in recovery for 0:57 [ 5243.501268] Lustre: Failing over lustre-OST0000 [ 5243.503328] Lustre: Skipped 13 previous similar messages [ 5243.506847] LustreError: 29050:0:(ldlm_lib.c:2883:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery [ 5243.511425] Lustre: 28406:0:(ldlm_lib.c:2289:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 5243.516058] Lustre: 28406:0:(ldlm_lib.c:2289:target_recovery_overseer()) Skipped 2 previous similar messages [ 5243.520724] Lustre: lustre-OST0000: Recovery over after 0:22, of 2 clients 0 recovered and 2 were evicted. 
[ 5243.524793] Lustre: Skipped 12 previous similar messages [ 5243.528521] Lustre: 28406:0:(ofd_obd.c:554:ofd_postrecov()) lustre-OST0000: auto trigger paused LFSCK failed: rc = -6 [ 5243.535724] LustreError: 29050:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items [ 5243.539638] LustreError: 29050:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 75 previous similar messages [ 5243.553702] Lustre: server umount lustre-OST0000 complete [ 5243.556140] Lustre: Skipped 13 previous similar messages [ 5255.821791] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing load_module ../libcfs/libcfs/libcfs [ 5257.702523] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 5257.707007] Lustre: Skipped 13 previous similar messages [ 5257.711873] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 5257.718490] Lustre: Skipped 15 previous similar messages [ 5258.767910] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 5258.772675] Lustre: Skipped 12 previous similar messages [ 5258.931249] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.204.128@tcp (at 0@lo) [ 5258.938080] Lustre: Skipped 31 previous similar messages [ 5258.938828] Lustre: lustre-OST0000: deleting orphan objects from 0x0:13878 to 0x0:13889 [ 5259.057287] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 5262.694965] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 5268.785678] Lustre: lustre-OST0001: Not available for connect from 192.168.204.28@tcp (stopping) [ 5268.789493] Lustre: Skipped 1 previous similar message [ 5274.086698] LustreError: 167-0: lustre-OST0000-osc-MDT0000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. [ 5274.097095] Lustre: lustre-OST0000: deleting orphan objects from 0x0:13878 to 0x0:13921 [ 5274.183028] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 5278.233823] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 5278.651747] LustreError: 167-0: lustre-OST0001-osc-MDT0000: This client was evicted by lustre-OST0001; in progress operations using this service will fail. 
[ 5278.663111] Lustre: lustre-OST0001: deleting orphan objects from 0x0:10660 to 0x0:10721 [ 5282.699420] Lustre: DEBUG MARKER: == replay-single test complete, duration 5205 sec ======== 18:28:10 (1713392890) [ 5293.853955] Lustre: 2850:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713392894/real 1713392894] req@ffff880134400e40 x1796617158070464/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713392901 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u8:1.0' [ 5293.859369] Lustre: 2850:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 54 previous similar messages [ 5301.385108] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8 [ 5301.897084] Lustre: lustre-OST0001: deleting orphan objects from 0x0:10660 to 0x0:10753 [ 5301.901460] Lustre: lustre-OST0000: deleting orphan objects from 0x0:13923 to 0x0:13953 [ 5305.280304] Lustre: DEBUG MARKER: oleg428-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5305.833882] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5310.038979] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5315.673865] LustreError: 11723:0:(ldlm_lockd.c:2521:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713392923 with bad export cookie 10907142776468353186 [ 5315.680000] LustreError: 11723:0:(ldlm_lockd.c:2521:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 5321.125073] Lustre: DEBUG MARKER: oleg428-server.virtnet: executing unload_modules_local [ 5321.881289] Key type lgssc unregistered [ 5321.967347] LNet: 5119:0:(lib-ptl.c:958:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5321.971737] LNet: Removed LNI 192.168.204.128@tcp