[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.10.0-7.9-debug (green@centos7-base) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) ) #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 0.000000] Command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffcdfff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000bffce000-0x00000000bfffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000013edfffff] usable
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 3.0.0 present.
[ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-1.fc39 04/01/2014
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] e820: last_pfn = 0x13ee00 max_arch_pfn = 0x400000000
[ 0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC
[ 0.000000] e820: last_pfn = 0xbffce max_arch_pfn = 0x400000000
[ 0.000000] found SMP MP-table at [mem 0x000f53f0-0x000f53ff] mapped at [ffffffffff2003f0]
[ 0.000000] Using GB pages for direct mapping
[ 0.000000] RAMDISK: [mem 0xbc2e2000-0xbffbffff]
[ 0.000000] Early table checksum verification disabled
[ 0.000000] ACPI: RSDP 00000000000f5200 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 00000000bffe1d87 00034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 00000000bffe1c23 00074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 00000000bffe0040 01BE3 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACS 00000000bffe0000 00040
[ 0.000000] ACPI: APIC 00000000bffe1c97 00090 (v03 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: HPET 00000000bffe1d27 00038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: WAET 00000000bffe1d5f 00028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000013edfffff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x13e5e3000-0x13e609fff]
[ 0.000000] Reserving 128MB of memory at 768MB for crashkernel (System RAM: 4077MB)
[ 0.000000] kvm-clock: cpu 0, msr 1:3e592001, primary cpu clock
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: using sched offset of 388201536 cycles
[ 0.000000] Zone ranges:
[ 0.000000] DMA [mem 0x00001000-0x00ffffff]
[ 0.000000] DMA32 [mem 0x01000000-0xffffffff]
[ 0.000000] Normal [mem 0x100000000-0x13edfffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x00001000-0x0009efff]
[ 0.000000] node 0: [mem 0x00100000-0xbffcdfff]
[ 0.000000] node 0: [mem 0x100000000-0x13edfffff]
[ 0.000000] Initmem setup node 0 [mem 0x00001000-0x13edfffff]
[ 0.000000] ACPI: PM-Timer IO Port: 0x608
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xbffce000-0xbfffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[ 0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
[ 0.000000] percpu: Embedded 38 pages/cpu s115176 r8192 d32280 u524288
[ 0.000000] KVM setup async PF for cpu 0
[ 0.000000] kvm-stealtime: cpu 0, msr 13e2135c0
[ 0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1027487
[ 0.000000] Policy zone: Normal
[ 0.000000] Kernel command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] audit: disabled (until reboot)
[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form
[ 0.000000] Memory: 3820268k/5224448k available (8172k kernel code, 1049168k absent, 355012k reserved, 5773k data, 2532k init)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=4.
[ 0.000000] Offload RCU callbacks from all CPUs
[ 0.000000] Offload RCU callbacks from CPUs: 0-3.
[ 0.000000] NR_IRQS:327936 nr_irqs:456 0
[ 0.000000] Console: colour *CGA 80x25
[ 0.000000] console [ttyS1] enabled
[ 0.000000] allocated 25165824 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] kmemleak: Kernel memory leak detector disabled
[ 0.000000] tsc: Detected 2399.998 MHz processor
[ 0.496818] Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
[ 0.499277] pid_max: default: 32768 minimum: 301
[ 0.501757] Security Framework initialized
[ 0.502780] SELinux: Initializing.
[ 0.505165] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[ 0.509652] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[ 0.512693] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.514191] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.516882] Initializing cgroup subsys memory
[ 0.518241] Initializing cgroup subsys devices
[ 0.519734] Initializing cgroup subsys freezer
[ 0.521248] Initializing cgroup subsys net_cls
[ 0.522863] Initializing cgroup subsys blkio
[ 0.524374] Initializing cgroup subsys perf_event
[ 0.525857] Initializing cgroup subsys hugetlb
[ 0.527149] Initializing cgroup subsys pids
[ 0.528072] Initializing cgroup subsys net_prio
[ 0.529653] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[ 0.533233] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.534729] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.536949] tlb_flushall_shift: 6
[ 0.537989] FEATURE SPEC_CTRL Present
[ 0.539631] FEATURE IBPB_SUPPORT Present
[ 0.541012] Spectre V2 : Enabling Indirect Branch Prediction Barrier
[ 0.543146] Spectre V2 : Vulnerable
[ 0.544216] Speculative Store Bypass: Vulnerable
[ 0.546598] debug: unmapping init [mem 0xffffffff82019000-0xffffffff8201ffff]
[ 0.557940] ACPI: Core revision 20130517
[ 0.562597] ACPI: All ACPI Tables successfully acquired
[ 0.565193] ftrace: allocating 30294 entries in 119 pages
[ 0.628601] Enabling x2apic
[ 0.630026] Enabled x2apic
[ 0.631211] Switched APIC routing to physical x2apic.
[ 0.634683] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.636597] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (fam: 06, model: 3e, stepping: 04)
[ 0.640162] Performance Events: IvyBridge events, full-width counters, Intel PMU driver.
[ 0.644356] ... version: 2
[ 0.645597] ... bit width: 48
[ 0.647599] ... generic registers: 4
[ 0.648884] ... value mask: 0000ffffffffffff
[ 0.650604] ... max period: 00007fffffffffff
[ 0.652732] ... fixed-purpose events: 3
[ 0.654152] ... event mask: 000000070000000f
[ 0.655749] KVM setup paravirtual spinlock
[ 0.659987] smpboot: Booting Node 0, Processors #1
[ 0.662367] kvm-clock: cpu 1, msr 1:3e592041, secondary cpu clock
[ 0.667600] KVM setup async PF for cpu 1
[ 0.670405] kvm-stealtime: cpu 1, msr 13e2935c0 #2
[ 0.673245] kvm-clock: cpu 2, msr 1:3e592081, secondary cpu clock
[ 0.677830] KVM setup async PF for cpu 2
[ 0.678422] kvm-clock: cpu 3, msr 1:3e5920c1, secondary cpu clock #3 OK
[ 0.681522] kvm-stealtime: cpu 2, msr 13e3135c0
[ 0.683323] Brought up 4 CPUs
[ 0.684237] smpboot: Max logical packages: 1
[ 0.685327] KVM setup async PF for cpu 3
[ 0.685334] kvm-stealtime: cpu 3, msr 13e3935c0
[ 0.688027] smpboot: Total of 4 processors activated (19199.98 BogoMIPS)
[ 0.692585] devtmpfs: initialized
[ 0.694287] x86/mm: Memory block size: 128MB
[ 0.698316] EVM: security.selinux
[ 0.699363] EVM: security.ima
[ 0.700308] EVM: security.capability
[ 0.703466] atomic64 test passed for x86-64 platform with CX8 and with SSE
[ 0.705894] NET: Registered protocol family 16
[ 0.707505] cpuidle: using governor haltpoll
[ 0.709568] ACPI: bus type PCI registered
[ 0.710729] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.712819] PCI: Using configuration type 1 for base access
[ 0.714508] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on
[ 0.726040] ACPI: Added _OSI(Module Device)
[ 0.727338] ACPI: Added _OSI(Processor Device)
[ 0.728646] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.730032] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.731599] ACPI: Added _OSI(Linux-Dell-Video)
[ 0.736545] ACPI: Interpreter enabled
[ 0.737816] ACPI: (supports S0 S3 S4 S5)
[ 0.738972] ACPI: Using IOAPIC for interrupt routing
[ 0.740517] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.743363] ACPI: Enabled 2 GPEs in block 00 to 0F
[ 0.750952] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.752865] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[ 0.754783] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[ 0.756715] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ 0.760594] acpiphp: Slot [2] registered
[ 0.761915] acpiphp: Slot [5] registered
[ 0.763110] acpiphp: Slot [6] registered
[ 0.764533] acpiphp: Slot [7] registered
[ 0.765746] acpiphp: Slot [8] registered
[ 0.767004] acpiphp: Slot [9] registered
[ 0.768196] acpiphp: Slot [10] registered
[ 0.769442] acpiphp: Slot [3] registered
[ 0.770706] acpiphp: Slot [4] registered
[ 0.771913] acpiphp: Slot [11] registered
[ 0.773161] acpiphp: Slot [12] registered
[ 0.774425] acpiphp: Slot [13] registered
[ 0.775743] acpiphp: Slot [14] registered
[ 0.776964] acpiphp: Slot [15] registered
[ 0.778205] acpiphp: Slot [16] registered
[ 0.779703] acpiphp: Slot [17] registered
[ 0.781000] acpiphp: Slot [18] registered
[ 0.782267] acpiphp: Slot [19] registered
[ 0.783506] acpiphp: Slot [20] registered
[ 0.784841] acpiphp: Slot [21] registered
[ 0.786136] acpiphp: Slot [22] registered
[ 0.787453] acpiphp: Slot [23] registered
[ 0.789434] acpiphp: Slot [24] registered
[ 0.790636] acpiphp: Slot [25] registered
[ 0.791952] acpiphp: Slot [26] registered
[ 0.793117] acpiphp: Slot [27] registered
[ 0.794316] acpiphp: Slot [28] registered
[ 0.795502] acpiphp: Slot [29] registered
[ 0.796623] acpiphp: Slot [30] registered
[ 0.797882] acpiphp: Slot [31] registered
[ 0.800788] PCI host bridge to bus 0000:00
[ 0.802066] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[ 0.803988] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[ 0.806083] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[ 0.808134] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
[ 0.810204] pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38007fffffff window]
[ 0.812389] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 0.823795] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
[ 0.827522] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
[ 0.829573] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
[ 0.831507] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
[ 0.834610] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
[ 0.836967] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
[ 1.109251] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 1.111818] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 1.113628] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 1.115441] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 1.117995] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[ 1.119959] vgaarb: loaded
[ 1.120550] SCSI subsystem initialized
[ 1.121464] ACPI: bus type USB registered
[ 1.122271] usbcore: registered new interface driver usbfs
[ 1.124069] usbcore: registered new interface driver hub
[ 1.125751] usbcore: registered new device driver usb
[ 1.129333] PCI: Using ACPI for IRQ routing
[ 1.131263] NetLabel: Initializing
[ 1.132076] NetLabel: domain hash size = 128
[ 1.133230] NetLabel: protocols = UNLABELED CIPSOv4
[ 1.134851] NetLabel: unlabeled traffic allowed by default
[ 1.137052] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[ 1.138860] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[ 1.143459] amd_nb: Cannot enumerate AMD northbridges
[ 1.147135] Switched to clocksource kvm-clock
[ 1.164817] pnp: PnP ACPI init
[ 1.165966] ACPI: bus type PNP registered
[ 1.168026] pnp: PnP ACPI: found 6 devices
[ 1.169186] ACPI: bus type PNP unregistered
[ 1.185800] NET: Registered protocol family 2
[ 1.187884] TCP established hash table entries: 32768 (order: 6, 262144 bytes)
[ 1.190613] TCP bind hash table entries: 32768 (order: 8, 1048576 bytes)
[ 1.193461] TCP: Hash tables configured (established 32768 bind 32768)
[ 1.195816] TCP: reno registered
[ 1.196868] UDP hash table entries: 2048 (order: 5, 196608 bytes)
[ 1.198341] UDP-Lite hash table entries: 2048 (order: 5, 196608 bytes)
[ 1.200137] NET: Registered protocol family 1
[ 1.202405] RPC: Registered named UNIX socket transport module.
[ 1.206368] RPC: Registered udp transport module.
[ 1.208896] RPC: Registered tcp transport module.
[ 1.210264] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 1.212310] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 1.214195] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 1.215895] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 1.218030] Unpacking initramfs...
[ 2.601817] debug: unmapping init [mem 0xffff8800bc2e2000-0xffff8800bffbffff]
[ 2.605356] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 2.607466] software IO TLB [mem 0xb82e2000-0xbc2e2000] (64MB) mapped at [ffff8800b82e2000-ffff8800bc2e1fff]
[ 2.611072] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer
[ 2.613075] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules
[ 2.614171] RAPL PMU: hw unit of domain package 2^-0 Joules
[ 2.615869] RAPL PMU: hw unit of domain dram 2^-0 Joules
[ 2.621272] cryptomgr_test (51) used greatest stack depth: 14480 bytes left
[ 2.621915] futex hash table entries: 1024 (order: 4, 65536 bytes)
[ 2.621960] Initialise system trusted keyring
[ 2.654773] HugeTLB registered 1 GB page size, pre-allocated 0 pages
[ 2.656515] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 2.662700] zpool: loaded
[ 2.663519] zbud: loaded
[ 2.664645] VFS: Disk quotas dquot_6.6.0
[ 2.665755] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 2.668934] NFS: Registering the id_resolver key type
[ 2.671148] Key type id_resolver registered
[ 2.672081] Key type id_legacy registered
[ 2.673072] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[ 2.676204] Key type big_key registered
[ 2.684528] cryptomgr_test (57) used greatest stack depth: 14048 bytes left
[ 2.687949] cryptomgr_test (60) used greatest stack depth: 13968 bytes left
[ 2.688583] NET: Registered protocol family 38
[ 2.688595] Key type asymmetric registered
[ 2.688598] Asymmetric key parser 'x509' registered
[ 2.688735] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[ 2.689364] io scheduler noop registered
[ 2.689369] io scheduler deadline registered (default)
[ 2.689516] io scheduler cfq registered
[ 2.689522] io scheduler mq-deadline registered
[ 2.689527] io scheduler kyber registered
[ 2.691838] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 2.691850] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 2.706647] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 2.708704] ACPI: Power Button [PWRF]
[ 2.710134] GHES: HEST is not enabled!
[ 2.765558] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ 2.823261] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 11
[ 2.931390] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ 2.985267] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
[ 3.116691] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 3.145110] 00:03: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[ 3.174416] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 3.179402] Non-volatile memory driver v1.3
[ 3.181171] Linux agpgart interface v0.103
[ 3.183225] crash memory driver: version 1.1
[ 3.185426] nbd: registered device at major 43
[ 3.198047] virtio_blk virtio1: [vda] 67352 512-byte logical blocks (34.4 MB/32.8 MiB)
[ 3.214157] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB)
[ 3.229044] virtio_blk virtio3: [vdc] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[ 3.242898] virtio_blk virtio4: [vdd] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[ 3.264422] virtio_blk virtio5: [vde] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[ 3.281257] virtio_blk virtio6: [vdf] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[ 3.291368] rdac: device handler registered
[ 3.293145] hp_sw: device handler registered
[ 3.294427] emc: device handler registered
[ 3.295813] libphy: Fixed MDIO Bus: probed
[ 3.300429] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 3.302663] ehci-pci: EHCI PCI platform driver
[ 3.304072] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 3.305800] ohci-pci: OHCI PCI platform driver
[ 3.306908] uhci_hcd: USB Universal Host Controller Interface driver
[ 3.308916] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 3.311672] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 3.313162] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 3.315092] mousedev: PS/2 mouse device common for all mice
[ 3.318794] rtc_cmos 00:05: RTC can wake from S4
[ 3.320699] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[ 3.323199] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0
[ 3.323544] rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
[ 3.327609] hidraw: raw HID events driver (C) Jiri Kosina
[ 3.330407] usbcore: registered new interface driver usbhid
[ 3.333977] usbhid: USB HID core driver
[ 3.335833] drop_monitor: Initializing network drop monitor service
[ 3.339606] Netfilter messages via NETLINK v0.30.
[ 3.341792] TCP: cubic registered
[ 3.342652] Initializing XFRM netlink socket
[ 3.345941] NET: Registered protocol family 10
[ 3.349048] NET: Registered protocol family 17
[ 3.351064] Key type dns_resolver registered
[ 3.353386] mce: Using 10 MCE banks
[ 3.355153] Loading compiled-in X.509 certificates
[ 3.357515] Loaded X.509 cert 'Magrathea: Glacier signing key: e34d0e1b7fcf5b414cce75d36d8482945c781ed6'
[ 3.360379] registered taskstats version 1
[ 3.372703] modprobe (71) used greatest stack depth: 13456 bytes left
[ 3.379182] Key type trusted registered
[ 3.385277] Key type encrypted registered
[ 3.386257] IMA: No TPM chip found, activating TPM-bypass! (rc=-19)
[ 3.390299] BERT: Boot Error Record Table support is disabled. Enable it by using bert_enable as kernel parameter.
[ 3.393958] rtc_cmos 00:05: setting system clock to 2024-04-16 19:39:15 UTC (1713296355)
[ 3.397694] debug: unmapping init [mem 0xffffffff81da0000-0xffffffff82018fff]
[ 3.400388] Write protecting the kernel read-only data: 12288k
[ 3.402680] debug: unmapping init [mem 0xffff8800017fe000-0xffff8800017fffff]
[ 3.404976] debug: unmapping init [mem 0xffff880001b9b000-0xffff880001bfffff]
[ 3.415459] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.419421] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.421314] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.425312] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[ 3.431204] systemd[1]: Detected virtualization kvm.
[ 3.433158] systemd[1]: Detected architecture x86-64.
[ 3.435141] systemd[1]: Running in initial RAM disk.

Welcome to CentOS Linux 7 (Core) dracut-033-572.el7 (Initramfs)!

[ 3.440877] systemd[1]: No hostname configured.
[ 3.442442] systemd[1]: Set hostname to .
[ 3.444303] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.446519] systemd[1]: Initializing machine ID from random generator.
[ 3.491607] dracut-rootfs-g (86) used greatest stack depth: 13264 bytes left
[ 3.494531] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.496827] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.499194] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.501250] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.504195] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.506201] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.515331] systemd[1]: Reached target Swap.
[ OK ] Reached target Swap.
[ 3.517849] systemd[1]: Reached target Local File Systems.
[ OK ] Reached target Local File Systems.
[ 3.521350] systemd[1]: Reached target Timers.
[ OK ] Reached target Timers.
[ 3.524911] systemd[1]: Created slice Root Slice.
[ OK ] Created slice Root Slice.
[ 3.528471] systemd[1]: Created slice System Slice.
[ OK ] Created slice System Slice.
[ 3.530846] systemd[1]: Listening on udev Kernel Socket.
[ OK ] Listening on udev Kernel Socket.
[ 3.534460] systemd[1]: Listening on udev Control Socket.
[ OK ] Listening on udev Control Socket.
[ 3.537559] systemd[1]: Reached target Slices.
[ OK ] Reached target Slices.
[ 3.549561] systemd[1]: Listening on Journal Socket.
[ OK ] Listening on Journal Socket.
[ 3.557192] systemd[1]: Starting Setup Virtual Console...
Starting Setup Virtual Console...
[ 3.562011] systemd[1]: Starting Create list of required static device nodes for the current kernel...
Starting Create list of required st... nodes for the current kernel...
[ 3.568508] systemd[1]: Starting Load Kernel Modules...
Starting Load Kernel Modules...
[ 3.571514] systemd[1]: Reached target Sockets.
[ OK ] Reached target Sockets.
[ 3.575612] systemd[1]: Starting Journal Service...
Starting Journal Service...
[ 3.580453] systemd[1]: Starting dracut cmdline hook...
Starting dracut cmdline hook...
[ 3.585148] systemd[1]: Started Setup Virtual Console.
[ OK ] Started Setup Virtual Console.
[ 3.592718] systemd[1]: Started Create list of required static device nodes for the current kernel.
[ OK ] Started Create list of required sta...ce nodes for the current kernel.
[ 3.600430] systemd[1]: Started Load Kernel Modules.
[ OK ] Started Load Kernel Modules.
[ 3.607091] systemd[1]: Starting Apply Kernel Variables...
Starting Apply Kernel Variables...
[ 3.612976] systemd[1]: Starting Create Static Device Nodes in /dev...
Starting Create Static Device Nodes in /dev...
[ 3.619115] tsc: Refined TSC clocksource calibration: 2399.954 MHz
[ 3.619549] systemd[1]: Started Journal Service.
[ OK ] Started Journal Service.
[ OK ] Started Apply Kernel Variables.
[ OK ] Started Create Static Device Nodes in /dev.
[ 3.752216] random: fast init done
[ OK ] Started dracut cmdline hook.
Starting dracut pre-udev hook...
[ OK ] Started dracut pre-udev hook.
Starting udev Kernel Device Manager...
[ OK ] Started udev Kernel Device Manager.
Starting dracut pre-trigger hook...
[ 4.066179] dracut-pre-trig (248) used greatest stack depth: 12992 bytes left
[ OK ] Started dracut pre-trigger hook.
Starting udev Coldplug all Devices...
Mounting Configuration File System...
[ OK ] Mounted Configuration File System.
[ OK ] Started udev Coldplug all Devices.
Starting dracut initqueue hook...
Starting Show Plymouth Boot Screen...
[ OK ] Reached target System Initialization.
[ 4.202351] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
[ 4.222489] scsi host0: ata_piix
[ 4.224924] scsi host1: ata_piix
[ 4.227704] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc320 irq 14
[ 4.230795] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc328 irq 15
[ OK ] Started Show Plymouth Boot Screen.
[ OK ] Started Forward Password Requests to Plymouth Directory Watch.
[ OK ] Reached target Paths.
[ OK ] Reached target Basic System.
[ 4.344595] ip (345) used greatest stack depth: 12336 bytes left
[ 6.426929] dracut-initqueue[275]: RTNETLINK answers: File exists
[ 6.666823] dracut-initqueue[275]: bs=4096, sz=32212254720 bytes
[ OK ] Started dracut initqueue hook.
Mounting /sysroot...
[ OK ] Reached target Initrd Root File System.
Starting Reload Configuration from the Real Root...
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
[ 7.512922] EXT4-fs (nbd0): mounted filesystem with ordered data mode. Opts: (null)
[ OK ] Mounted /sysroot.
[ OK ] Started Reload Configuration from the Real Root.
[ OK ] Reached target Initrd File Systems.
[ OK ] Reached target Initrd Default Target.
Starting dracut pre-pivot and cleanup hook...
[ OK ] Started dracut pre-pivot and cleanup hook.
Starting Cleaning Up and Shutting Down Daemons...
[ OK ] Stopped target Timers.
Starting Plymouth switch root service...
[ OK ] Stopped dracut pre-pivot and cleanup hook.
[ OK ] Stopped target Initrd Default Target.
[ OK ] Stopped target Basic System.
[ OK ] Stopped target System Initialization.
[ OK ] Stopped target Local File Systems.
[ OK ] Stopped Apply Kernel Variables.
[ OK ] Stopped Load Kernel Modules.
[ OK ] Stopped target Swap.
[ OK ] Stopped target Slices.
[ OK ] Stopped target Remote File Systems.
[ OK ] Stopped target Remote File Systems (Pre).
[ OK ] Stopped dracut initqueue hook.
[ OK ] Stopped udev Coldplug all Devices.
[ OK ] Stopped dracut pre-trigger hook.
Stopping udev Kernel Device Manager...
[ OK ] Stopped target Sockets.
[ OK ] Stopped target Paths.
[ OK ] Stopped udev Kernel Device Manager.
[ OK ] Stopped dracut pre-udev hook.
[ OK ] Stopped dracut cmdline hook.
[ OK ] Stopped Create Static Device Nodes in /dev.
[ OK ] Stopped Create list of required sta...ce nodes for the current kernel.
[ OK ] Closed udev Kernel Socket.
[ OK ] Closed udev Control Socket.
Starting Cleanup udevd DB...
[ OK ] Started Cleaning Up and Shutting Down Daemons.
[ OK ] Started Cleanup udevd DB.
[ OK ] Started Plymouth switch root service.
[ OK ] Reached target Switch Root.
Starting Switch Root...
[ 8.112212] systemd-journald[106]: Received SIGTERM from PID 1 (systemd).
[ 8.403612] SELinux: Disabled at runtime.
[ 8.508077] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 8.513709] systemd[1]: Inserted module 'ip_tables'

Welcome to CentOS Linux 7 (Core)!

[ OK ] Stopped Switch Root.
[ OK ] Stopped Journal Service.
Starting Journal Service...
[ OK ] Set up automount Arbitrary Executab...ats File System Automount Point.
[ OK ] Created slice system-getty.slice.
Starting Create list of required st... nodes for the current kernel...
[ OK ] Stopped target Switch Root.
[ OK ] Stopped target Initrd File Systems.
[ OK ] Listening on udev Kernel Socket.
[ OK ] Listening on Delayed Shutdown Socket.
Mounting Debug File System...
Starting Read and set NIS domainname from /etc/sysconfig/network...
[ OK ] Stopped target Initrd Root File System.
[ OK ] Created slice User and Session Slice.
[ OK ] Listening on /dev/initctl Compatibility Named Pipe.
[ OK ] Created slice system-selinux\x2dpol...grate\x2dlocal\x2dchanges.slice.
[ OK ] Reached target Local Encrypted Volumes.
[ OK ] Listening on udev Control Socket.
Starting udev Coldplug all Devices...
[ OK ] Created slice system-serial\x2dgetty.slice.
Mounting Huge Pages File System...
Starting Set Up Additional Binary Formats...
Starting Load Kernel Modules...
Starting Remount Root and Kernel File Systems...
[ OK ] Reached target rpc_pipefs.target.
[ OK ] Reached target Slices.
Mounting POSIX Message Queue File System...
[ OK ] Started Forward Password Requests to Wall Directory Watch.
[ OK ] Mounted Huge Pages File System.
[ OK ] Mounted Debug File System.
[ OK ] Started Create list of required sta...ce nodes for the current kernel.
[ OK ] Started Load Kernel Modules.
Mounting Arbitrary Executable File Formats File System...
Starting Apply Kernel Variables...
Starting Create Static Device Nodes in /dev...
[ OK ] Mounted POSIX Message Queue File System.
[ OK ] Started Read and set NIS domainname from /etc/sysconfig/network.
[ OK ] Mounted Arbitrary Executable File Formats File System.
[ OK ] Started Journal Service.
[ OK ] Started Apply Kernel Variables.
[ OK ] Started udev Coldplug all Devices.
[ OK ] Started Set Up Additional Binary Formats.
[ OK ] Started Create Static Device Nodes in /dev.
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
Starting Flush Journal to Persistent Storage...
Starting Configure read-only root support...
Starting udev Kernel Device Manager...
[ OK ] Reached target Local File Systems (Pre).
Mounting /mnt...
[ OK ] Mounted /mnt.
[ 9.263332] systemd-journald[567]: Received request to flush runtime journal from PID 1
[ OK ] Started Flush Journal to Persistent Storage.
[ OK ] Started udev Kernel Device Manager.
[ 9.457064] input: PC Speaker as /devices/platform/pcspkr/input/input3
[ 9.479374] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
[ OK ] Found device /dev/ttyS0.
[ OK ] Found device /dev/ttyS1.
[ 9.548811] cryptd: max_cpu_qlen set to 1000
[ OK ] Found device /dev/disk/by-label/SWAP.
Activating swap /dev/disk/by-label/SWAP...
[ OK ] Found device /dev/vda.
[ 9.611805] AVX version of gcm_enc/dec engaged.
[ 9.613816] AES CTR mode by8 optimization enabled
Mounting /home/green/git/lustre-release...
[ 9.631743] Adding 1048572k swap on /dev/vdb. Priority:-2 extents:1 across:1048572k FS
[ OK ] Activated swap /dev/disk/by-label/SWAP.
[ OK ] Reached target Swap.
[ 9.682061] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[ 9.694746] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni)
[ 9.702395] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[ OK ] Mounted /home/green/git/lustre-release.
[ 9.823788] EDAC MC: Ver: 3.0.0
[ 9.846891] EDAC sbridge: Ver: 1.1.2
[ 12.555553] mount.nfs (772) used greatest stack depth: 10704 bytes left
[ OK ] Started Configure read-only root support.
Starting Load/Save Random Seed...
[ OK ] Reached target Local File Systems.
Starting Mark the need to relabel after reboot...
Starting Rebuild Journal Catalog...
Starting Preprocess NFS configuration...
Starting Tell Plymouth To Write Out Runtime Data...
Starting Create Volatile Files and Directories...
[ OK ] Started Load/Save Random Seed.
[ OK ] Started Mark the need to relabel after reboot.
[ OK ] Started Preprocess NFS configuration.
[FAILED] Failed to start Create Volatile Files and Directories.
See 'systemctl status systemd-tmpfiles-setup.service' for details.
[FAILED] Failed to start Rebuild Journal Catalog.
See 'systemctl status systemd-journal-catalog-update.service' for details.
[ OK ] Started Tell Plymouth To Write Out Runtime Data.
Starting Update is Completed...
Starting Update UTMP about System Boot/Shutdown...
[ OK ] Started Update is Completed.
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Reached target System Initialization.
[ OK ] Listening on RPCbind Server Activation Socket.
[ OK ] Started Flexible branding.
[ OK ] Reached target Paths.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Reached target Sockets.
[ OK ] Reached target Basic System.
Starting Dump dmesg to /var/log/dmesg...
Starting GSSAPI Proxy Daemon...
[ OK ] Started D-Bus System Message Bus.
Starting Network Manager...
Starting Login Service...
[ OK ] Started Dump dmesg to /var/log/dmesg.
[ OK ] Started GSSAPI Proxy Daemon.
[ OK ] Reached target NFS client services.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
Starting Permit User Sessions...
[ OK ] Started Permit User Sessions.
[ OK ] Started Login Service.
[ OK ] Started Network Manager.
[ OK ] Reached target Network.
Starting /etc/rc.d/rc.local Compatibility...
Starting OpenSSH server daemon...
Starting Network Manager Wait Online...
Starting Hostname Service...
[ OK ] Started OpenSSH server daemon.
[ OK ] Started /etc/rc.d/rc.local Compatibility.
[ OK ] Started Hostname Service.
Starting Network Manager Script Dispatcher Service...
Starting Terminate Plymouth Boot Screen...
Starting Wait for Plymouth Boot Screen to Quit...
[ OK ] Started Network Manager Script Dispatcher Service.

CentOS Linux 7 (Core)
Kernel 3.10.0-7.9-debug on an x86_64

oleg130-server login: [ 25.003274] libcfs: loading out-of-tree module taints kernel.
[ 25.004815] libcfs: module verification failed: signature and/or required key missing - tainting kernel
[ 25.030750] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_hostid
[ 29.809862] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing load_modules_local
[ 30.007185] alg: No test for adler32 (adler32-zlib)
[ 30.759078] libcfs: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1
[ 30.894709] Lustre: Lustre: Build Version: 2.15.62_23_gf1c145f
[ 31.070441] LNet: Added LNI 192.168.201.130@tcp [8/256/0/180]
[ 31.072410] LNet: Accept secure, port 988
[ 32.619178] Key type lgssc registered
[ 32.925903] Lustre: Echo OBD driver; http://www.lustre.org/
[ 33.348693] icp: module license 'CDDL' taints kernel.
[ 33.349757] Disabling lock debugging due to kernel taint
[ 36.054366] ZFS: Loaded module v0.8.6-1, ZFS pool version 5000, ZFS filesystem version 5
[ 38.288676] vdc: vdc1 vdc9
[ 40.780393] vde: vde1 vde9
[ 43.697366] vdf: vdf1 vdf9
[ 47.881635] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing load_modules_local
[ 49.712989] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt'
[ 50.821129] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 50.882542] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space.
[ 50.914255] Lustre: lustre-MDT0000: new disk, initializing
[ 50.979903] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 50.996711] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 51.061523] mount.lustre (6725) used greatest stack depth: 10000 bytes left
[ 51.745271] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 52.108630] random: crng init done
[ 54.855962] Lustre: lustre-OST0000: new disk, initializing
[ 54.857408] Lustre: srv-lustre-OST0000: No data found on store. Initialize space.
[ 54.859511] Lustre: Skipped 1 previous similar message
[ 54.874391] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 56.121396] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 58.856163] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:0:ost
[ 58.859429] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:0:ost]
[ 58.887705] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x240000400
[ 59.132137] Lustre: lustre-OST0001: new disk, initializing
[ 59.133393] Lustre: srv-lustre-OST0001: No data found on store. Initialize space.
[ 59.148819] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180
[ 60.364500] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 65.114409] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 67.897732] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:1:ost
[ 67.899747] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:1:ost]
[ 67.922940] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x280000400
[ 71.244509] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[ 76.879352] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing check_logdir /tmp/testlogs/
[ 77.696738] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing yml_node
[ 78.642795] Lustre: DEBUG MARKER: Client: 2.15.62.23
[ 79.294566] Lustre: DEBUG MARKER: MDS: 2.15.62.23
[ 80.664192] Lustre: DEBUG MARKER: OSS: 2.15.62.23
[ 81.764384] Lustre: DEBUG MARKER: -----============= acceptance-small: replay-single ============----- Tue Apr 16 15:40:34 EDT 2024
[ 84.559102] Lustre: DEBUG MARKER: excepting tests: 110f 131b 59 36
[ 85.239740] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing check_config_client /mnt/lustre
[ 89.345837] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 90.264849] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[ 90.917058] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 92.526308] Lustre: DEBUG MARKER: == replay-single test 0a: empty replay =================== 15:40:45 (1713296445)
[ 93.395173] LustreError: 15381:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only ***
[ 93.654691] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 94.195093] Lustre: Failing over lustre-MDT0000
[ 94.327935] Lustre: server umount lustre-MDT0000 complete
[ 105.966613] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 106.071980] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 106.097226] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 106.867066] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 108.271275] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects
[ 108.287182] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
[ 109.191392] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 109.579854] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 111.100710] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to (at 0@lo)
[ 113.825654] Lustre: DEBUG MARKER: == replay-single test 0b: ensure object created after recover exists. (3284) ========================================================== 15:41:06 (1713296466)
[ 114.097103] Lustre: 3026:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296450/real 1713296450] req@ffff88008de49f80 x1796521468116736/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713296466 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 114.357540] Lustre: Failing over lustre-OST0000
[ 114.367576] Lustre: server umount lustre-OST0000 complete
[ 116.091417] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107
[ 116.094149] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 116.098856] Lustre: Skipped 1 previous similar message
[ 116.100581] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 118.276487] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.201.30@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 119.107131] Lustre: 3025:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296455/real 1713296455] req@ffff88008de4aa00 x1796521468116928/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713296471 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 119.115689] Lustre: 3025:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message
[ 121.115547] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 123.277919] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.201.30@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 125.939996] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 127.099553] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 127.234293] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 127.666907] Lustre: lustre-OST0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted.
[ 127.666978] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.201.130@tcp (at 0@lo)
[ 127.666980] Lustre: Skipped 1 previous similar message
[ 129.344393] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 129.709090] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 133.864532] Lustre: DEBUG MARKER: == replay-single test 0c: check replay-barrier =========== 15:41:26 (1713296486)
[ 134.617038] LustreError: 19908:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only ***
[ 134.843045] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 135.366333] Lustre: Failing over lustre-MDT0000
[ 135.479534] Lustre: server umount lustre-MDT0000 complete
[ 147.091092] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 147.198568] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 147.203996] Lustre: Skipped 1 previous similar message
[ 147.254500] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 148.016423] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 148.566246] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects
[ 148.568475] Lustre: lustre-MDT0000: Denying connection for new client 66179849-f2f3-4c0a-a4df-e0d39f44069a (at 192.168.201.30@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59
[ 152.211164] Lustre: 3025:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296488/real 1713296488] req@ffff88009344aa00 x1796521468126336/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713296504 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 152.220437] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to (at 0@lo)
[ 153.583146] Lustre: lustre-MDT0000: Denying connection for new client 66179849-f2f3-4c0a-a4df-e0d39f44069a (at 192.168.201.30@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:54
[ 157.211726] Lustre: 3027:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296493/real 1713296493] req@ffff88009937df80 x1796521468126464/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713296509 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 157.223030] Lustre: 3027:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
[ 158.590111] Lustre: lustre-MDT0000: Denying connection for new client 66179849-f2f3-4c0a-a4df-e0d39f44069a (at 192.168.201.30@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:49
[ 162.227062] Lustre: 3026:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296498/real 1713296498] req@ffff88009937ea00 x1796521468126656/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713296514 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 162.236131] Lustre: 3026:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message
[ 163.597929] Lustre: lustre-MDT0000: Denying connection for new client 66179849-f2f3-4c0a-a4df-e0d39f44069a (at 192.168.201.30@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:44
[ 168.605805] Lustre: lustre-MDT0000: Denying connection for new client 66179849-f2f3-4c0a-a4df-e0d39f44069a (at 192.168.201.30@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:39
[ 178.621902] Lustre: lustre-MDT0000: Denying connection for new client 66179849-f2f3-4c0a-a4df-e0d39f44069a (at 192.168.201.30@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:29
[ 178.626939] Lustre: Skipped 1 previous similar message
[ 198.653870] Lustre: lustre-MDT0000: Denying connection for new client 66179849-f2f3-4c0a-a4df-e0d39f44069a (at 192.168.201.30@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:09
[ 198.659022] Lustre: Skipped 3 previous similar messages
[ 207.668627] Lustre: lustre-MDT0000: recovery is timed out, evict stale exports
[ 207.671306] Lustre: 20970:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client 06ad4ec0-7427-4618-b154-a2a24b020af1@
[ 207.675774] Lustre: lustre-MDT0000: disconnecting 1 stale clients
[ 207.690655] Lustre: lustre-MDT0000: Recovery over after 1:00, of 1 clients 0 recovered and 1 was evicted.
[ 207.709877] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:44 to 0x240000400:65)
[ 207.709939] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:44 to 0x280000400:65)
[ 213.017127] Lustre: DEBUG MARKER: == replay-single test 0d: expired recovery with no clients ========================================================== 15:42:45 (1713296565)
[ 213.830467] LustreError: 22498:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only ***
[ 214.076969] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 214.626722] Lustre: Failing over lustre-MDT0000
[ 214.746445] Lustre: server umount lustre-MDT0000 complete
[ 226.518794] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 226.636924] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 226.644277] Lustre: Skipped 1 previous similar message
[ 226.676525] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 227.528608] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 228.165426] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects
[ 231.660399] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo)
[ 231.662902] Lustre: Skipped 1 previous similar message
[ 233.181979] Lustre: lustre-MDT0000: Denying connection for new client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:54
[ 233.187626] Lustre: Skipped 2 previous similar messages
[ 233.641115] Lustre: 3026:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296569/real 1713296569] req@ffff880131396680 x1796521468137728/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713296585 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 287.668157] Lustre: lustre-MDT0000: recovery is timed out, evict stale exports
[ 287.670788] Lustre: 23446:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client 66179849-f2f3-4c0a-a4df-e0d39f44069a@
[ 287.674385] Lustre: lustre-MDT0000: disconnecting 1 stale clients
[ 287.684348] Lustre: lustre-MDT0000: Recovery over after 1:00, of 1 clients 0 recovered and 1 was evicted.
[ 287.700836] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:44 to 0x240000400:97)
[ 287.700845] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:44 to 0x280000400:97)
[ 292.651673] Lustre: DEBUG MARKER: == replay-single test 1: simple create =================== 15:44:05 (1713296645)
[ 293.425076] LustreError: 24944:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only ***
[ 293.677843] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 294.222855] Lustre: Failing over lustre-MDT0000
[ 294.350731] Lustre: server umount lustre-MDT0000 complete
[ 306.051554] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 306.174054] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 306.206599] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 307.035836] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 308.335840] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects
[ 308.356340] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
[ 308.373979] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:44 to 0x240000400:129)
[ 308.373986] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:44 to 0x280000400:129)
[ 309.416824] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 309.836293] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 311.196832] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo)
[ 311.199427] Lustre: Skipped 1 previous similar message
[ 312.195100] Lustre: 3025:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296648/real 1713296648] req@ffff88009ba49880 x1796521468147584/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713296664 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 312.204500] Lustre: 3025:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
[ 314.306948] Lustre: DEBUG MARKER: == replay-single test 2a: touch ========================== 15:44:26 (1713296666)
[ 315.106279] LustreError: 27407:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only ***
[ 315.343283] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 315.881956] Lustre: Failing over lustre-MDT0000
[ 316.014597] Lustre: server umount lustre-MDT0000 complete
[ 327.752610] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 327.871783] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 327.879582] Lustre: Skipped 2 previous similar messages
[ 328.350973] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects
[ 328.394853] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
[ 328.413574] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:44 to 0x280000400:161)
[ 328.414174] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:131 to 0x240000400:161)
[ 328.796656] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 331.154468] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 331.543198] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 332.908501] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to (at 0@lo)
[ 332.910243] Lustre: Skipped 1 previous similar message
[ 335.733777] Lustre: DEBUG MARKER: == replay-single test 2b: touch ========================== 15:44:48 (1713296688)
[ 336.495440] LustreError: 29930:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only ***
[ 336.732964] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 337.250710] Lustre: Failing over lustre-MDT0000
[ 337.362797] Lustre: server umount lustre-MDT0000 complete
[ 349.016801] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 349.137313] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 349.144271] Lustre: Skipped 1 previous similar message
[ 349.988679] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 351.752048] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects
[ 351.784026] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
[ 351.801773] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:131 to 0x240000400:193)
[ 351.802368] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:163 to 0x280000400:193)
[ 352.371511] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 352.791715] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 354.142104] Lustre: 3027:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296690/real 1713296690] req@ffff8800992f2a00 x1796521468158016/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713296706 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 354.149463] Lustre: 3027:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 8 previous similar messages
[ 354.172238] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.130@tcp (at 0@lo)
[ 354.174655] Lustre: Skipped 1 previous similar message
[ 357.021015] Lustre: DEBUG MARKER: == replay-single test 2c: setstripe replay =============== 15:45:09 (1713296709)
[ 357.842179] LustreError: 32460:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only ***
[ 358.102934] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 358.701944] Lustre: Failing over lustre-MDT0000
[ 358.828524] Lustre: server umount lustre-MDT0000 complete
[ 370.557763] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 370.695915] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 370.699172] Lustre: Skipped 2 previous similar messages
[ 371.557998] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 373.398627] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:195 to 0x280000400:225)
[ 373.398636] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:195 to 0x240000400:225)
[ 373.994415] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 374.382904] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 378.749725] Lustre: DEBUG MARKER: == replay-single test 2d: setdirstripe replay ============ 15:45:31 (1713296731)
[ 379.770149] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 380.302694] Lustre: Failing over lustre-MDT0000
[ 380.412016] Lustre: server umount lustre-MDT0000 complete
[ 392.156096] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 392.161872] Lustre: Skipped 3 previous similar messages
[ 392.938648] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 393.454681] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects
[ 393.458405] Lustre: Skipped 1 previous similar message
[ 393.480124] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
[ 393.482802] Lustre: Skipped 1 previous similar message
[ 393.498694] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:195 to 0x280000400:257)
[ 393.498700] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:195 to 0x240000400:257)
[ 395.193880] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 395.563966] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 397.180321] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo)
[ 397.182464] Lustre: Skipped 3 previous similar messages
[ 399.773971] Lustre: DEBUG MARKER: == replay-single test 2e: O_CREAT|O_EXCL create replay === 15:45:52 (1713296752)
[ 400.033156] Lustre: *** cfs_fail_loc=13b, val=315***
[ 400.034895] Lustre: *** cfs_fail_loc=13b, val=2147483648***
[ 400.037265] LustreError: 3446:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880092601880 x1796521463954496/t38654705666(0) o35->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:184/0 lens 392/456 e 0 to 0 dl 1713296769 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0
[ 401.774369] LustreError: 5169:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only ***
[ 401.777659] LustreError: 5169:0:(osd_handler.c:698:osd_ro()) Skipped 1 previous similar message
[ 402.021480] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 402.550700] Lustre: Failing over lustre-MDT0000
[ 402.664041] Lustre: server umount lustre-MDT0000 complete
[ 414.363304] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 414.366798] LustreError: Skipped 1 previous similar message
[ 415.278045] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 417.018285] Lustre: 6028:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880092603b80 x1796521463954496/t38654705666(0) o35->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:201/0 lens 392/456 e 0 to 0 dl 1713296786 ref 1 fl Interpret:/202/0 rc 0/0 job:'openfile.0' uid:0 gid:0
[ 417.028142] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:195 to 0x280000400:289)
[ 417.028149] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:195 to 0x240000400:289)
[ 417.578316] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 417.969662] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 422.129232] Lustre: DEBUG MARKER: == replay-single test 3a: replay failed open(O_DIRECTORY) ========================================================== 15:46:14 (1713296774)
[ 423.072489] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 423.483077] Lustre: 3027:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296759/real 1713296759] req@ffff880131395180 x1796521468174720/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713296775 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 423.493876] Lustre: 3027:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 18 previous similar messages
[ 423.574617] Lustre: Failing over lustre-MDT0000
[ 423.693631] Lustre: server umount lustre-MDT0000 complete
[ 436.319073] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 438.129410] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:195 to 0x240000400:321)
[ 438.133813] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:195 to 0x280000400:321)
[ 438.735769] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 439.149307] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 443.679707] Lustre: DEBUG MARKER: == replay-single test 3b: replay failed open -ENOMEM ===== 15:46:36 (1713296796)
[ 444.666913] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 444.907587] Lustre: *** cfs_fail_loc=114, val=0***
[ 445.632700] Lustre: Failing over lustre-MDT0000
[ 445.729354] Lustre: server umount lustre-MDT0000 complete
[ 457.321575] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 457.326364] Lustre: Skipped 4 previous similar messages
[ 458.016214] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 458.558902] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects
[ 458.560951] Lustre: Skipped 2 previous similar messages
[ 458.575938] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
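The "*** cfs_fail_loc=... ***" lines mark fault-injection points firing inside the server. val=2147483648 is 0x80000000, the one-shot flag OR'ed into the location by the harness, so the point disarms after its first hit. A hedged sketch of how such a point is armed (the location values come from this log; parameter names per standard lctl usage):

    lctl set_param fail_loc=0x8000013b   # arm MDS fail point 0x13b, one-shot
    # ... run the operation whose reply should be dropped ...
    lctl set_param fail_loc=0            # disarm after the test step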
[ 458.579453] Lustre: Skipped 2 previous similar messages
[ 458.594558] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:195 to 0x240000400:353)
[ 458.594574] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:195 to 0x280000400:353)
[ 460.269416] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 460.639117] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 462.348402] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo)
[ 462.350844] Lustre: Skipped 5 previous similar messages
[ 464.782720] Lustre: DEBUG MARKER: == replay-single test 3c: replay failed open -ENOMEM ===== 15:46:57 (1713296817)
[ 465.736978] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 465.977649] Lustre: *** cfs_fail_loc=128, val=0***
[ 466.698808] Lustre: Failing over lustre-MDT0000
[ 466.807759] Lustre: server umount lustre-MDT0000 complete
[ 478.419692] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 478.423443] LustreError: Skipped 2 previous similar messages
[ 478.615971] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:195 to 0x240000400:385)
[ 478.615983] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:195 to 0x280000400:385)
[ 479.328241] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 481.635588] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 482.010884] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 486.111527] Lustre: DEBUG MARKER: == replay-single test 4a: |x| 10 open(O_CREAT)s ========== 15:47:18 (1713296838)
[ 486.858671] LustreError: 15608:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only ***
[ 486.863518] LustreError: 15608:0:(osd_handler.c:698:osd_ro()) Skipped 3 previous similar messages
[ 487.097616] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 487.643709] Lustre: Failing over lustre-MDT0000
[ 487.753858] Lustre: server umount lustre-MDT0000 complete
[ 499.440774] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 499.443112] Lustre: Skipped 5 previous similar messages
[ 500.179721] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 501.955125] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:391 to 0x240000400:417)
[ 501.955144] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:391 to 0x280000400:417)
[ 502.490108] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 502.855297] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 507.021296] Lustre: DEBUG MARKER: == replay-single test 4b: |x| rm 10 files ================ 15:47:39 (1713296859)
[ 508.028699] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 508.551423] Lustre: Failing over lustre-MDT0000
[ 508.668729] Lustre: server umount lustre-MDT0000 complete
[ 521.112614] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 522.857514] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:423 to 0x240000400:449)
[ 522.857907] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:423 to 0x280000400:449)
[ 523.386318] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 523.758241] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 527.930488] Lustre: DEBUG MARKER: == replay-single test 5: |x| 220 open(O_CREAT) =========== 15:48:00 (1713296880)
[ 528.877648] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 530.480712] Lustre: Failing over lustre-MDT0000
[ 530.592168] Lustre: server umount lustre-MDT0000 complete
[ 543.220212] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 544.287600] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:560 to 0x240000400:577)
[ 544.287602] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:560 to 0x280000400:577)
[ 545.602417] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 545.998108] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 555.672419] Lustre: DEBUG MARKER: == replay-single test 6a: mkdir + contained create ======= 15:48:28 (1713296908)
[ 556.313058] Lustre: 3024:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296892/real 1713296892] req@ffff8800992f1180 x1796521468208640/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713296908 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 556.323662] Lustre: 3024:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 32 previous similar messages
[ 556.647531] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 557.140431] Lustre: Failing over lustre-MDT0000
[ 557.237595] Lustre: server umount lustre-MDT0000 complete
[ 569.630186] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 571.354855] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:560 to 0x240000400:609)
[ 571.354857] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:560 to 0x280000400:609)
[ 571.850908] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 572.203892] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 578.310727] Lustre: DEBUG MARKER: == replay-single test 6b: |X| rmdir ====================== 15:48:50 (1713296930)
[ 579.243417] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 579.729458] Lustre: Failing over lustre-MDT0000
[ 579.826015] Lustre: server umount lustre-MDT0000 complete
[ 591.496418] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 591.502015] Lustre: Skipped 12 previous similar messages
[ 592.311851] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 593.775189] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects
[ 593.777601] Lustre: Skipped 5 previous similar messages
[ 593.794721] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
[ 593.797678] Lustre: Skipped 5 previous similar messages
[ 593.810373] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:560 to 0x240000400:641)
[ 593.813151] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:560 to 0x280000400:641)
[ 594.581259] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 594.956758] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 596.524323] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo)
[ 596.526696] Lustre: Skipped 11 previous similar messages
[ 599.023802] Lustre: DEBUG MARKER: == replay-single test 7: mkdir |X| contained create ====== 15:49:11 (1713296951)
[ 599.992404] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 600.483546] Lustre: Failing over lustre-MDT0000
[ 600.585844] Lustre: server umount lustre-MDT0000 complete
[ 612.098801] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 612.101476] LustreError: Skipped 5 previous similar messages
[ 612.222853] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 612.929640] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 613.838107] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:560 to 0x280000400:673)
[ 613.838121] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:560 to 0x240000400:673)
[ 615.189463] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 615.560472] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 619.742675] Lustre: DEBUG MARKER: == replay-single test 8: creat open |X| close ============ 15:49:32 (1713296972)
[ 620.453765] LustreError: 31126:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only ***
[ 620.455876] LustreError: 31126:0:(osd_handler.c:698:osd_ro()) Skipped 5 previous similar messages
[ 620.678154] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 621.194859] Lustre: Failing over lustre-MDT0000
[ 621.291627] Lustre: server umount lustre-MDT0000 complete
[ 632.954442] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 633.655902] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
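Each "wait_import_state_mount ... in FULL state" marker pair is the harness polling the client's MDC import until it reports FULL (or IDLE). The state is visible from user space; a sketch of the check, assuming the standard import parameter layout:

    lctl get_param mdc.lustre-MDT0000-mdc-*.import | grep state:
    #    state: FULL        <- import reconnected and replay finished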
[ 633.913072] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:560 to 0x280000400:705)
[ 633.913074] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:560 to 0x240000400:705)
[ 635.978562] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 636.337601] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 640.459502] Lustre: DEBUG MARKER: == replay-single test 9: |X| create (same inum/gen) ====== 15:49:53 (1713296993)
[ 641.406135] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 641.895670] Lustre: Failing over lustre-MDT0000
[ 641.996421] Lustre: server umount lustre-MDT0000 complete
[ 653.656390] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 653.916137] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:560 to 0x280000400:737)
[ 653.916139] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:560 to 0x240000400:737)
[ 654.399385] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 656.703701] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 657.089542] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 661.328622] Lustre: DEBUG MARKER: == replay-single test 10: create |X| rename unlink ======= 15:50:13 (1713297013)
[ 662.329791] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 662.846748] Lustre: Failing over lustre-MDT0000
[ 662.975711] Lustre: server umount lustre-MDT0000 complete
[ 674.689031] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 675.380925] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 677.172294] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:560 to 0x240000400:769)
[ 677.174194] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:560 to 0x280000400:769)
[ 677.813827] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 678.255794] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 682.337127] Lustre: DEBUG MARKER: == replay-single test 11: create open write rename |X| create-old-name read ========================================================== 15:50:34 (1713297034)
[ 683.319296] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 683.828447] Lustre: Failing over lustre-MDT0000
[ 683.931604] Lustre: server umount lustre-MDT0000 complete
[ 695.602327] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 696.305482] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 698.026323] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:771 to 0x240000400:801)
[ 698.026756] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:771 to 0x280000400:801)
[ 698.535621] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 698.889943] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 702.974417] Lustre: DEBUG MARKER: == replay-single test 12: open, unlink |X| close ========= 15:50:55 (1713297055)
[ 703.927941] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 704.427552] Lustre: Failing over lustre-MDT0000
[ 704.546417] Lustre: server umount lustre-MDT0000 complete
[ 716.490361] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 717.247848] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 718.975600] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:771 to 0x280000400:833)
[ 718.980314] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:771 to 0x240000400:833)
[ 719.486138] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 719.852674] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 724.357089] Lustre: DEBUG MARKER: == replay-single test 13: open chmod 0 |x| write close === 15:51:16 (1713297076)
[ 725.335247] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 725.819465] Lustre: Failing over lustre-MDT0000
[ 725.915991] Lustre: server umount lustre-MDT0000 complete
[ 737.641379] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 738.346456] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 739.045720] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:771 to 0x240000400:865)
[ 739.045733] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:835 to 0x280000400:865)
[ 740.595647] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 740.970381] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 745.078322] Lustre: DEBUG MARKER: == replay-single test 14: open(O_CREAT), unlink |X| close ========================================================== 15:51:37 (1713297097)
[ 746.303329] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 746.961778] Lustre: Failing over lustre-MDT0000
[ 747.076693] Lustre: server umount lustre-MDT0000 complete
[ 758.935545] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 758.937281] Lustre: Skipped 11 previous similar messages
[ 759.066138] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:867 to 0x280000400:897)
[ 759.066146] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:771 to 0x240000400:897)
[ 759.663810] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 761.907995] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 762.267514] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 766.461113] Lustre: DEBUG MARKER: == replay-single test 15: open(O_CREAT), unlink |X| touch new, close ========================================================== 15:51:59 (1713297119)
[ 767.431111] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 767.916417] Lustre: Failing over lustre-MDT0000
[ 768.023590] Lustre: server umount lustre-MDT0000 complete
[ 779.692460] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 779.694247] Lustre: Skipped 1 previous similar message
[ 780.405759] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 782.130403] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:899 to 0x240000400:929)
[ 782.130406] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:899 to 0x280000400:929)
[ 782.639634] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 782.984614] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 787.114781] Lustre: DEBUG MARKER: == replay-single test 16: |X| open(O_CREAT), unlink, touch new, unlink new ========================================================== 15:52:19 (1713297139)
[ 788.085934] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 788.585756] Lustre: Failing over lustre-MDT0000
[ 788.685834] Lustre: server umount lustre-MDT0000 complete
[ 801.044758] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 802.762923] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:899 to 0x240000400:961)
[ 802.762928] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:931 to 0x280000400:961)
[ 803.272604] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 803.642130] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 807.734210] Lustre: DEBUG MARKER: == replay-single test 17: |X| open(O_CREAT), |replay| close ========================================================== 15:52:40 (1713297160)
[ 808.662129] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 809.151790] Lustre: Failing over lustre-MDT0000
[ 809.251739] Lustre: server umount lustre-MDT0000 complete
[ 815.331103] Lustre: 3024:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713297151/real 1713297151] req@ffff8800a225b800 x1796521468315712/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713297167 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 815.331111] Lustre: 3025:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713297151/real 1713297151] req@ffff8800a2259880 x1796521468315776/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713297167 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 815.331117] Lustre: 3025:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 67 previous similar messages
[ 821.850201] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 823.589577] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:899 to 0x240000400:993)
[ 823.589881] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:963 to 0x280000400:993)
[ 824.102508] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 824.463042] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 828.626875] Lustre: DEBUG MARKER: == replay-single test 18: open(O_CREAT), unlink, touch new, close, touch, unlink ========================================================== 15:53:01 (1713297181)
[ 829.595851] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 830.140482] Lustre: Failing over lustre-MDT0000
[ 830.248480] Lustre: server umount lustre-MDT0000 complete
[ 842.624639] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 844.213161] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:995 to 0x240000400:1025)
[ 844.213163] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:995 to 0x280000400:1025)
[ 844.849984] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 845.217136] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 849.496080] Lustre: DEBUG MARKER: == replay-single test 19: mcreate, open, write, rename === 15:53:22 (1713297202)
[ 850.463279] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 851.001699] Lustre: Failing over lustre-MDT0000
[ 851.095345] Lustre: server umount lustre-MDT0000 complete
[ 862.794578] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 862.799986] Lustre: Skipped 25 previous similar messages
[ 862.837979] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 862.839715] Lustre: Skipped 3 previous similar messages
[ 863.615042] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 864.206969] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects
[ 864.209564] Lustre: Skipped 12 previous similar messages
[ 864.232891] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
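The repeating "@@@ Request sent has timed out for slow reply" lines carry a compact request descriptor. A best-effort reading of its fields (based on the ptlrpc DEBUG_REQ format; treat the per-field glosses as informed annotation, not authoritative documentation):

    # x1796521468315712/t0(0)   -> XID / transno (t0 = no transaction committed yet)
    # o400                      -> RPC opcode; 400 is OBD_PING, so these are missed pings
    # ->lustre-MDT0000-lwp-OST0001@0@lo:12/10 -> destination import, NID, request/reply portals
    # sent .../real ...         -> when the request was sent; dl ... is its deadline
    # e 0 to 1                  -> early replies seen; "to 1" marks the request as timed out
    # rc 0/-1                   -> request status / reply status

The kworker pings simply time out while the MDT is down and resume once recovery completes.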
[ 864.235388] Lustre: Skipped 12 previous similar messages
[ 864.248133] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1027 to 0x280000400:1057)
[ 864.248203] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1027 to 0x240000400:1057)
[ 865.878210] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 866.239425] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 867.820157] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to (at 0@lo)
[ 867.821863] Lustre: Skipped 25 previous similar messages
[ 870.400414] Lustre: DEBUG MARKER: == replay-single test 20a: |X| open(O_CREAT), unlink, replay, close (test mds_cleanup_orphans) ========================================================== 15:53:42 (1713297222)
[ 871.391048] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 871.940635] Lustre: Failing over lustre-MDT0000
[ 872.037577] Lustre: server umount lustre-MDT0000 complete
[ 883.743119] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 883.745626] LustreError: Skipped 12 previous similar messages
[ 884.271130] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1059 to 0x240000400:1089)
[ 884.271435] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1027 to 0x280000400:1089)
[ 884.569663] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 886.823903] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 887.253227] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 891.372531] Lustre: DEBUG MARKER: == replay-single test 20b: write, unlink, eviction, replay (test mds_cleanup_orphans) ========================================================== 15:54:03 (1713297243)
[ 892.306749] Lustre: 32325:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting a6006888-f24c-4636-9a0e-7a69e8db8c11 at administrative request
[ 894.243599] Lustre: Failing over lustre-MDT0000
[ 894.344536] Lustre: server umount lustre-MDT0000 complete
[ 906.741571] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 908.467086] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1027 to 0x280000400:1121)
[ 908.467094] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1091 to 0x240000400:1121)
[ 908.990852] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 909.364236] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 911.108278] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 917.822778] Lustre: DEBUG MARKER: before 6144, after 6144
[ 921.233774] Lustre: DEBUG MARKER: == replay-single test 20c: check that client eviction does not affect file content ========================================================== 15:54:33 (1713297273)
[ 921.498671] Lustre: 2808:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting a6006888-f24c-4636-9a0e-7a69e8db8c11 at administrative request
[ 927.019674] Lustre: DEBUG MARKER: == replay-single test 21: |X| open(O_CREAT), unlink touch new, replay, close (test mds_cleanup_orphans) ========================================================== 15:54:39 (1713297279)
[ 927.885883] LustreError: 3739:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only ***
[ 927.890920] LustreError: 3739:0:(osd_handler.c:698:osd_ro()) Skipped 12 previous similar messages
[ 928.122859] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 928.630394] Lustre: Failing over lustre-MDT0000
[ 928.736347] Lustre: server umount lustre-MDT0000 complete
[ 941.397392] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 943.254638] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1124 to 0x240000400:1153)
[ 943.254640] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1123 to 0x280000400:1153)
[ 943.870405] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 944.256115] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 948.671155] Lustre: DEBUG MARKER: == replay-single test 22: open(O_CREAT), |X| unlink, replay, close (test mds_cleanup_orphans) ========================================================== 15:55:01 (1713297301)
[ 950.145543] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 950.882196] Lustre: Failing over lustre-MDT0000
[ 951.022862] Lustre: server umount lustre-MDT0000 complete
[ 964.132468] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 964.457703] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1155 to 0x280000400:1185)
[ 964.457710] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1155 to 0x240000400:1185)
[ 966.623042] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 967.067224] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 971.309264] Lustre: DEBUG MARKER: == replay-single test 23: open(O_CREAT), |X| unlink touch new, replay, close (test mds_cleanup_orphans) ========================================================== 15:55:23 (1713297323)
[ 972.587754] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 973.191313] Lustre: Failing over lustre-MDT0000
[ 973.335927] Lustre: server umount lustre-MDT0000 complete
[ 986.540943] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 988.399414] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1187 to 0x240000400:1217)
[ 988.403178] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1187 to 0x280000400:1217)
[ 988.934310] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 989.456344] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 994.439046] Lustre: DEBUG MARKER: == replay-single test 24: open(O_CREAT), replay, unlink, close (test mds_cleanup_orphans) ========================================================== 15:55:47 (1713297347)
[ 995.495142] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 995.986726] Lustre: Failing over lustre-MDT0000
[ 996.086194] Lustre: server umount lustre-MDT0000 complete
[ 1008.370180] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 1008.373087] Lustre: Skipped 5 previous similar messages
[ 1009.415812] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1009.468944] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1219 to 0x280000400:1249)
[ 1009.468946] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1219 to 0x240000400:1249)
[ 1012.089761] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1012.518822] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1016.930602] Lustre: DEBUG MARKER: == replay-single test 25: open(O_CREAT), unlink, replay, close (test mds_cleanup_orphans) ========================================================== 15:56:09 (1713297369)
[ 1018.212726] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1018.657531] Lustre: Failing over lustre-MDT0000
[ 1018.758043] Lustre: server umount lustre-MDT0000 complete
[ 1031.575716] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1033.394355] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1251 to 0x240000400:1281)
[ 1033.394365] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1219 to 0x280000400:1281)
[ 1034.127756] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1034.609657] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1039.089565] Lustre: DEBUG MARKER: == replay-single test 26: |X| open(O_CREAT), unlink two, close one, replay, close one (test mds_cleanup_orphans) ========================================================== 15:56:31 (1713297391)
[ 1040.252662] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1040.868592] Lustre: Failing over lustre-MDT0000
[ 1040.990958] Lustre: server umount lustre-MDT0000 complete
[ 1054.494116] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1054.573544] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1283 to 0x280000400:1313)
[ 1054.573836] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1283 to 0x240000400:1313)
[ 1056.989509] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1057.458975] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1062.086582] Lustre: DEBUG MARKER: == replay-single test 27: |X| open(O_CREAT), unlink two, replay, close two (test mds_cleanup_orphans) ========================================================== 15:56:54 (1713297414)
[ 1063.381176] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1063.994722] Lustre: Failing over lustre-MDT0000
[ 1064.115610] Lustre: server umount lustre-MDT0000 complete
[ 1076.802788] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1078.565172] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1315 to 0x240000400:1345)
[ 1078.565303] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1315 to 0x280000400:1345)
[ 1079.153206] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1079.585556] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1084.289739] Lustre: DEBUG MARKER: == replay-single test 28: open(O_CREAT), |X| unlink two, close one, replay, close one (test mds_cleanup_orphans) ========================================================== 15:57:16 (1713297436)
[ 1085.557961] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1086.287637] Lustre: Failing over lustre-MDT0000
[ 1086.403093] Lustre: server umount lustre-MDT0000 complete
[ 1099.637210] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1347 to 0x240000400:1377)
[ 1099.637242] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1347 to 0x280000400:1377)
[ 1099.910594] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1102.669445] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1103.222093] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1108.618943] Lustre: DEBUG MARKER: == replay-single test 29: open(O_CREAT), |X| unlink two, replay, close two (test mds_cleanup_orphans) ========================================================== 15:57:41 (1713297461)
[ 1109.853097] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1110.554340] Lustre: Failing over lustre-MDT0000
[ 1110.707038] Lustre: server umount lustre-MDT0000 complete
[ 1124.123439] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1124.653694] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1379 to 0x280000400:1409)
[ 1124.653696] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1379 to 0x240000400:1409)
[ 1126.842377] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1127.407147] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1132.815429] Lustre: DEBUG MARKER: == replay-single test 30: open(O_CREAT) two, unlink two, replay, close two (test mds_cleanup_orphans) ========================================================== 15:58:05 (1713297485)
[ 1133.980646] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1134.546267] Lustre: Failing over lustre-MDT0000
[ 1134.680000] Lustre: server umount lustre-MDT0000 complete
[ 1147.640717] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1149.414649] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1411 to 0x240000400:1441)
[ 1149.417495] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1411 to 0x280000400:1441)
[ 1150.109901] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1150.529810] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1155.136703] Lustre: DEBUG MARKER: == replay-single test 31: open(O_CREAT) two, unlink one, |X| unlink one, close two (test mds_cleanup_orphans) ========================================================== 15:58:27 (1713297507)
[ 1156.325786] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1156.900978] Lustre: Failing over lustre-MDT0000
[ 1157.009311] Lustre: server umount lustre-MDT0000 complete
[ 1169.725615] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1443 to 0x240000400:1473)
[ 1169.725964] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1443 to 0x280000400:1473)
[ 1169.852367] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1172.252371] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1172.619190] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1177.239145] Lustre: DEBUG MARKER: == replay-single test 32: close() notices client eviction; close() after client eviction ========================================================== 15:58:49 (1713297529)
[ 1177.519745] Lustre: 32268:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting a6006888-f24c-4636-9a0e-7a69e8db8c11 at administrative request
[ 1184.183588] Lustre: DEBUG MARKER: == replay-single test 33a: fid seq shouldn't be reused after abort recovery ========================================================== 15:58:56 (1713297536)
[ 1185.068529] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1185.705128] Lustre: Failing over lustre-MDT0000
[ 1185.826708] Lustre: server umount lustre-MDT0000 complete
[ 1188.690364] Lustre: lustre-MDT0000: Aborting client recovery
[ 1188.691918] LustreError: 1549:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery
[ 1188.694220] Lustre: 1667:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 1188.696350] Lustre: 1667:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client a6006888-f24c-4636-9a0e-7a69e8db8c11@
[ 1188.698884] Lustre: lustre-MDT0000: disconnecting 1 stale clients
[ 1188.707488] Lustre: lustre-MDT0000-osd: cancel update llog [0x200000400:0x1:0x0]
[ 1188.737067] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1480 to 0x240000400:1505)
[ 1188.738073] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1479 to 0x280000400:1505)
[ 1189.693517] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1196.693072] Lustre: DEBUG MARKER: == replay-single test 33b: test fid seq allocation ======= 15:59:09 (1713297549)
[ 1197.786301] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1198.302584] Lustre: Failing over lustre-MDT0000
[ 1198.399490] Lustre: server umount lustre-MDT0000 complete
[ 1200.536691] Lustre: *** cfs_fail_loc=1311, val=0***
[ 1200.543486] Lustre: lustre-MDT0000: Aborting client recovery
[ 1200.544662] LustreError: 4071:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery
[ 1200.546744] Lustre: 4313:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 1200.549053] Lustre: 4313:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages
[ 1200.551041] Lustre: 4313:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client a6006888-f24c-4636-9a0e-7a69e8db8c11@
[ 1200.553987] Lustre: lustre-MDT0000: disconnecting 1 stale clients
[ 1200.563478] Lustre: lustre-MDT0000-osd: cancel update llog [0x200015bc0:0x1:0x0]
[ 1200.592892] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1516 to 0x240000400:1537)
[ 1200.592909] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1516 to 0x280000400:1537)
[ 1200.595218] mount.lustre (4071) used greatest stack depth: 9992 bytes left
[ 1201.322486] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1204.336895] Lustre: *** cfs_fail_loc=1311, val=0***
[ 1207.834640] Lustre: DEBUG MARKER: == replay-single test 34: abort recovery before client does replay (test mds_cleanup_orphans) ========================================================== 15:59:20 (1713297560)
[ 1208.906695] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1209.412681] Lustre: Failing over lustre-MDT0000
[ 1209.532348] Lustre: server umount lustre-MDT0000 complete
[ 1212.140249] Lustre: lustre-MDT0000: Aborting client recovery
[ 1212.142188] LustreError: 6715:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery
[ 1212.145103] Lustre: 6870:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 1212.147302] Lustre: 6870:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages
[ 1212.149398] Lustre: 6870:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client a6006888-f24c-4636-9a0e-7a69e8db8c11@
[ 1212.152330] Lustre: lustre-MDT0000: disconnecting 1 stale clients
[ 1212.160493] Lustre: lustre-MDT0000-osd: cancel update llog [0x200016778:0x1:0x0]
[ 1212.192687] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1543 to 0x280000400:1569)
[ 1212.194846] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1544 to 0x240000400:1569)
[ 1212.933202] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1219.002625] Lustre: DEBUG MARKER: == replay-single test 35: test recovery from llog for unlink op ========================================================== 15:59:31 (1713297571)
[ 1219.280894] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 1219.282765] LustreError: 6767:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800b188ea00 x1796521464397952/t201863462916(0) o36->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:242/0 lens 512/456 e 0 to 0 dl 1713297582 ref 1 fl Interpret:/200/0 rc 0/0 job:'rm.0' uid:0 gid:0
[ 1221.875217] Lustre: Failing over lustre-MDT0000
[ 1221.987090] Lustre: server umount lustre-MDT0000 complete
[ 1224.404276] Lustre: lustre-MDT0000: Aborting client recovery
[ 1224.406094] LustreError: 9033:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery
[ 1224.409211] Lustre: 9170:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 1224.412082] Lustre: 9170:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages
[ 1224.416171] Lustre: 9170:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client a6006888-f24c-4636-9a0e-7a69e8db8c11@
[ 1224.421682] Lustre: lustre-MDT0000: disconnecting 1 stale clients
[ 1224.431320] Lustre: lustre-MDT0000-osd: cancel update llog [0x200017330:0x1:0x0]
[ 1224.460687] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1571 to 0x240000400:1601)
[ 1224.463189] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1543 to 0x280000400:1601)
[ 1225.316148] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1231.349271] Lustre: DEBUG MARKER: SKIP: replay-single test_36 skipping ALWAYS excluded test 36
[ 1232.726871] Lustre: DEBUG MARKER: == replay-single test 37: abort recovery before client does replay (test mds_cleanup_orphans for directories) ========================================================== 15:59:45 (1713297585)
[ 1233.939444] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1234.780454] Lustre: Failing over lustre-MDT0000
[ 1234.798613] Lustre: lustre-MDT0000: Not available for connect from 192.168.201.30@tcp (stopping)
[ 1234.871423] Lustre: server umount lustre-MDT0000 complete
[ 1237.276586] Lustre: lustre-MDT0000: Aborting client recovery
[ 1237.277893] LustreError: 11707:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery
[ 1237.280052] Lustre: 11842:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 1237.283900] Lustre: 11842:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages
[ 1237.287441] Lustre: 11842:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client a6006888-f24c-4636-9a0e-7a69e8db8c11@
[ 1237.292495] Lustre: lustre-MDT0000: disconnecting 1 stale clients
[ 1237.302086] Lustre: lustre-MDT0000-osd: cancel update llog [0x200017b00:0x1:0x0]
[ 1237.330661] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:1543 to 0x280000400:1633)
[ 1237.330666] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:1571 to 0x240000400:1633)
[ 1238.213853] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1245.874415] Lustre: DEBUG MARKER: == replay-single test 38: test recovery from unlink llog (test llog_gen_rec) ========================================================== 15:59:58 (1713297598)
[ 1254.717143] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1255.321359] Lustre: Failing over lustre-MDT0000
[ 1255.436475] Lustre: server umount lustre-MDT0000 complete
[ 1267.531552] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 1267.534164] Lustre: Skipped 22 previous similar messages
[ 1268.414849] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1269.904440] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:2034 to 0x240000400:2049)
[ 1269.904442] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:2034 to 0x280000400:2049)
[ 1270.980246] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1271.421074] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1281.899352] Lustre: DEBUG MARKER: == replay-single test 39: test recovery from unlink llog (test llog_gen_rec) ========================================================== 16:00:34 (1713297634)
[ 1287.574831] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 1290.409570] Lustre: Failing over lustre-MDT0000
[ 1290.514672] Lustre: server umount lustre-MDT0000 complete
[ 1302.703438] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 1302.705896] Lustre: Skipped 24 previous similar messages
[ 1303.774512] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 1305.859492] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:2450 to 0x240000400:2465)
[ 1305.859917] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:2450 to 0x280000400:2465)
[ 1306.681359] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 1307.244671] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 1317.904798] Lustre: DEBUG MARKER: == replay-single test 41: read from a valid osc while other oscs are invalid ========================================================== 16:01:10 (1713297670)
[ 1318.455429] Lustre: setting import lustre-OST0001_UUID INACTIVE by administrator request
[ 1318.756395] Lustre: lustre-OST0001: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting
[ 1318.760196] LustreError: 167-0: lustre-OST0001-osc-MDT0000: This client was evicted by lustre-OST0001; in progress operations using this service will fail.
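Tests 20b-32 evict the client on request, and tests 33a-37 abort recovery outright; the "evicting ... at administrative request" and "Aborting client recovery ... disconnecting 1 stale clients" sequences above match what these two administrative actions trigger. A hedged sketch (device name and client UUID taken from this log; parameter and command names per standard lctl usage):

    # Evict one client by UUID (calls obd_export_evict_by_uuid() on the MDT):
    lctl set_param mdt.lustre-MDT0000.evict_client=a6006888-f24c-4636-9a0e-7a69e8db8c11
    # Abort the recovery window instead of waiting for clients to replay:
    lctl --device lustre-MDT0000 abort_recovery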
[ 1322.493169] Lustre: DEBUG MARKER: == replay-single test 42: recovery after ost failure ===== 16:01:14 (1713297674) [ 1328.318496] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 1331.881742] Lustre: Failing over lustre-OST0000 [ 1331.924990] Lustre: server umount lustre-OST0000 complete [ 1332.731418] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 1332.737520] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1334.962661] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.201.30@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1337.755420] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1339.967093] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.201.30@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1345.419646] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1391.882273] Lustre: DEBUG MARKER: == replay-single test 43: mds osc import failure during recovery; don't LBUG ========================================================== 16:02:24 (1713297744) [ 1393.161158] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1394.094806] Lustre: Failing over lustre-MDT0000 [ 1394.139420] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1394.139600] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 1394.147042] Lustre: Skipped 43 previous similar messages [ 1394.211966] Lustre: server umount lustre-MDT0000 complete [ 1406.278220] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1406.280833] LustreError: Skipped 19 previous similar messages [ 1407.445729] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1409.439643] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 1409.444970] Lustre: Skipped 16 previous similar messages [ 1409.466569] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
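The "Failing over lustre-OST0000" / "server umount complete" pairs in test 42, and the 137-5 "not available for connect ... HA pair" errors that follow them, are the framework stopping a target and later restarting it; peers simply retry until the service is back. A manual equivalent, as a sketch only (mount point and block device below are illustrative, not taken from this log):

    # stop the target; the console then logs "Failing over lustre-OST0000"
    umount /mnt/lustre-ost1
    # until remount, peers log error 137-5 ("not available for connect ...
    # check that the target is mounted on the other server")
    mount -t lustre /dev/vg0/ost0 /mnt/lustre-ost1    # recovery window opens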
[ 1409.471071] Lustre: Skipped 16 previous similar messages [ 1409.495821] Lustre: *** cfs_fail_loc=204, val=2147483648*** [ 1409.495884] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:2866 to 0x280000400:2881) [ 1410.317030] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1410.870750] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1414.421027] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo) [ 1414.424206] Lustre: Skipped 43 previous similar messages [ 1425.495094] Lustre: 26805:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713297761/real 1713297761] req@ffff88009fda5180 x1796521468675712/t0(0) o5->lustre-OST0000-osc-MDT0000@0@lo:28/4 lens 432/432 e 0 to 1 dl 1713297777 ref 2 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'osp-pre-0-0.0' uid:0 gid:0 [ 1425.510871] Lustre: 26805:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 92 previous similar messages [ 1425.514046] LustreError: 26805:0:(osp_precreate.c:992:osp_precreate_cleanup_orphans()) lustre-OST0000-osc-MDT0000: cannot cleanup orphans: rc = -11 [ 1425.514708] Lustre: lustre-OST0000: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting [ 1426.252567] Lustre: DEBUG MARKER: == replay-single test 44a: race in target handle connect ========================================================== 16:02:58 (1713297778) [ 1426.520315] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:2867 to 0x240000400:2913) [ 1427.596016] LustreError: 27591:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1432.598151] LustreError: 27591:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1432.601187] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnecting [ 1433.113729] LustreError: 27591:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1438.118164] LustreError: 27591:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1438.123302] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnecting [ 1438.804656] LustreError: 27591:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1443.808212] LustreError: 27591:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1443.812540] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnecting [ 1444.496279] LustreError: 27591:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1449.500167] LustreError: 27591:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1450.187207] LustreError: 27591:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1455.191190] LustreError: 27591:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1455.197025] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnecting [ 1455.200103] Lustre: Skipped 1 previous similar message [ 1461.257349] LustreError: 28913:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1461.259120] LustreError: 28913:0:(ldlm_lib.c:1106:target_handle_connect()) Skipped 1 previous similar message [ 1466.261141] 
LustreError: 28913:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1466.265584] LustreError: 28913:0:(ldlm_lib.c:1106:target_handle_connect()) Skipped 1 previous similar message [ 1471.919136] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnecting [ 1471.922837] Lustre: Skipped 2 previous similar messages [ 1477.766323] LustreError: 28913:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1477.769360] LustreError: 28913:0:(ldlm_lib.c:1106:target_handle_connect()) Skipped 2 previous similar messages [ 1482.773092] LustreError: 28913:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1482.776308] LustreError: 28913:0:(ldlm_lib.c:1106:target_handle_connect()) Skipped 2 previous similar messages [ 1486.112703] Lustre: DEBUG MARKER: == replay-single test 44b: race in target handle connect ========================================================== 16:03:58 (1713297838) [ 1486.594890] LustreError: 28913:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1496.595574] Lustre: lustre-MDT0000: Export ffff8801318d9800 already connecting from 192.168.201.30@tcp [ 1501.599666] Lustre: lustre-MDT0000: Export ffff8801318d9800 already connecting from 192.168.201.30@tcp [ 1506.605645] Lustre: lustre-MDT0000: Export ffff8801318d9800 already connecting from 192.168.201.30@tcp [ 1511.613971] Lustre: lustre-MDT0000: Export ffff8801318d9800 already connecting from 192.168.201.30@tcp [ 1516.621945] Lustre: lustre-MDT0000: Export ffff8801318d9800 already connecting from 192.168.201.30@tcp [ 1526.605114] LustreError: 28913:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 awake [ 1526.608041] Lustre: 28913:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/20s); client may timeout req@ffff88009fda7480 x1796521465705792/t0(0) o38->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:0/0 lens 520/416 e 0 to 0 dl 1713297858 ref 1 fl Complete:H/200/0 rc 0/0 job:'lctl.0' uid:0 gid:0 [ 1526.637871] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnecting [ 1526.641803] Lustre: Skipped 3 previous similar messages [ 1527.144614] LustreError: 27591:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1537.143939] Lustre: lustre-MDT0000: Export ffff8801318d9800 already connecting from 192.168.201.30@tcp [ 1537.146536] Lustre: Skipped 1 previous similar message [ 1557.181828] Lustre: lustre-MDT0000: Export ffff8801318d9800 already connecting from 192.168.201.30@tcp [ 1557.184173] Lustre: Skipped 4 previous similar messages [ 1567.150109] LustreError: 27591:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 awake [ 1567.152758] Lustre: 27591:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/20s); client may timeout req@ffff8800933a4e00 x1796521465707648/t0(0) o38->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:0/0 lens 520/416 e 0 to 0 dl 1713297899 ref 1 fl Complete:H/200/0 rc 0/0 job:'lctl.0' uid:0 gid:0 [ 1567.197888] LustreError: 26770:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1592.198136] Lustre: lustre-MDT0000: Export ffff8801318d9800 already connecting from 192.168.201.30@tcp [ 1592.200419] Lustre: Skipped 1 previous similar message [ 1607.201657] LustreError: 
26770:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 awake [ 1607.204798] Lustre: 26770:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/20s); client may timeout req@ffff88009fdd3800 x1796521465708736/t0(0) o38->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:0/0 lens 520/416 e 0 to 0 dl 1713297939 ref 1 fl Complete:H/200/0 rc 0/0 job:'kworker.0' uid:0 gid:0 [ 1607.230260] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnecting [ 1607.234568] Lustre: Skipped 2 previous similar messages [ 1607.236092] LustreError: 26772:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1647.241103] LustreError: 26772:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 awake [ 1647.243132] Lustre: 26772:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/20s); client may timeout req@ffff880091182680 x1796521465709632/t0(0) o38->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:0/0 lens 520/416 e 0 to 0 dl 1713297979 ref 1 fl Complete:H/200/0 rc 0/0 job:'kworker.0' uid:0 gid:0 [ 1647.261987] LustreError: 26772:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1672.262104] Lustre: lustre-MDT0000: Export ffff8801318d9800 already connecting from 192.168.201.30@tcp [ 1672.265856] Lustre: Skipped 5 previous similar messages [ 1687.266182] LustreError: 26772:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 awake [ 1687.270886] Lustre: 26772:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/20s); client may timeout req@ffff8800a5de3480 x1796521465710528/t0(0) o38->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:0/0 lens 520/416 e 0 to 0 dl 1713298019 ref 1 fl Complete:H/200/0 rc 0/0 job:'kworker.0' uid:0 gid:0 [ 1687.294499] LustreError: 26772:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1711.399051] LustreError: 26772:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout interrupted [ 1711.401986] Lustre: 26772:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/4s); client may timeout req@ffff880092edb100 x1796521465711424/t0(0) o38->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:0/0 lens 520/416 e 0 to 0 dl 1713298059 ref 1 fl Complete:H/200/0 rc 0/0 job:'kworker.0' uid:0 gid:0 [ 1713.861417] Lustre: DEBUG MARKER: == replay-single test 44c: race in target handle connect ========================================================== 16:07:46 (1713298066) [ 1714.573288] LustreError: 564:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 1714.575421] LustreError: 564:0:(osd_handler.c:698:osd_ro()) Skipped 18 previous similar messages [ 1714.816124] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1715.870493] Lustre: Failing over lustre-MDT0000 [ 1716.009342] Lustre: server umount lustre-MDT0000 complete [ 1718.243222] Lustre: *** cfs_fail_loc=712, val=0*** [ 1718.244974] LustreError: 20855:0:(service.c:1236:ptlrpc_check_req()) @@@ Invalid replay without recovery req@ffff88012cb24e00 x1796521468708608/t0(0) o400->lustre-MDT0000-mdtlov_UUID@0@lo:0/0 lens 224/0 e 0 to 0 dl 0 ref 1 fl New:/2c0/ffffffff rc 0/-1 job:'ptlrpcd_rcv.0' uid:0 gid:0 [ 1718.284207] Lustre: lustre-MDT0000: Aborting client recovery [ 1718.285357] LustreError: 
1619:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1718.287342] Lustre: 1745:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1718.289603] Lustre: 1745:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1718.291397] Lustre: 1745:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client a6006888-f24c-4636-9a0e-7a69e8db8c11@ [ 1718.294103] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 1718.303398] Lustre: lustre-MDT0000-osd: cancel update llog [0x2000182d0:0x1:0x0] [ 1718.334527] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:2866 to 0x280000400:2913) [ 1718.334529] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:2867 to 0x240000400:2945) [ 1719.132072] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1722.734337] Lustre: Failing over lustre-MDT0000 [ 1722.836273] Lustre: server umount lustre-MDT0000 complete [ 1735.262459] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1737.126788] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:2866 to 0x280000400:2945) [ 1737.126793] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:2867 to 0x240000400:2977) [ 1737.666399] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1738.043934] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1742.459558] Lustre: DEBUG MARKER: == replay-single test 45: Handle failed close ============ 16:08:15 (1713298095) [ 1742.479086] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnecting [ 1742.481975] Lustre: Skipped 2 previous similar messages [ 1747.637945] Lustre: DEBUG MARKER: == replay-single test 46: Don't leak file handle after open resend (3325) ========================================================== 16:08:20 (1713298100) [ 1747.944519] Lustre: *** cfs_fail_loc=122, val=2147483648*** [ 1747.945746] LustreError: 3169:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012c190000 x1796521465736320/t0(0) o700->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:16/0 lens 264/248 e 0 to 0 dl 1713298111 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1764.981210] Lustre: Failing over lustre-MDT0000 [ 1765.094469] Lustre: server umount lustre-MDT0000 complete [ 1778.007280] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:2979 to 0x240000400:3009) [ 1778.007284] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:2947 to 0x280000400:2977) [ 1778.134773] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1780.581881] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1780.928084] Lustre: DEBUG MARKER: 
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1785.545079] Lustre: DEBUG MARKER: == replay-single test 47: MDS->OSC failure during precreate cleanup (2824) ========================================================== 16:08:58 (1713298138) [ 1786.136334] Lustre: Failing over lustre-OST0000 [ 1786.146354] Lustre: server umount lustre-OST0000 complete [ 1787.099498] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 1787.099687] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1787.099689] LustreError: Skipped 1 previous similar message [ 1797.115339] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1797.119413] LustreError: Skipped 3 previous similar messages [ 1798.070443] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 1798.072918] Lustre: Skipped 8 previous similar messages [ 1799.281855] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1801.534535] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1801.886713] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 1867.573303] Lustre: DEBUG MARKER: == replay-single test 48: MDS->OSC failure during precreate cleanup (2824) ========================================================== 16:10:20 (1713298220) [ 1868.572396] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1869.127651] Lustre: Failing over lustre-MDT0000 [ 1869.227882] Lustre: server umount lustre-MDT0000 complete [ 1881.688511] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1883.216308] Lustre: *** cfs_fail_loc=216, val=0*** [ 1883.216318] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:3030 to 0x240000400:3073) [ 1883.220717] LustreError: 11370:0:(osp_precreate.c:992:osp_precreate_cleanup_orphans()) lustre-OST0001-osc-MDT0000: cannot cleanup orphans: rc = -30 [ 1884.224366] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:2998 to 0x280000400:3041) [ 1946.722981] Lustre: DEBUG MARKER: == replay-single test 50: Double OSC recovery, don't LASSERT (3812) ========================================================== 16:11:39 (1713298299) [ 1955.345580] Lustre: DEBUG MARKER: == replay-single test 52: time out lock replay (3764) ==== 16:11:47 (1713298307) [ 1956.100876] Lustre: Failing over lustre-MDT0000 [ 1956.201592] Lustre: server umount lustre-MDT0000 complete [ 1967.878617] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1967.880941] Lustre: Skipped 7 previous similar messages [ 1968.273058] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 1968.274752] LustreError: 14900:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012bf5b100 x1796521465791872/t0(0) o101->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:236/0 lens 328/344 e 0 to 0 dl 1713298331 ref 1 fl Complete:/240/0 rc 0/0 job:'ldlm_lock_repla.0' uid:0 gid:0 
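Most of the noise in tests 43-44 is deliberate fault injection. The cfs_fail_loc numbers are OBD_FAIL_* site codes from the Lustre source, and the recurring val=2147483648 is 0x80000000, which I read as the one-shot CFS_FAIL_ONCE flag OR'd into the code; cfs_race and cfs_fail_timeout above are the sleeping/awake variants of the same mechanism. Injection is armed from userspace; a sketch using the 0x119 drop-reply site this run exercises:

    # arm a one-shot failure at site 0x119 (drop one MDS reply);
    # 0x80000000 is the one-shot flag, hence "val=2147483648" in the log
    lctl set_param fail_loc=0x80000119
    lctl set_param fail_val=0     # companion value, used by timeout/race sites
    lctl set_param fail_loc=0     # disarm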
[ 1968.641584] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1984.272915] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnected, waiting for 1 clients in recovery for 0:53 [ 1984.297402] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:3085 to 0x240000400:3105) [ 1984.299149] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3052 to 0x280000400:3073) [ 1984.812797] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1985.174804] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1989.626024] Lustre: DEBUG MARKER: == replay-single test 53a: |X| close request while two MDC requests in flight ========================================================== 16:12:22 (1713298342) [ 1990.896315] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 1992.045953] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1992.540010] Lustre: Failing over lustre-MDT0000 [ 1992.665194] Lustre: server umount lustre-MDT0000 complete [ 2004.326376] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2004.332810] Lustre: Skipped 13 previous similar messages [ 2005.079918] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2006.803951] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:3107 to 0x240000400:3137) [ 2006.803954] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3052 to 0x280000400:3105) [ 2007.327317] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2007.710113] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2012.008533] Lustre: DEBUG MARKER: == replay-single test 53b: |X| open request while two MDC requests in flight ========================================================== 16:12:44 (1713298364) [ 2012.285125] Lustre: *** cfs_fail_loc=107, val=2147483648*** [ 2014.425991] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2014.914796] Lustre: Failing over lustre-MDT0000 [ 2015.011853] Lustre: server umount lustre-MDT0000 complete [ 2026.564241] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2026.566969] LustreError: Skipped 6 previous similar messages [ 2027.390776] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2028.285047] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 2028.287823] Lustre: Skipped 6 previous similar messages [ 2028.298735] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
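The "mds1 REPLAY BARRIER on lustre-MDT0000" markers around tests 52-53 mark the point up to which server state is preserved: the framework flushes the target and flips its backing device read-only, so everything a client does afterwards exists only in client memory and must be replayed during the next recovery. A sketch of that barrier, assuming the lctl readonly ioctl that the osd "setting device osd-zfs read-only" lines elsewhere in this log point at (not the verbatim test-framework.sh code):

    sync                                      # persist the pre-barrier state
    lctl --device %lustre-MDT0000 readonly    # osd prints "*** setting device
                                              # osd-zfs read-only ***"
    # client operations from here on are replayed during the next recovery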
[ 2028.301454] Lustre: Skipped 6 previous similar messages [ 2028.313752] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3052 to 0x280000400:3137) [ 2028.313753] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:3139 to 0x240000400:3169) [ 2029.615023] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2029.984353] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2031.676452] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to (at 0@lo) [ 2031.679097] Lustre: Skipped 15 previous similar messages [ 2034.198749] Lustre: DEBUG MARKER: == replay-single test 53c: |X| open request and close request while two MDC requests in flight ========================================================== 16:13:06 (1713298386) [ 2034.454594] Lustre: *** cfs_fail_loc=107, val=2147483648*** [ 2035.675072] Lustre: 3025:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713298371/real 1713298371] req@ffff88012c961180 x1796521468765440/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713298387 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 2035.675078] Lustre: 3027:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713298371/real 1713298371] req@ffff88012c961500 x1796521468765376/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713298387 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 2035.675081] Lustre: 3027:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 19 previous similar messages [ 2036.582071] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2037.039537] Lustre: Failing over lustre-MDT0000 [ 2037.138052] Lustre: server umount lustre-MDT0000 complete [ 2049.583133] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2050.483117] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:3139 to 0x240000400:3201) [ 2050.487154] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3139 to 0x280000400:3169) [ 2055.903958] Lustre: DEBUG MARKER: == replay-single test 53d: close reply while two MDC requests in flight ========================================================== 16:13:28 (1713298408) [ 2057.200663] Lustre: *** cfs_fail_loc=13b, val=315*** [ 2057.202504] Lustre: *** cfs_fail_loc=13b, val=2147483648*** [ 2057.204190] LustreError: 23037:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88013230f100 x1796521465812800/t257698037777(0) o35->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:325/0 lens 392/456 e 0 to 0 dl 1713298420 ref 1 fl Interpret:/200/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 2057.926475] Lustre: Failing over lustre-MDT0000 [ 2058.025592] Lustre: server umount lustre-MDT0000 complete [ 2070.421110] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2070.508444] Lustre: 25401:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009947df80 x1796521465812800/t257698037777(0) 
o35->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:338/0 lens 392/456 e 0 to 0 dl 1713298433 ref 1 fl Interpret:/202/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 2070.518513] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:3139 to 0x240000400:3233) [ 2070.522359] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3171 to 0x280000400:3201) [ 2072.641417] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2073.004602] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2077.114085] Lustre: DEBUG MARKER: == replay-single test 53e: |X| open reply while two MDC requests in flight ========================================================== 16:13:49 (1713298429) [ 2077.370143] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 2077.371383] LustreError: 25399:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012c191880 x1796521465818624/t261993005072(0) o36->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:345/0 lens 504/448 e 0 to 0 dl 1713298440 ref 1 fl Interpret:/200/0 rc 0/0 job:'mcreate.0' uid:0 gid:0 [ 2079.513762] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2080.102570] Lustre: Failing over lustre-MDT0000 [ 2080.208496] Lustre: server umount lustre-MDT0000 complete [ 2092.579660] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2093.383872] Lustre: 28118:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880099551180 x1796521465818624/t261993005072(0) o36->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:361/0 lens 504/2880 e 0 to 0 dl 1713298456 ref 1 fl Interpret:/202/0 rc 0/0 job:'mcreate.0' uid:0 gid:0 [ 2093.392617] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:3139 to 0x240000400:3265) [ 2093.392625] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3203 to 0x280000400:3233) [ 2094.798113] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2095.167209] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2099.370292] Lustre: DEBUG MARKER: == replay-single test 53f: |X| open reply and close reply while two MDC requests in flight ========================================================== 16:14:11 (1713298451) [ 2099.623934] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 2099.625207] LustreError: 28118:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88013230ca80 x1796521465824640/t266287972368(0) o36->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:367/0 lens 504/448 e 0 to 0 dl 1713298462 ref 1 fl Interpret:/200/0 rc 0/0 job:'mcreate.0' uid:0 gid:0 [ 2100.845483] Lustre: *** cfs_fail_loc=13b, val=315*** [ 2101.741191] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2102.209630] Lustre: Failing over lustre-MDT0000 [ 2102.305583] Lustre: server umount lustre-MDT0000 complete [ 2114.694800] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2115.638408] Lustre: 
30784:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a554e680 x1796521465824768/t266287972369(0) o35->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:383/0 lens 392/456 e 0 to 0 dl 1713298478 ref 1 fl Interpret:/202/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 2115.645233] Lustre: 30784:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 2115.646389] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:3139 to 0x240000400:3297) [ 2115.646856] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3235 to 0x280000400:3265) [ 2120.855440] Lustre: DEBUG MARKER: == replay-single test 53g: |X| drop open reply and close request while close and open are both in flight ========================================================== 16:14:33 (1713298473) [ 2121.117448] LustreError: 30819:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880099551180 x1796521465830272/t270582939664(0) o36->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:389/0 lens 504/448 e 0 to 0 dl 1713298484 ref 1 fl Interpret:/200/0 rc 0/0 job:'mcreate.0' uid:0 gid:0 [ 2121.127834] LustreError: 30819:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 2122.346027] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 2122.348413] Lustre: Skipped 1 previous similar message [ 2123.460909] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2123.936452] Lustre: Failing over lustre-MDT0000 [ 2123.947404] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 2123.949354] Lustre: Skipped 2 previous similar messages [ 2124.029324] Lustre: server umount lustre-MDT0000 complete [ 2135.662678] Lustre: lustre-MDT0000: Not available for connect from 192.168.201.30@tcp (not set up) [ 2136.435606] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2136.815472] Lustre: 1104:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800b188fb80 x1796521465830272/t270582939664(0) o36->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:404/0 lens 504/2880 e 0 to 0 dl 1713298499 ref 1 fl Interpret:/202/0 rc 0/0 job:'mcreate.0' uid:0 gid:0 [ 2136.822537] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3235 to 0x280000400:3297) [ 2136.822558] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:3299 to 0x240000400:3329) [ 2141.884632] Lustre: DEBUG MARKER: == replay-single test 53h: open request and close reply while two MDC requests in flight ========================================================== 16:14:54 (1713298494) [ 2142.145983] Lustre: *** cfs_fail_loc=107, val=2147483648*** [ 2143.365312] Lustre: *** cfs_fail_loc=13b, val=315*** [ 2143.367365] Lustre: *** cfs_fail_loc=13b, val=2147483648*** [ 2143.368976] Lustre: Skipped 2 previous similar messages [ 2145.311731] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2145.797765] Lustre: Failing over lustre-MDT0000 [ 2145.901371] Lustre: server umount lustre-MDT0000 complete [ 2158.160357] Lustre: 3694:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009207ad80 x1796521465835840/t274877906960(0) o35->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:426/0 lens 392/456 e 0 to 0 
dl 1713298521 ref 1 fl Interpret:/202/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 2158.172355] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:3299 to 0x240000400:3361) [ 2158.172370] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3299 to 0x280000400:3329) [ 2158.295488] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2163.913638] Lustre: DEBUG MARKER: == replay-single test 55: let MDS_CHECK_RESENT return the original return code instead of 0 ========================================================== 16:15:16 (1713298516) [ 2164.159230] Lustre: *** cfs_fail_loc=12b, val=2147483991*** [ 2164.160864] LustreError: 3691:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012c963480 x1796521465840640/t279172874255(0) o101->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:432/0 lens 664/608 e 0 to 0 dl 1713298527 ref 1 fl Interpret:/200/0 rc 301/0 job:'touch.0' uid:0 gid:0 [ 2164.167457] LustreError: 3691:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 2180.159075] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnecting [ 2180.162543] Lustre: Skipped 4 previous similar messages [ 2180.165452] Lustre: 3692:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a5de0e00 x1796521465840640/t279172874255(0) o101->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:448/0 lens 664/3488 e 0 to 0 dl 1713298543 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 2183.062245] Lustre: DEBUG MARKER: == replay-single test 56: don't replay a symlink open request (3440) ========================================================== 16:15:35 (1713298535) [ 2184.051114] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2184.530301] Lustre: Failing over lustre-MDT0000 [ 2184.626257] Lustre: server umount lustre-MDT0000 complete [ 2196.991638] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2198.708380] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:3299 to 0x240000400:3393) [ 2198.711188] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3331 to 0x280000400:3361) [ 2199.207085] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2199.565647] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2213.703313] Lustre: DEBUG MARKER: == replay-single test 57: test recovery from llog for setattr op ========================================================== 16:16:06 (1713298566) [ 2214.893399] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2215.451266] Lustre: Failing over lustre-MDT0000 [ 2215.577708] Lustre: server umount lustre-MDT0000 complete [ 2228.133648] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2229.866572] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:3395 to 0x240000400:3425) [ 2229.866989] Lustre: lustre-OST0001: new connection from 
lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3331 to 0x280000400:3393) [ 2230.384863] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2230.747414] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2232.441930] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 [ 2238.221521] Lustre: DEBUG MARKER: == replay-single test 58a: test recovery from llog for setattr op (test llog_gen_rec) ========================================================== 16:16:30 (1713298590) [ 2244.996979] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2245.484308] Lustre: Failing over lustre-MDT0000 [ 2245.640055] Lustre: server umount lustre-MDT0000 complete [ 2258.050490] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2259.835625] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:4644 to 0x280000400:4673) [ 2259.835628] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:4676 to 0x240000400:4705) [ 2260.370544] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2260.726272] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2276.716670] Lustre: DEBUG MARKER: == replay-single test 58b: test replay of setxattr op ==== 16:17:09 (1713298629) [ 2277.912135] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2278.407605] Lustre: Failing over lustre-MDT0000 [ 2278.512514] Lustre: server umount lustre-MDT0000 complete [ 2290.920654] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2291.856275] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:4675 to 0x280000400:4705) [ 2291.861469] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:4676 to 0x240000400:4737) [ 2293.142753] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2293.498194] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2295.470074] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount FULL mgc.*.mgs_server_uuid [ 2295.826722] Lustre: DEBUG MARKER: mgc.*.mgs_server_uuid in FULL state after 0 sec [ 2298.745943] Lustre: DEBUG MARKER: == replay-single test 58c: resend/reconstruct setxattr op ========================================================== 16:17:31 (1713298651) [ 2304.146351] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 2320.647013] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 2320.649159] Lustre: Skipped 1 previous similar message [ 2320.650291] LustreError: 16558:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880099155050 x1796521467308480/t296352743433(0) o36->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:588/0 lens 66040/440 e 0 to 0 dl 1713298683 ref 1 fl Interpret:/200/0 rc 0/0 job:'setfattr.0' uid:0 gid:0 [ 2336.645419] Lustre: 
16560:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009ba4aa00 x1796521467308480/t296352743433(0) o36->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:604/0 lens 66040/440 e 0 to 0 dl 1713298699 ref 1 fl Interpret:/202/0 rc 0/0 job:'setfattr.0' uid:0 gid:0 [ 2339.173558] Lustre: DEBUG MARKER: SKIP: replay-single test_59 skipping ALWAYS excluded test 59 [ 2340.537126] Lustre: DEBUG MARKER: == replay-single test 60: test llog post recovery init vs llog unlink ========================================================== 16:18:13 (1713298693) [ 2342.505971] LustreError: 19931:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 2342.508949] LustreError: 19931:0:(osd_handler.c:698:osd_ro()) Skipped 12 previous similar messages [ 2342.744580] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2343.383503] Lustre: Failing over lustre-MDT0000 [ 2343.488622] Lustre: server umount lustre-MDT0000 complete [ 2355.881395] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2356.849681] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:4839 to 0x240000400:4865) [ 2356.849683] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:4806 to 0x280000400:4833) [ 2358.095077] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2358.448885] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2362.875750] Lustre: DEBUG MARKER: == replay-single test 61a: test race llog recovery vs llog cleanup ========================================================== 16:18:35 (1713298715) [ 2366.212474] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 2368.758391] Lustre: Failing over lustre-OST0000 [ 2368.785609] Lustre: server umount lustre-OST0000 complete [ 2370.171315] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2370.173608] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
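Test 58c above is the resend/reconstruct path: fail_loc 0x123 and 0x119 each drop one setxattr reply, the client times out and resends, and instead of re-executing the operation the MDT answers from its stored reply data, which is what the "mdt_req_from_lrd()) @@@ restoring transno" lines record (note the identical xid x1796521467308480 and transno t296352743433 in the dropped and restored requests). A sketch of reproducing one such cycle by hand, assuming a client mounted at /mnt/lustre; file and xattr names are hypothetical:

    lctl set_param fail_loc=0x80000123    # drop one setxattr reply, once
    touch /mnt/lustre/f58c
    setfattr -n user.t58c -v demo /mnt/lustre/f58c
    # the command stalls for one resend cycle; the server console then shows
    # "restoring transno" instead of executing the setxattr a second time
    lctl set_param fail_loc=0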
[ 2370.178245] LustreError: Skipped 1 previous similar message [ 2381.504114] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2392.472340] Lustre: Failing over lustre-OST0000 [ 2392.481092] Lustre: server umount lustre-OST0000 complete [ 2404.164866] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 2404.166579] Lustre: Skipped 16 previous similar messages [ 2405.386294] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2407.620742] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2407.977336] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2442.996729] Lustre: DEBUG MARKER: == replay-single test 61b: test race mds llog sync vs llog cleanup ========================================================== 16:19:55 (1713298795) [ 2443.838235] Lustre: Failing over lustre-MDT0000 [ 2443.954484] Lustre: server umount lustre-MDT0000 complete [ 2456.742839] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2456.869923] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:5266 to 0x240000400:5281) [ 2456.869940] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:5234 to 0x280000400:5249) [ 2467.944404] Lustre: Failing over lustre-MDT0000 [ 2468.066676] Lustre: server umount lustre-MDT0000 complete [ 2481.018976] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2481.915242] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:5266 to 0x240000400:5313) [ 2481.915244] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:5234 to 0x280000400:5281) [ 2483.331416] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2483.692177] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2487.990848] Lustre: DEBUG MARKER: == replay-single test 61c: test race mds llog sync vs llog cleanup ========================================================== 16:20:40 (1713298840) [ 2498.930714] Lustre: Failing over lustre-OST0000 [ 2498.946160] Lustre: server umount lustre-OST0000 complete [ 2500.043384] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2500.043529] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
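Throughout this run the "wait_import_state_mount (FULL|IDLE)" and "_wait_recovery_complete" markers are the framework polling the same parameters an administrator would watch during a failover; the parameter names below are the ones quoted in the markers themselves:

    lctl get_param mdt.lustre-MDT0000.recovery_status        # RECOVERING -> COMPLETE
    lctl get_param mdc.lustre-MDT0000-mdc-*.mds_server_uuid  # import state, e.g. FULL
    lctl get_param osc.lustre-OST0000-osc-*.ost_server_uuid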
[ 2500.043531] LustreError: Skipped 8 previous similar messages [ 2511.786113] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2514.012122] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2514.366241] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2519.040128] Lustre: DEBUG MARKER: == replay-single test 61d: error in llog_setup should cleanup the llog context correctly ========================================================== 16:21:11 (1713298871) [ 2519.557457] Lustre: Failing over lustre-MDT0000 [ 2519.659451] Lustre: server umount lustre-MDT0000 complete [ 2521.737388] Lustre: *** cfs_fail_loc=605, val=0*** [ 2521.738590] LustreError: 2101:0:(llog_obd.c:207:llog_setup()) MGS: ctxt 0 lop_setup=ffffffffa0552b10 failed: rc = -95 [ 2521.740799] LustreError: 2101:0:(obd_config.c:797:class_setup()) setup MGS failed (-95) [ 2521.742476] LustreError: 2101:0:(obd_mount.c:215:lustre_start_simple()) MGS setup error -95 [ 2521.744191] LustreError: 2101:0:(tgt_mount.c:135:server_deregister_mount()) MGS not registered [ 2521.746153] LustreError: 15e-a: Failed to start MGS 'MGS' (-95). Is the 'mgs' module loaded? [ 2521.747847] LustreError: 2101:0:(tgt_mount.c:1755:server_put_super()) no obd lustre-MDT0000 [ 2521.755411] Lustre: server umount lustre-MDT0000 complete [ 2521.756625] LustreError: 2101:0:(super25.c:189:lustre_fill_super()) llite: Unable to mount : rc = -95 [ 2524.141606] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2526.998438] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:5315 to 0x240000400:5345) [ 2526.998441] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:5283 to 0x280000400:5313) [ 2528.427204] Lustre: DEBUG MARKER: == replay-single test 62: don't mis-drop resent replay === 16:21:20 (1713298880) [ 2529.814532] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2530.661538] Lustre: Failing over lustre-MDT0000 [ 2530.763339] Lustre: server umount lustre-MDT0000 complete [ 2543.108405] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2544.801046] Lustre: *** cfs_fail_loc=707, val=0*** [ 2561.006118] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnected, waiting for 1 clients in recovery for 0:53 [ 2561.108699] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:5358 to 0x240000400:5377) [ 2561.108701] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:5327 to 0x280000400:5345) [ 2561.749605] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2562.140924] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2567.985857] Lustre: DEBUG MARKER: == replay-single test 65a: AT: verify early replies ====== 16:22:00 (1713298920) [ 2590.666589] LustreError: 5468:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a 
sleeping for 11000ms [ 2601.670061] LustreError: 5468:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a awake [ 2614.368603] Lustre: DEBUG MARKER: == replay-single test 65b: AT: verify early replies on packed reply / bulk ========================================================== 16:22:46 (1713298966) [ 2637.066972] LustreError: 20945:0:(tgt_handler.c:2759:tgt_brw_write()) cfs_fail_timeout id 224 sleeping for 11000ms [ 2648.070125] LustreError: 20945:0:(tgt_handler.c:2759:tgt_brw_write()) cfs_fail_timeout id 224 awake [ 2651.722255] Lustre: DEBUG MARKER: == replay-single test 66a: AT: verify MDT service time adjusts with no early replies ========================================================== 16:23:24 (1713299004) [ 2674.048667] LustreError: 4939:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a sleeping for 5000ms [ 2679.051092] LustreError: 4939:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a awake [ 2689.705144] LustreError: 4939:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a awake [ 2702.405185] Lustre: DEBUG MARKER: == replay-single test 66b: AT: verify net latency adjusts ========================================================== 16:24:14 (1713299054) [ 2787.000846] Lustre: DEBUG MARKER: == replay-single test 67a: AT: verify slow request processing doesn't induce reconnects ========================================================== 16:25:39 (1713299139) [ 2809.208128] LustreError: 7188:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a sleeping for 400ms [ 2809.212499] LustreError: 7188:0:(service.c:2338:ptlrpc_server_handle_request()) Skipped 1 previous similar message [ 2809.616142] LustreError: 7188:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a awake [ 2825.378848] LustreError: 4938:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a sleeping for 400ms [ 2825.386549] LustreError: 4938:0:(service.c:2338:ptlrpc_server_handle_request()) Skipped 38 previous similar messages [ 2825.791173] LustreError: 4938:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a awake [ 2825.796976] LustreError: 4938:0:(service.c:2338:ptlrpc_server_handle_request()) Skipped 38 previous similar messages [ 2857.648964] LustreError: 4937:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a sleeping for 400ms [ 2857.651782] LustreError: 4937:0:(service.c:2338:ptlrpc_server_handle_request()) Skipped 94 previous similar messages [ 2858.054103] LustreError: 4937:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a awake [ 2858.056375] LustreError: 4937:0:(service.c:2338:ptlrpc_server_handle_request()) Skipped 93 previous similar messages [ 2861.396095] Lustre: DEBUG MARKER: == replay-single test 67b: AT: verify instant slowdown doesn't induce reconnects ========================================================== 16:26:53 (1713299213) [ 2884.903093] Lustre: DEBUG MARKER: phase 2 [ 2889.612104] Lustre: DEBUG MARKER: == replay-single test 68: AT: verify slowing locks ======= 16:27:22 (1713299242) [ 2960.598157] Lustre: DEBUG MARKER: == replay-single test 70a: check multi client t-f ======== 16:28:33 (1713299313) [ 2961.041407] Lustre: DEBUG MARKER: SKIP: replay-single test_70a Need two or more clients, have 1 [ 2963.817248] Lustre: DEBUG MARKER: == replay-single test 70b: dbench 1mdts recovery; 1 clients ========================================================== 16:28:36 
(1713299316) [ 2965.535205] Lustre: DEBUG MARKER: Started rundbench load pid=14070 ... [ 2967.598954] LustreError: 20505:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 2967.601398] LustreError: 20505:0:(osd_handler.c:698:osd_ro()) Skipped 2 previous similar messages [ 2967.944959] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2969.529354] Lustre: DEBUG MARKER: test_70b fail mds1 1 times [ 2970.241573] Lustre: Failing over lustre-MDT0000 [ 2970.284168] Lustre: lustre-MDT0000: Not available for connect from 192.168.201.30@tcp (stopping) [ 2970.346242] Lustre: server umount lustre-MDT0000 complete [ 2982.781552] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2982.786729] LustreError: Skipped 15 previous similar messages [ 2982.913897] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2982.920610] Lustre: Skipped 35 previous similar messages [ 2982.951134] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2982.954602] Lustre: Skipped 20 previous similar messages [ 2984.046988] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2987.949257] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to (at 0@lo) [ 2987.953833] Lustre: Skipped 34 previous similar messages [ 2988.947276] Lustre: 3027:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713299325/real 1713299325] req@ffff88008a975880 x1796521469451072/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713299341 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 2988.967508] Lustre: 3027:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 62 previous similar messages [ 2995.720303] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 2995.724459] Lustre: Skipped 18 previous similar messages [ 2996.090906] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
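Tests 65-68 above exercise adaptive timeouts (AT): cfs_fail_timeout stalls a service thread past the advertised service time, and the server is expected to send early replies so the client stretches its timeout rather than reconnecting. The knobs involved are the base obd timeout plus the AT window; a sketch for inspecting them, assuming at_min/at_max are exposed as ptlrpc module parameters as on the builds I am assuming here:

    lctl get_param timeout                      # base obd_timeout
    cat /sys/module/ptlrpc/parameters/at_min    # AT floor (0 = track service time)
    cat /sys/module/ptlrpc/parameters/at_max    # AT ceiling; 0 disables AT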
[ 2996.096649] Lustre: Skipped 18 previous similar messages [ 2996.122728] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:5439 to 0x280000400:5473) [ 2996.122793] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:5492 to 0x240000400:5537) [ 2996.991578] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2997.584866] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3001.348860] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3002.948493] Lustre: DEBUG MARKER: test_70b fail mds1 2 times [ 3003.684323] Lustre: Failing over lustre-MDT0000 [ 3003.839286] Lustre: server umount lustre-MDT0000 complete [ 3016.136172] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3016.138982] Lustre: Skipped 6 previous similar messages [ 3017.006926] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3021.081498] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:5527 to 0x280000400:5569) [ 3021.081501] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:5590 to 0x240000400:5633) [ 3021.657662] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3022.041798] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3025.560826] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3027.062033] Lustre: DEBUG MARKER: test_70b fail mds1 3 times [ 3027.723336] Lustre: Failing over lustre-MDT0000 [ 3027.732320] Lustre: lustre-MDT0000: Not available for connect from 192.168.201.30@tcp (stopping) [ 3027.818468] Lustre: server umount lustre-MDT0000 complete [ 3040.479738] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3046.113395] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:5690 to 0x240000400:5729) [ 3046.113397] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:5626 to 0x280000400:5665) [ 3046.672092] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3047.048491] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3050.350840] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3051.743906] Lustre: DEBUG MARKER: test_70b fail mds1 4 times [ 3052.236525] Lustre: Failing over lustre-MDT0000 [ 3052.326745] Lustre: server umount lustre-MDT0000 complete [ 3064.987160] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3071.117340] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:5731 to 0x280000400:5761) [ 3071.117366] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:5795 to 0x240000400:5825) [ 3071.649308] Lustre: DEBUG MARKER: 
oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3072.020688] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3075.398780] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3076.781120] Lustre: DEBUG MARKER: test_70b fail mds1 5 times [ 3077.262387] Lustre: Failing over lustre-MDT0000 [ 3077.366666] Lustre: server umount lustre-MDT0000 complete [ 3089.857462] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3096.263177] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:5891 to 0x240000400:5985) [ 3096.263184] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:5826 to 0x280000400:5857) [ 3096.798118] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3097.167927] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3100.382512] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3101.751629] Lustre: DEBUG MARKER: test_70b fail mds1 6 times [ 3102.246457] Lustre: Failing over lustre-MDT0000 [ 3102.339031] Lustre: server umount lustre-MDT0000 complete [ 3114.171772] Lustre: lustre-MDT0000: Not available for connect from 0@lo (not set up) [ 3114.176027] Lustre: Skipped 1 previous similar message [ 3115.249940] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3121.399814] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:5928 to 0x280000400:5953) [ 3121.399823] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:6056 to 0x240000400:6081) [ 3122.052851] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3122.450130] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3126.183248] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3127.590706] Lustre: DEBUG MARKER: test_70b fail mds1 7 times [ 3128.187326] Lustre: Failing over lustre-MDT0000 [ 3128.293436] Lustre: server umount lustre-MDT0000 complete [ 3141.332506] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3146.176585] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:6010 to 0x280000400:6049) [ 3146.176680] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:6139 to 0x240000400:6177) [ 3146.702178] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3147.070268] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3150.389505] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3151.814660] Lustre: DEBUG MARKER: test_70b fail mds1 8 times [ 3152.297323] Lustre: Failing over lustre-MDT0000 [ 3152.391597] Lustre: server umount lustre-MDT0000 complete [ 3164.926201] Lustre: DEBUG MARKER: 
oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3171.276125] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:6241 to 0x240000400:6273) [ 3171.276224] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:6113 to 0x280000400:6145) [ 3171.830361] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3172.312542] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3175.859527] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3177.249388] Lustre: DEBUG MARKER: test_70b fail mds1 9 times [ 3177.770911] Lustre: Failing over lustre-MDT0000 [ 3177.793956] LustreError: 18606:0:(ldlm_lockd.c:1499:ldlm_handle_enqueue()) ### lock on destroyed export ffff8800a6298000 ns: mdt-lustre-MDT0000_UUID lock: ffff88012b94e400/0xd52cb7857bc39064 lrc: 4/0,0 mode: CW/CW res: [0x20001a9e3:0x12ea:0x0].0x0 bits 0x5/0x0 rrc: 3 type: IBT gid 0 flags: 0x50386400000000 nid: 192.168.201.30@tcp remote: 0x2b920f73cd4d37b8 expref: 4 pid: 18606 timeout: 0 lvb_type: 0 [ 3177.803648] LustreError: 9888:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff880091163b80 x1796521470022272/t0(0) o105->lustre-MDT0000@192.168.201.30@tcp:15/16 lens 336/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 3177.807600] LustreError: 21854:0:(ldlm_resource.c:1128:ldlm_resource_complain()) mdt-lustre-MDT0000_UUID: namespace resource [0x20001a9e3:0x12ea:0x0].0x0 (ffff880092a7a200) refcount nonzero (2) after lock cleanup; forcing cleanup. 
[ 3179.115953] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3179.119657] Lustre: Skipped 1 previous similar message [ 3183.911584] Lustre: server umount lustre-MDT0000 complete [ 3196.424934] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3206.296137] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:6330 to 0x240000400:6369) [ 3206.296145] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:6203 to 0x280000400:6241) [ 3206.787496] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3207.138188] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3210.709368] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3212.179607] Lustre: DEBUG MARKER: test_70b fail mds1 10 times [ 3212.879096] Lustre: Failing over lustre-MDT0000 [ 3212.997201] Lustre: server umount lustre-MDT0000 complete [ 3226.868023] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3231.532423] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:6434 to 0x240000400:6465) [ 3231.533198] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:6306 to 0x280000400:6337) [ 3232.289249] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3232.658183] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3236.457533] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3237.880719] Lustre: DEBUG MARKER: test_70b fail mds1 11 times [ 3238.406416] Lustre: Failing over lustre-MDT0000 [ 3238.515687] Lustre: server umount lustre-MDT0000 complete [ 3251.124733] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3256.620972] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:6396 to 0x280000400:6433) [ 3256.620976] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:6523 to 0x240000400:6561) [ 3257.384793] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3257.888354] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3261.607157] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3263.081568] Lustre: DEBUG MARKER: test_70b fail mds1 12 times [ 3263.775735] Lustre: Failing over lustre-MDT0000 [ 3263.906329] Lustre: server umount lustre-MDT0000 complete [ 3277.227489] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3281.422202] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:6618 to 0x240000400:6657) [ 3281.422215] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:6490 to 
0x280000400:6529) [ 3282.137760] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3282.704948] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3329.628782] Lustre: DEBUG MARKER: == replay-single test 70c: tar 1mdts recovery ============ 16:34:42 (1713299682) [ 3451.195275] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3461.724583] Lustre: DEBUG MARKER: test_70c fail mds1 1 times [ 3462.417442] Lustre: Failing over lustre-MDT0000 [ 3462.590454] Lustre: server umount lustre-MDT0000 complete [ 3475.548752] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3486.656562] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:9077 to 0x240000400:9121) [ 3486.656652] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:8949 to 0x280000400:8993) [ 3487.493634] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3487.916884] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3610.414917] LustreError: 2098:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 3610.418813] LustreError: 2098:0:(osd_handler.c:698:osd_ro()) Skipped 12 previous similar messages [ 3610.758240] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3621.335479] Lustre: DEBUG MARKER: test_70c fail mds1 2 times [ 3622.058091] Lustre: Failing over lustre-MDT0000 [ 3622.235976] Lustre: server umount lustre-MDT0000 complete [ 3634.663473] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3634.669448] LustreError: Skipped 12 previous similar messages [ 3634.763606] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3634.763832] Lustre: lustre-MDT0000: Not available for connect from 0@lo (not set up) [ 3634.763835] Lustre: Skipped 1 previous similar message [ 3634.772131] Lustre: Skipped 26 previous similar messages [ 3634.878373] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3634.879933] Lustre: Skipped 12 previous similar messages [ 3634.899613] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3634.901189] Lustre: Skipped 11 previous similar messages [ 3634.902071] mount.lustre (4045) used greatest stack depth: 9656 bytes left [ 3636.004758] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3639.853298] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.130@tcp (at 0@lo) [ 3639.859149] Lustre: Skipped 25 previous similar messages [ 3640.851236] Lustre: 3026:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713299976/real 1713299976] req@ffff88009257aa00 x1796521471980416/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713299992 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 3640.866727] Lustre: 3026:0:(client.c:2340:ptlrpc_expire_one_request()) 
Skipped 52 previous similar messages [ 3641.738175] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 3641.742723] Lustre: Skipped 12 previous similar messages [ 3647.068176] Lustre: lustre-MDT0000: Recovery over after 0:05, of 1 clients 1 recovered and 0 were evicted. [ 3647.070369] Lustre: Skipped 12 previous similar messages [ 3647.086710] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:10848 to 0x240000400:10881) [ 3647.087413] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:10720 to 0x280000400:10753) [ 3647.723289] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3648.170475] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3731.103663] Lustre: DEBUG MARKER: == replay-single test 70d: mkdir/rmdir striped dir 1mdts recovery ========================================================== 16:41:23 (1713300083) [ 3731.700141] Lustre: DEBUG MARKER: SKIP: replay-single test_70d needs >= 2 MDTs [ 3734.587819] Lustre: DEBUG MARKER: == replay-single test 70e: rename cross-MDT with random fails ========================================================== 16:41:26 (1713300086) [ 3735.130814] Lustre: DEBUG MARKER: SKIP: replay-single test_70e needs >= 2 MDTs [ 3737.943678] Lustre: DEBUG MARKER: == replay-single test 70f: OSS O_DIRECT recovery with 1 clients ========================================================== 16:41:30 (1713300090) [ 3742.685414] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 3744.261735] Lustre: DEBUG MARKER: test_70f failing OST 1 times [ 3744.961593] Lustre: Failing over lustre-OST0000 [ 3744.978939] Lustre: server umount lustre-OST0000 complete [ 3745.019889] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3745.026731] LustreError: Skipped 4 previous similar messages [ 3755.035777] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3755.045143] LustreError: Skipped 3 previous similar messages [ 3759.224465] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3761.954090] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3762.549554] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3770.196328] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 3771.772065] Lustre: DEBUG MARKER: test_70f failing OST 2 times [ 3772.503866] Lustre: Failing over lustre-OST0000 [ 3772.519827] Lustre: server umount lustre-OST0000 complete [ 3774.235846] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3774.243824] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 3774.251534] LustreError: Skipped 1 previous similar message [ 3786.607396] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3789.361472] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3789.883135] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3797.649848] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 3799.242652] Lustre: DEBUG MARKER: test_70f failing OST 3 times [ 3799.973536] Lustre: Failing over lustre-OST0000 [ 3799.989760] Lustre: server umount lustre-OST0000 complete [ 3801.179755] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3807.006583] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.201.30@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3807.014850] LustreError: Skipped 7 previous similar messages [ 3814.068666] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3816.790544] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3817.243593] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3824.962967] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 3826.553325] Lustre: DEBUG MARKER: test_70f failing OST 4 times [ 3827.140471] Lustre: Failing over lustre-OST0000 [ 3827.150963] Lustre: server umount lustre-OST0000 complete [ 3840.786896] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3843.589638] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3844.022839] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3851.735644] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 3853.295630] Lustre: DEBUG MARKER: test_70f failing OST 5 times [ 3853.988585] Lustre: Failing over lustre-OST0000 [ 3854.003754] Lustre: server umount lustre-OST0000 complete [ 3867.819880] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3870.523079] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3870.937082] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3878.932574] Lustre: DEBUG MARKER: == replay-single test 71a: mkdir/rmdir striped dir with 2 mdts recovery ========================================================== 16:43:51 (1713300231) [ 3879.479859] Lustre: DEBUG MARKER: SKIP: replay-single test_71a needs >= 2 MDTs [ 3882.297080] Lustre: DEBUG MARKER: == replay-single test 73a: open(O_CREAT), unlink, replay, reconnect before open replay, close ========================================================== 16:43:54 (1713300234) [ 3883.764971] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3884.548379] Lustre: Failing over lustre-MDT0000 [ 3884.679567] Lustre: server umount 
lustre-MDT0000 complete [ 3897.153114] Lustre: *** cfs_fail_loc=302, val=2147483648*** [ 3897.900980] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3913.163836] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnected, waiting for 1 clients in recovery for 0:53 [ 3913.199575] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:11428 to 0x280000400:11457) [ 3913.199577] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:11558 to 0x240000400:11585) [ 3913.752424] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3914.091184] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3918.021390] Lustre: DEBUG MARKER: == replay-single test 73b: open(O_CREAT), unlink, replay, reconnect at open_replay reply, close ========================================================== 16:44:30 (1713300270) [ 3918.929047] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3919.578380] Lustre: Failing over lustre-MDT0000 [ 3919.666590] Lustre: server umount lustre-MDT0000 complete [ 3932.216784] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3932.219979] LustreError: 14063:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009963ce00 x1796521491304000/t382252089347(382252089347) o101->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:690/0 lens 592/608 e 0 to 0 dl 1713300295 ref 1 fl Interpret:/204/0 rc 301/0 job:'multiop.0' uid:0 gid:0 [ 3932.725277] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3948.238173] Lustre: lustre-MDT0000: Client a6006888-f24c-4636-9a0e-7a69e8db8c11 (at 192.168.201.30@tcp) reconnected, waiting for 1 clients in recovery for 0:53 [ 3948.241728] Lustre: 14062:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880086ed8700 x1796521491304000/t382252089347(382252089347) o101->a6006888-f24c-4636-9a0e-7a69e8db8c11@192.168.201.30@tcp:706/0 lens 592/3488 e 0 to 0 dl 1713300311 ref 1 fl Interpret:/206/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 3948.266348] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:11558 to 0x240000400:11617) [ 3948.266350] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:11459 to 0x280000400:11489) [ 3948.738467] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3949.071245] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3953.028631] Lustre: DEBUG MARKER: == replay-single test 74: Ensure applications don't fail waiting for OST recovery ========================================================== 16:45:05 (1713300305) [ 3953.611450] Lustre: Failing over lustre-OST0000 [ 3953.632979] Lustre: server umount lustre-OST0000 complete [ 3954.563438] Lustre: Failing over lustre-MDT0000 [ 3954.664925] Lustre: server umount lustre-MDT0000 complete [ 3966.212447] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). 
If you are running an HA pair check that the target is mounted on the other server. [ 3966.217541] LustreError: Skipped 12 previous similar messages [ 3966.247346] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:11459 to 0x280000400:11521) [ 3966.900504] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3969.177205] Lustre: lustre-OST0000: Denying connection for new client 96b99ed2-1802-44dd-b17c-d30be2b41983 (at 192.168.201.30@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 3969.180934] Lustre: Skipped 10 previous similar messages [ 3969.541257] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3970.021612] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:11558 to 0x240000400:11649) [ 3975.052679] Lustre: DEBUG MARKER: == replay-single test 80a: DNE: create remote dir, drop update rep from MDT0, fail MDT0 ========================================================== 16:45:27 (1713300327) [ 3975.360328] Lustre: DEBUG MARKER: SKIP: replay-single test_80a needs >= 2 MDTs [ 3977.072039] Lustre: DEBUG MARKER: == replay-single test 80b: DNE: create remote dir, drop update rep from MDT0, fail MDT1 ========================================================== 16:45:29 (1713300329) [ 3977.404131] Lustre: DEBUG MARKER: SKIP: replay-single test_80b needs >= 2 MDTs [ 3979.116456] Lustre: DEBUG MARKER: == replay-single test 80c: DNE: create remote dir, drop update rep from MDT1, fail MDT[0,1] ========================================================== 16:45:31 (1713300331) [ 3979.441576] Lustre: DEBUG MARKER: SKIP: replay-single test_80c needs >= 2 MDTs [ 3981.163970] Lustre: DEBUG MARKER: == replay-single test 80d: DNE: create remote dir, drop update rep from MDT1, fail 2 MDTs ========================================================== 16:45:33 (1713300333) [ 3981.477917] Lustre: DEBUG MARKER: SKIP: replay-single test_80d needs >= 2 MDTs [ 3983.174882] Lustre: DEBUG MARKER: == replay-single test 80e: DNE: create remote dir, drop MDT1 rep, fail MDT0 ========================================================== 16:45:35 (1713300335) [ 3983.485027] Lustre: DEBUG MARKER: SKIP: replay-single test_80e needs >= 2 MDTs [ 3985.176231] Lustre: DEBUG MARKER: == replay-single test 80f: DNE: create remote dir, drop MDT1 rep, fail MDT1 ========================================================== 16:45:37 (1713300337) [ 3985.498534] Lustre: DEBUG MARKER: SKIP: replay-single test_80f needs >= 2 MDTs [ 3987.165287] Lustre: DEBUG MARKER: == replay-single test 80g: DNE: create remote dir, drop MDT1 rep, fail MDT0, then MDT1 ========================================================== 16:45:39 (1713300339) [ 3987.473170] Lustre: DEBUG MARKER: SKIP: replay-single test_80g needs >= 2 MDTs [ 3989.152750] Lustre: DEBUG MARKER: == replay-single test 80h: DNE: create remote dir, drop MDT1 rep, fail 2 MDTs ========================================================== 16:45:41 (1713300341) [ 3989.472010] Lustre: DEBUG MARKER: SKIP: replay-single test_80h needs >= 2 MDTs [ 3991.158821] Lustre: DEBUG MARKER: == replay-single test 81a: DNE: unlink remote dir, drop MDT0 update rep, fail MDT1 ========================================================== 16:45:43 (1713300343) [ 3991.475099] 
Lustre: DEBUG MARKER: SKIP: replay-single test_81a needs >= 2 MDTs [ 3993.171133] Lustre: DEBUG MARKER: == replay-single test 81b: DNE: unlink remote dir, drop MDT0 update reply, fail MDT0 ========================================================== 16:45:45 (1713300345) [ 3993.474735] Lustre: DEBUG MARKER: SKIP: replay-single test_81b needs >= 2 MDTs [ 3996.064453] Lustre: DEBUG MARKER: == replay-single test 81c: DNE: unlink remote dir, drop MDT0 update reply, fail MDT0,MDT1 ========================================================== 16:45:48 (1713300348) [ 3996.606234] Lustre: DEBUG MARKER: SKIP: replay-single test_81c needs >= 2 MDTs [ 3999.425682] Lustre: DEBUG MARKER: == replay-single test 81d: DNE: unlink remote dir, drop MDT0 update reply, fail 2 MDTs ========================================================== 16:45:51 (1713300351) [ 3999.896258] Lustre: DEBUG MARKER: SKIP: replay-single test_81d needs >= 2 MDTs [ 4002.750257] Lustre: DEBUG MARKER: == replay-single test 81e: DNE: unlink remote dir, drop MDT1 req reply, fail MDT0 ========================================================== 16:45:55 (1713300355) [ 4003.312763] Lustre: DEBUG MARKER: SKIP: replay-single test_81e needs >= 2 MDTs [ 4006.129064] Lustre: DEBUG MARKER: == replay-single test 81f: DNE: unlink remote dir, drop MDT1 req reply, fail MDT1 ========================================================== 16:45:58 (1713300358) [ 4006.684053] Lustre: DEBUG MARKER: SKIP: replay-single test_81f needs >= 2 MDTs [ 4009.521123] Lustre: DEBUG MARKER: == replay-single test 81g: DNE: unlink remote dir, drop req reply, fail M0, then M1 ========================================================== 16:46:01 (1713300361) [ 4010.058460] Lustre: DEBUG MARKER: SKIP: replay-single test_81g needs >= 2 MDTs [ 4012.834827] Lustre: DEBUG MARKER: == replay-single test 81h: DNE: unlink remote dir, drop request reply, fail 2 MDTs ========================================================== 16:46:05 (1713300365) [ 4013.389390] Lustre: DEBUG MARKER: SKIP: replay-single test_81h needs >= 2 MDTs [ 4016.172695] Lustre: DEBUG MARKER: == replay-single test 84a: stale open during export disconnect ========================================================== 16:46:08 (1713300368) [ 4016.818428] Lustre: 27801:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 96b99ed2-1802-44dd-b17c-d30be2b41983 at administrative request [ 4022.341378] Lustre: DEBUG MARKER: == replay-single test 85a: check the cancellation of unused locks during recovery(IBITS) ========================================================== 16:46:14 (1713300374) [ 4024.606518] Lustre: Failing over lustre-MDT0000 [ 4024.750340] Lustre: server umount lustre-MDT0000 complete [ 4037.803605] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:11573 to 0x280000400:11617) [ 4037.803612] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:11701 to 0x240000400:11745) [ 4038.285150] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4041.025755] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4041.557635] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4046.766353] Lustre: DEBUG MARKER: == replay-single test 85b: check the cancellation of unused locks during
recovery(EXTENT) ========================================================== 16:46:39 (1713300399) [ 4051.726616] Lustre: Failing over lustre-OST0000 [ 4051.740546] Lustre: server umount lustre-OST0000 complete [ 4052.171366] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 4064.478317] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4066.680847] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4067.040508] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4072.701441] Lustre: DEBUG MARKER: == replay-single test 86: umount server after clear nid_stats should not hit LBUG ========================================================== 16:47:05 (1713300425) [ 4073.901904] Lustre: Failing over lustre-MDT0000 [ 4074.034091] Lustre: server umount lustre-MDT0000 complete [ 4075.625978] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:11846 to 0x240000400:11873) [ 4075.625980] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:11573 to 0x280000400:11649) [ 4076.384488] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4081.509579] Lustre: DEBUG MARKER: == replay-single test 87a: write replay ================== 16:47:13 (1713300433) [ 4083.178359] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4084.144445] Lustre: Failing over lustre-OST0000 [ 4084.161520] Lustre: server umount lustre-OST0000 complete [ 4085.611378] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 4095.627388] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 4095.631023] LustreError: Skipped 11 previous similar messages [ 4097.142182] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4099.935179] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4100.493417] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4104.693909] Lustre: DEBUG MARKER: == replay-single test 87b: write replay with changed data (checksum resend) ========================================================== 16:47:37 (1713300457) [ 4105.834514] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4107.777432] Lustre: Failing over lustre-OST0000 [ 4107.795459] Lustre: server umount lustre-OST0000 complete [ 4121.748358] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4122.033286] LustreError: 168-f: lustre-OST0000: BAD WRITE CHECKSUM: from 12345-192.168.201.30@tcp inode [0x20002da91:0x5:0x0] object 0x240000400:11875 extent [0-1048575]: client csum 3f34d843, server csum 558892d5 [ 4124.628521] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4125.054606] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4130.789267] Lustre: DEBUG MARKER: == replay-single test 88: MDS should not assign same objid to different files ========================================================== 16:48:03 (1713300483) [ 4132.118071] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4133.176692] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4134.690468] Lustre: Failing over lustre-MDT0000 [ 4134.792684] Lustre: server umount lustre-MDT0000 complete [ 4136.025942] LustreError: 6787:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713300488 with bad export cookie 15360854212991826379 [ 4146.027043] Lustre: Failing over lustre-OST0000 [ 4146.045290] Lustre: server umount lustre-OST0000 complete [ 4161.032522] LustreError: 3023:0:(client.c:1291:ptlrpc_import_delay_req()) @@@ invalidate in flight req@ffff880087fdd180 x1796521472626240/t0(0) o250->MGC192.168.201.130@tcp@0@lo:26/25 lens 520/544 e 0 to 0 dl 0 ref 1 fl Rpc:NQU/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 4162.337770] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4163.370894] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:11573 to 0x280000400:11681) [ 4176.800613] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4185.473384] Lustre: DEBUG MARKER: == replay-single test 89: no disk space leak on late ost connection ========================================================== 16:48:57 (1713300537) [ 4195.297407] Lustre: Failing over lustre-OST0000 [ 4195.313631] Lustre: server umount lustre-OST0000 complete [ 4196.817319] Lustre: Failing over lustre-MDT0000 [ 4196.942320] Lustre: server umount lustre-MDT0000 complete [ 4210.900413] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl 
super lfsck all [ 4211.752977] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:11573 to 0x280000400:11713) [ 4211.831747] LustreError: 9889:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 192.168.201.30@tcp arrived at 1713300563 with bad export cookie 15360854212991831601 [ 4211.845055] LustreError: 9889:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 4215.041339] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4215.882222] Lustre: lustre-OST0000: Denying connection for new client ac541b09-67cb-4a7a-9531-c322d5205a8f (at 192.168.201.30@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 1:08 [ 4235.918744] Lustre: lustre-OST0000: Denying connection for new client ac541b09-67cb-4a7a-9531-c322d5205a8f (at 192.168.201.30@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 0:48 [ 4235.928730] Lustre: Skipped 3 previous similar messages [ 4270.974779] Lustre: lustre-OST0000: Denying connection for new client ac541b09-67cb-4a7a-9531-c322d5205a8f (at 192.168.201.30@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 0:13 [ 4270.984690] Lustre: Skipped 6 previous similar messages [ 4284.668185] Lustre: lustre-OST0000: recovery is timed out, evict stale exports [ 4284.671851] Lustre: 15453:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-OST0000: disconnect stale client d32a3fbb-f196-4a7a-999d-a8b49b31239f@ [ 4284.679461] Lustre: lustre-OST0000: disconnecting 1 stale clients [ 4284.702547] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.201.130@tcp (at 0@lo) [ 4284.702591] Lustre: lustre-OST0000: Recovery over after 1:10, of 2 clients 1 recovered and 1 was evicted. 
[ 4284.702593] Lustre: Skipped 15 previous similar messages [ 4284.714806] Lustre: Skipped 22 previous similar messages [ 4284.717951] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:11924 to 0x240000400:11945) [ 4286.599332] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 67 sec [ 4295.921364] Lustre: DEBUG MARKER: free_before: 7517184 free_after: 7517184 [ 4300.130956] Lustre: DEBUG MARKER: == replay-single test 90: lfs find identifies the missing striped file segments ========================================================== 16:50:52 (1713300652) [ 4301.468860] Lustre: Failing over lustre-OST0000 [ 4301.481752] Lustre: server umount lustre-OST0000 complete [ 4304.747585] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 4304.752354] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4304.759285] Lustre: Skipped 20 previous similar messages [ 4314.224318] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 4314.229288] Lustre: Skipped 18 previous similar messages [ 4314.234728] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 4314.238903] Lustre: Skipped 16 previous similar messages [ 4314.239584] mount.lustre (18394) used greatest stack depth: 9608 bytes left [ 4315.927597] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4315.952573] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 4315.957419] Lustre: Skipped 16 previous similar messages [ 4321.884950] Lustre: DEBUG MARKER: == replay-single test 93a: replay + reconnect ============ 16:51:14 (1713300674) [ 4323.324901] Lustre: Failing over lustre-OST0000 [ 4323.356785] Lustre: server umount lustre-OST0000 complete [ 4337.142208] LustreError: 20687:0:(ldlm_lib.c:2829:target_recovery_thread()) cfs_fail_timeout id 715 sleeping for 40000ms [ 4337.768234] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4343.093088] Lustre: *** cfs_fail_loc=715, val=40*** [ 4353.107841] Lustre: lustre-OST0000: Client ac541b09-67cb-4a7a-9531-c322d5205a8f (at 192.168.201.30@tcp) reconnected, waiting for 2 clients in recovery for 0:52 [ 4353.140202] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713300689/real 1713300689] req@ffff880085ac4e00 x1796521472654016/t0(0) o400->lustre-OST0000-osc-MDT0000@0@lo:28/4 lens 224/224 e 0 to 1 dl 1713300705 ref 1 fl Rpc:XQr/2c0/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' uid:0 gid:0 [ 4353.154965] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 24 previous similar messages [ 4359.131116] Lustre: *** cfs_fail_loc=715, val=40*** [ 4359.134130] Lustre: Skipped 1 previous similar message [ 4369.116430] Lustre: lustre-OST0000: Client ac541b09-67cb-4a7a-9531-c322d5205a8f (at 192.168.201.30@tcp) reconnected, waiting for 2 clients in recovery for 0:36 [ 4369.123275] Lustre: Skipped 1 previous similar message [ 4375.131127] Lustre: *** cfs_fail_loc=715, val=40*** [ 4375.132680] Lustre: Skipped 1 previous similar message [ 4377.147108] LustreError: 
20687:0:(ldlm_lib.c:2829:target_recovery_thread()) cfs_fail_timeout id 715 awake [ 4377.715009] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4378.085783] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4382.734307] Lustre: DEBUG MARKER: == replay-single test 93b: replay + reconnect on mds ===== 16:52:15 (1713300735) [ 4383.878836] Lustre: Failing over lustre-MDT0000 [ 4383.998340] Lustre: server umount lustre-MDT0000 complete [ 4395.887611] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4395.890248] LustreError: Skipped 7 previous similar messages [ 4396.841685] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4398.616518] LustreError: 23251:0:(ldlm_lib.c:2829:target_recovery_thread()) cfs_fail_timeout id 715 sleeping for 80000ms [ 4404.635239] Lustre: *** cfs_fail_loc=715, val=80*** [ 4404.638041] Lustre: Skipped 1 previous similar message [ 4414.202490] Lustre: lustre-MDT0000: Client ac541b09-67cb-4a7a-9531-c322d5205a8f (at 192.168.201.30@tcp) reconnected, waiting for 1 clients in recovery for 0:53 [ 4414.211086] Lustre: Skipped 1 previous similar message [ 4420.219198] Lustre: *** cfs_fail_loc=715, val=80*** [ 4430.216560] Lustre: lustre-MDT0000: Client ac541b09-67cb-4a7a-9531-c322d5205a8f (at 192.168.201.30@tcp) reconnected, waiting for 1 clients in recovery for 0:37 [ 4436.235123] Lustre: *** cfs_fail_loc=715, val=80*** [ 4446.225366] Lustre: lustre-MDT0000: Client ac541b09-67cb-4a7a-9531-c322d5205a8f (at 192.168.201.30@tcp) reconnected, waiting for 1 clients in recovery for 0:21 [ 4452.251195] Lustre: *** cfs_fail_loc=715, val=80*** [ 4478.238622] Lustre: lustre-MDT0000: Recovery already passed deadline 0:10. If you do not want to wait more, you may force target eviction via 'lctl --device lustre-MDT0000 abort_recovery'.
[ 4478.623157] LustreError: 23251:0:(ldlm_lib.c:2829:target_recovery_thread()) cfs_fail_timeout id 715 awake [ 4478.640113] Lustre: 23251:0:(ldlm_lib.c:2874:target_recovery_thread()) too long recovery - read logs [ 4478.645033] LustreError: dumping log to /tmp/lustre-log.1713300830.23251 [ 4478.726050] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:11959 to 0x240000400:11977) [ 4478.727382] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:11726 to 0x280000400:11745) [ 4479.473076] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4480.043199] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4485.729279] Lustre: DEBUG MARKER: == replay-single test 100a: DNE: create striped dir, drop update rep from MDT1, fail MDT1 ========================================================== 16:53:58 (1713300838) [ 4486.262279] Lustre: DEBUG MARKER: SKIP: replay-single test_100a needs >= 2 MDTs [ 4489.107817] Lustre: DEBUG MARKER: == replay-single test 100b: DNE: create striped dir, fail MDT0 ========================================================== 16:54:01 (1713300841) [ 4489.655932] Lustre: DEBUG MARKER: SKIP: replay-single test_100b needs >= 2 MDTs [ 4491.837742] Lustre: DEBUG MARKER: == replay-single test 100c: DNE: create striped dir, abort_recov_mdt mds2 ========================================================== 16:54:04 (1713300844) [ 4492.169775] Lustre: DEBUG MARKER: SKIP: replay-single test_100c needs >= 2 MDTs [ 4495.015593] Lustre: DEBUG MARKER: == replay-single test 100d: DNE: cancel update logs upon recovery abort ========================================================== 16:54:07 (1713300847) [ 4495.550848] Lustre: DEBUG MARKER: SKIP: replay-single test_100d needs > 1 MDTs [ 4498.243741] Lustre: DEBUG MARKER: == replay-single test 100e: DNE: create striped dir on MDT0 and MDT1, fail MDT0, MDT1 ========================================================== 16:54:10 (1713300850) [ 4498.747751] Lustre: DEBUG MARKER: SKIP: replay-single test_100e needs >= 2 MDTs [ 4501.519206] Lustre: DEBUG MARKER: == replay-single test 101: Shouldn't reassign precreated objs to other files after recovery ========================================================== 16:54:13 (1713300853) [ 4502.681069] LustreError: 27532:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 4502.687764] LustreError: 27532:0:(osd_handler.c:698:osd_ro()) Skipped 11 previous similar messages [ 4503.010917] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4511.123605] Lustre: Failing over lustre-MDT0000 [ 4511.195553] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4511.198274] Lustre: Skipped 2 previous similar messages [ 4511.277729] Lustre: server umount lustre-MDT0000 complete [ 4514.324574] Lustre: lustre-MDT0000: Aborting client recovery [ 4514.325952] LustreError: 29061:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 4514.328241] Lustre: 29196:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4514.331725] Lustre: 29196:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 4514.334705] Lustre: 29196:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client 
ac541b09-67cb-4a7a-9531-c322d5205a8f@ [ 4514.339881] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 4514.353448] Lustre: lustre-MDT0000-osd: cancel update llog [0x20001a210:0x1:0x0] [ 4514.389211] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:11979 to 0x240000400:12521) [ 4514.389228] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:11726 to 0x280000400:12289) [ 4515.267728] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4542.998637] Lustre: DEBUG MARKER: == replay-single test 102a: check resend (request lost) with multiple modify RPCs in flight ========================================================== 16:54:55 (1713300895) [ 4543.539171] Lustre: *** cfs_fail_loc=159, val=0*** [ 4543.542916] Lustre: Skipped 1 previous similar message [ 4559.539059] Lustre: lustre-MDT0000: Client ac541b09-67cb-4a7a-9531-c322d5205a8f (at 192.168.201.30@tcp) reconnecting [ 4559.543263] Lustre: Skipped 2 previous similar messages [ 4563.785637] Lustre: DEBUG MARKER: == replay-single test 102b: check resend (reply lost) with multiple modify RPCs in flight ========================================================== 16:55:16 (1713300916) [ 4564.216802] Lustre: *** cfs_fail_loc=15a, val=0*** [ 4564.221085] Lustre: Skipped 1 previous similar message [ 4580.220379] Lustre: 29068:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012c4d1f80 x1796521492886656/t416611831802(0) o36->ac541b09-67cb-4a7a-9531-c322d5205a8f@192.168.201.30@tcp:583/0 lens 488/3152 e 0 to 0 dl 1713300943 ref 1 fl Interpret:/202/0 rc 0/0 job:'chmod.0' uid:0 gid:0 [ 4580.233301] Lustre: 29068:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 6 previous similar messages [ 4584.515486] Lustre: DEBUG MARKER: == replay-single test 102c: check replay w/o reconstruction with multiple mod RPCs in flight ========================================================== 16:55:36 (1713300936) [ 4586.034430] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4588.476640] Lustre: Failing over lustre-MDT0000 [ 4588.631465] Lustre: server umount lustre-MDT0000 complete [ 4602.245616] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4602.397632] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:13033 to 0x240000400:13065) [ 4602.398319] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:12800 to 0x280000400:12833) [ 4604.876940] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4605.229915] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4610.915165] Lustre: DEBUG MARKER: == replay-single test 102d: check replay [ 4613.703194] Lustre: Failing over lustre-MDT0000 [ 4613.822751] Lustre: server umount lustre-MDT0000 complete [ 4627.221605] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4627.447230] Lustre: 4806:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88008dce8380 x1796521492911232/t420906795057(0) 
o36->ac541b09-67cb-4a7a-9531-c322d5205a8f@192.168.201.30@tcp:630/0 lens 488/3152 e 0 to 0 dl 1713300990 ref 1 fl Interpret:/202/0 rc 0/0 job:'chmod.0' uid:0 gid:0
[ 4627.457626] Lustre: 4806:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message
[ 4627.460964] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:12837 to 0x280000400:12865)
[ 4627.461759] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:13070 to 0x240000400:13097)
[ 4629.980635] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4630.548171] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4636.457205] Lustre: DEBUG MARKER: == replay-single test 103: Check otr_next_id overflow ==== 16:56:28 (1713300988)
[ 4637.607377] Lustre: Failing over lustre-MDT0000
[ 4637.696903] Lustre: server umount lustre-MDT0000 complete
[ 4649.957728] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 4651.761435] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:13113 to 0x240000400:13129)
[ 4651.761462] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:12881 to 0x280000400:12897)
[ 4652.607775] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4653.171658] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4658.892360] Lustre: DEBUG MARKER: == replay-single test 110a: DNE: create striped dir, fail MDT1 ========================================================== 16:56:51 (1713301011)
[ 4659.428403] Lustre: DEBUG MARKER: SKIP: replay-single test_110a needs >= 2 MDTs
[ 4662.348343] Lustre: DEBUG MARKER: == replay-single test 110b: DNE: create striped dir, fail MDT1 and client ========================================================== 16:56:54 (1713301014)
[ 4662.916040] Lustre: DEBUG MARKER: SKIP: replay-single test_110b needs >= 2 MDTs
[ 4665.555791] Lustre: DEBUG MARKER: == replay-single test 110c: DNE: create striped dir, fail MDT2 ========================================================== 16:56:58 (1713301018)
[ 4666.116600] Lustre: DEBUG MARKER: SKIP: replay-single test_110c needs >= 2 MDTs
[ 4669.012605] Lustre: DEBUG MARKER: == replay-single test 110d: DNE: create striped dir, fail MDT2 and client ========================================================== 16:57:01 (1713301021)
[ 4669.576594] Lustre: DEBUG MARKER: SKIP: replay-single test_110d needs >= 2 MDTs
[ 4672.258419] Lustre: DEBUG MARKER: == replay-single test 110e: DNE: create striped dir, uncommit on MDT2, fail client/MDT1/MDT2 ========================================================== 16:57:04 (1713301024)
[ 4672.691753] Lustre: DEBUG MARKER: SKIP: replay-single test_110e needs >= 2 MDTs
[ 4674.145790] Lustre: DEBUG MARKER: SKIP: replay-single test_110f skipping excluded test 110f
[ 4676.010037] Lustre: DEBUG MARKER: == replay-single test 110g: DNE: create striped dir, uncommit on MDT1, fail client/MDT1/MDT2 ========================================================== 16:57:08 (1713301028)
[ 4676.581100] Lustre: DEBUG MARKER: SKIP: replay-single test_110g needs >= 2 MDTs
[ 4679.463926] Lustre: DEBUG MARKER: == replay-single test 111a: DNE: unlink striped dir, fail MDT1 ========================================================== 16:57:11 (1713301031)
[ 4680.013464] Lustre: DEBUG MARKER: SKIP: replay-single test_111a needs >= 2 MDTs
[ 4682.875325] Lustre: DEBUG MARKER: == replay-single test 111b: DNE: unlink striped dir, fail MDT2 ========================================================== 16:57:15 (1713301035)
[ 4683.445138] Lustre: DEBUG MARKER: SKIP: replay-single test_111b needs >= 2 MDTs
[ 4686.218892] Lustre: DEBUG MARKER: == replay-single test 111c: DNE: unlink striped dir, uncommit on MDT1, fail client/MDT1/MDT2 ========================================================== 16:57:18 (1713301038)
[ 4686.760048] Lustre: DEBUG MARKER: SKIP: replay-single test_111c needs >= 2 MDTs
[ 4689.627184] Lustre: DEBUG MARKER: == replay-single test 111d: DNE: unlink striped dir, uncommit on MDT2, fail client/MDT1/MDT2 ========================================================== 16:57:22 (1713301042)
[ 4690.113723] Lustre: DEBUG MARKER: SKIP: replay-single test_111d needs >= 2 MDTs
[ 4692.957897] Lustre: DEBUG MARKER: == replay-single test 111e: DNE: unlink striped dir, uncommit on MDT2, fail MDT1/MDT2 ========================================================== 16:57:25 (1713301045)
[ 4693.519375] Lustre: DEBUG MARKER: SKIP: replay-single test_111e needs >= 2 MDTs
[ 4696.347401] Lustre: DEBUG MARKER: == replay-single test 111f: DNE: unlink striped dir, uncommit on MDT1, fail MDT1/MDT2 ========================================================== 16:57:28 (1713301048)
[ 4696.881704] Lustre: DEBUG MARKER: SKIP: replay-single test_111f needs >= 2 MDTs
[ 4699.606632] Lustre: DEBUG MARKER: == replay-single test 111g: DNE: unlink striped dir, fail MDT1/MDT2 ========================================================== 16:57:32 (1713301052)
[ 4700.113720] Lustre: DEBUG MARKER: SKIP: replay-single test_111g needs >= 2 MDTs
[ 4702.646323] Lustre: DEBUG MARKER: == replay-single test 112a: DNE: cross MDT rename, fail MDT1 ========================================================== 16:57:35 (1713301055)
[ 4703.195510] Lustre: DEBUG MARKER: SKIP: replay-single test_112a needs >= 4 MDTs
[ 4705.868931] Lustre: DEBUG MARKER: == replay-single test 112b: DNE: cross MDT rename, fail MDT2 ========================================================== 16:57:38 (1713301058)
[ 4706.364361] Lustre: DEBUG MARKER: SKIP: replay-single test_112b needs >= 4 MDTs
[ 4708.842210] Lustre: DEBUG MARKER: == replay-single test 112c: DNE: cross MDT rename, fail MDT3 ========================================================== 16:57:41 (1713301061)
[ 4709.350072] Lustre: DEBUG MARKER: SKIP: replay-single test_112c needs >= 4 MDTs
[ 4712.189885] Lustre: DEBUG MARKER: == replay-single test 112d: DNE: cross MDT rename, fail MDT4 ========================================================== 16:57:44 (1713301064)
[ 4712.752305] Lustre: DEBUG MARKER: SKIP: replay-single test_112d needs >= 4 MDTs
[ 4715.432996] Lustre: DEBUG MARKER: == replay-single test 112e: DNE: cross MDT rename, fail MDT1 and MDT2 ========================================================== 16:57:47 (1713301067)
[ 4715.845933] Lustre: DEBUG MARKER: SKIP: replay-single test_112e needs >= 4 MDTs
[ 4718.706810] Lustre: DEBUG MARKER: == replay-single test 112f: DNE: cross MDT rename, fail MDT1 and MDT3 ========================================================== 16:57:51 (1713301071)
[ 4719.248232] Lustre: DEBUG MARKER: SKIP: replay-single test_112f needs >= 4 MDTs
[ 4721.990817] Lustre: DEBUG MARKER: == replay-single test 112g: DNE: cross MDT rename, fail MDT1 and MDT4 ========================================================== 16:57:54 (1713301074)
[ 4722.508659] Lustre: DEBUG MARKER: SKIP: replay-single test_112g needs >= 4 MDTs
[ 4724.978070] Lustre: DEBUG MARKER: == replay-single test 112h: DNE: cross MDT rename, fail MDT2 and MDT3 ========================================================== 16:57:57 (1713301077)
[ 4725.324688] Lustre: DEBUG MARKER: SKIP: replay-single test_112h needs >= 4 MDTs
[ 4727.940679] Lustre: DEBUG MARKER: == replay-single test 112i: DNE: cross MDT rename, fail MDT2 and MDT4 ========================================================== 16:58:00 (1713301080)
[ 4728.427585] Lustre: DEBUG MARKER: SKIP: replay-single test_112i needs >= 4 MDTs
[ 4731.208369] Lustre: DEBUG MARKER: == replay-single test 112j: DNE: cross MDT rename, fail MDT3 and MDT4 ========================================================== 16:58:03 (1713301083)
[ 4731.723133] Lustre: DEBUG MARKER: SKIP: replay-single test_112j needs >= 4 MDTs
[ 4734.419095] Lustre: DEBUG MARKER: == replay-single test 112k: DNE: cross MDT rename, fail MDT1,MDT2,MDT3 ========================================================== 16:58:06 (1713301086)
[ 4734.938847] Lustre: DEBUG MARKER: SKIP: replay-single test_112k needs >= 4 MDTs
[ 4737.449610] Lustre: DEBUG MARKER: == replay-single test 112l: DNE: cross MDT rename, fail MDT1,MDT2,MDT4 ========================================================== 16:58:09 (1713301089)
[ 4737.978806] Lustre: DEBUG MARKER: SKIP: replay-single test_112l needs >= 4 MDTs
[ 4740.781286] Lustre: DEBUG MARKER: == replay-single test 112m: DNE: cross MDT rename, fail MDT1,MDT3,MDT4 ========================================================== 16:58:13 (1713301093)
[ 4741.297675] Lustre: DEBUG MARKER: SKIP: replay-single test_112m needs >= 4 MDTs
[ 4744.051515] Lustre: DEBUG MARKER: == replay-single test 112n: DNE: cross MDT rename, fail MDT2,MDT3,MDT4 ========================================================== 16:58:16 (1713301096)
[ 4744.538535] Lustre: DEBUG MARKER: SKIP: replay-single test_112n needs >= 4 MDTs
[ 4746.586613] Lustre: DEBUG MARKER: == replay-single test 115: failover for create/unlink striped directory ========================================================== 16:58:19 (1713301099)
[ 4747.113886] Lustre: DEBUG MARKER: SKIP: replay-single test_115 needs >= 2 MDTs
[ 4749.216822] Lustre: DEBUG MARKER: == replay-single test 116a: large update log master MDT recovery ========================================================== 16:58:21 (1713301101)
[ 4749.559413] Lustre: DEBUG MARKER: SKIP: replay-single test_116a needs >= 2 MDTs
[ 4751.620487] Lustre: DEBUG MARKER: == replay-single test 116b: large update log slave MDT recovery ========================================================== 16:58:24 (1713301104)
[ 4751.972494] Lustre: DEBUG MARKER: SKIP: replay-single test_116b needs >= 2 MDTs
[ 4754.619369] Lustre: DEBUG MARKER: == replay-single test 117: DNE: cross MDT unlink, fail MDT1 and MDT2 ========================================================== 16:58:27 (1713301107)
[ 4755.174828] Lustre: DEBUG MARKER: SKIP: replay-single test_117 needs >= 4 MDTs
[ 4757.706884] Lustre: DEBUG MARKER: == replay-single test 118: invalidate osp update will not cause update log corruption ========================================================== 16:58:30 (1713301110)
[ 4758.077105] Lustre: DEBUG MARKER: SKIP: replay-single test_118 needs >= 2 MDTs
[ 4760.830431] Lustre: DEBUG MARKER: == replay-single test 119: timeout of normal replay does not cause DNE replay fails ========================================================== 16:58:33 (1713301113)
[ 4761.371597] Lustre: DEBUG MARKER: SKIP: replay-single test_119 needs >= 2 MDTs
[ 4763.609519] Lustre: DEBUG MARKER: == replay-single test 120: DNE fail abort should stop both normal and DNE replay ========================================================== 16:58:35 (1713301115)
[ 4764.171473] Lustre: DEBUG MARKER: SKIP: replay-single test_120 needs >= 2 MDTs
[ 4767.001891] Lustre: DEBUG MARKER: == replay-single test 121: lock replay timed out and race ========================================================== 16:58:39 (1713301119)
[ 4767.969332] Lustre: Failing over lustre-MDT0000
[ 4768.069127] Lustre: server umount lustre-MDT0000 complete
[ 4770.999376] Lustre: *** cfs_fail_loc=721, val=0***
[ 4771.001498] Lustre: Skipped 3 previous similar messages
[ 4772.252455] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 4772.334296] Lustre: *** cfs_fail_loc=721, val=0***
[ 4772.336057] Lustre: Skipped 29 previous similar messages
[ 4774.410810] Lustre: *** cfs_fail_loc=721, val=1***
[ 4774.412033] Lustre: Skipped 30 previous similar messages
[ 4781.147252] Lustre: *** cfs_fail_loc=721, val=1***
[ 4781.147253] Lustre: *** cfs_fail_loc=721, val=1***
[ 4781.147262] Lustre: Skipped 17 previous similar messages
[ 4781.150156] Lustre: Skipped 3 previous similar messages
[ 4788.657536] Lustre: lustre-MDT0000: Client ac541b09-67cb-4a7a-9531-c322d5205a8f (at 192.168.201.30@tcp) reconnected, waiting for 1 clients in recovery for 0:53
[ 4788.664447] Lustre: Skipped 1 previous similar message
[ 4788.670518] Lustre: *** cfs_fail_loc=721, val=1***
[ 4788.674334] Lustre: Skipped 17 previous similar messages
[ 4788.677632] Lustre: 27750:0:(tgt_handler.c:715:process_req_last_xid()) @@@ unexpected xid=661ed416b0f00 != exp_last_xid=661ed416b0f7f, rc = -71 req@ffff88008b22ce00 x1796521492942592/t0(0) o101->ac541b09-67cb-4a7a-9531-c322d5205a8f@192.168.201.30@tcp:0/0 lens 328/0 e 0 to 0 dl 1713301135 ref 1 fl Interpret:/240/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' uid:0 gid:0
[ 4788.690808] Lustre: 27750:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (11/5s); client may timeout req@ffff88008b22ce00 x1796521492942592/t0(0) o101->ac541b09-67cb-4a7a-9531-c322d5205a8f@192.168.201.30@tcp:0/0 lens 328/224 e 0 to 0 dl 1713301135 ref 1 fl Complete:/240/0 rc -71/-71 job:'ldlm_lock_repla.0' uid:0 gid:0
[ 4789.689867] Lustre: *** cfs_fail_loc=721, val=0***
[ 4789.692102] Lustre: Skipped 16 previous similar messages
[ 4789.713800] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:12881 to 0x280000400:12929)
[ 4789.713803] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:13131 to 0x240000400:13161)
[ 4795.012268] Lustre: DEBUG MARKER: == replay-single test 130a: DoM file create (setstripe) replay ========================================================== 16:59:07 (1713301147)
[ 4796.447261] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 4797.182463] Lustre: Failing over lustre-MDT0000
[ 4797.320917] Lustre: server umount lustre-MDT0000 complete
[ 4810.758789] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
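The repeated "cfs_fail_loc=721, val=..." lines above come from Lustre's fault-injection framework, which test 121 uses to race lock replay against a timeout. A minimal sketch of how such a fail point is typically armed from the server node (fail_loc and fail_val are standard lctl tunables; the surrounding failover step is an assumption, not taken from this log):

    # Hedged sketch: arm fail point 0x721 before forcing lock replay.
    lctl set_param fail_loc=0x721 fail_val=1
    # ... fail over the MDT so the client re-enters lock replay ...
    lctl set_param fail_loc=0    # disarm once the test step completes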
[ 4812.726844] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:13131 to 0x240000400:13193)
[ 4812.726846] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:12881 to 0x280000400:12961)
[ 4813.443844] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4813.970478] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4819.473917] Lustre: DEBUG MARKER: == replay-single test 130b: DoM file create (inherited) replay ========================================================== 16:59:32 (1713301172)
[ 4820.692757] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 4821.436564] Lustre: Failing over lustre-MDT0000
[ 4821.571965] Lustre: server umount lustre-MDT0000 complete
[ 4835.349330] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 4837.244054] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:12881 to 0x280000400:12993)
[ 4837.244057] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:13131 to 0x240000400:13225)
[ 4837.722712] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4838.066505] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4843.765473] Lustre: DEBUG MARKER: == replay-single test 131a: DoM file write lock replay === 16:59:56 (1713301196)
[ 4845.223399] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 4845.848323] Lustre: Failing over lustre-MDT0000
[ 4845.973511] Lustre: server umount lustre-MDT0000 complete
[ 4859.507831] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 4861.611599] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:13131 to 0x240000400:13257)
[ 4861.611609] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:12881 to 0x280000400:13025)
[ 4862.149659] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4862.578316] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4867.183719] Lustre: DEBUG MARKER: SKIP: replay-single test_131b skipping excluded test 131b
[ 4869.248734] Lustre: DEBUG MARKER: == replay-single test 132a: PFL new component instantiate replay ========================================================== 17:00:21 (1713301221)
[ 4870.690975] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 4871.551573] Lustre: Failing over lustre-MDT0000
[ 4871.687558] Lustre: server umount lustre-MDT0000 complete
[ 4885.223734] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 4887.292956] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted.
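The "wait_import_state_mount (FULL|IDLE)" markers that follow each failover are the client-side half of the recovery check: the test polls the MDC import until it reports a healthy state again. A rough by-hand equivalent, assuming the standard mdc.*.import parameter (the helper's exact implementation is not shown in this log):

    # Hedged sketch: poll the client's MDC import state until recovery
    # completes; the import parameter is a standard Lustre client tunable.
    while ! lctl get_param -n mdc.lustre-MDT0000-mdc-*.import 2>/dev/null |
            grep -Eq 'state: *(FULL|IDLE)'; do
        sleep 1
    done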
[ 4887.296410] Lustre: Skipped 10 previous similar messages
[ 4887.318322] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:13027 to 0x280000400:13057)
[ 4887.318327] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:13260 to 0x240000400:13289)
[ 4888.073425] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4888.680973] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4889.085147] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo)
[ 4889.088647] Lustre: Skipped 20 previous similar messages
[ 4894.652154] Lustre: DEBUG MARKER: == replay-single test 133: check resend of ongoing requests for lwp during failover ========================================================== 17:00:47 (1713301247)
[ 4895.222131] Lustre: DEBUG MARKER: SKIP: replay-single test_133 needs >= 2 MDTs
[ 4897.808773] Lustre: DEBUG MARKER: == replay-single test 134: replay creation of a file created in a pool ========================================================== 17:00:50 (1713301250)
[ 4904.816956] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 4905.588554] Lustre: Failing over lustre-MDT0000
[ 4905.741206] Lustre: server umount lustre-MDT0000 complete
[ 4918.571732] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 4918.580483] Lustre: Skipped 22 previous similar messages
[ 4918.634950] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 4918.639404] Lustre: Skipped 11 previous similar messages
[ 4918.668783] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 4918.673584] Lustre: Skipped 13 previous similar messages
[ 4919.838640] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 4921.884134] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects
[ 4921.890189] Lustre: Skipped 10 previous similar messages
[ 4921.962723] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:13060 to 0x280000400:13089)
[ 4921.962726] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:13260 to 0x240000400:13321)
[ 4922.802434] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4923.381176] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4933.927418] Lustre: DEBUG MARKER: == replay-single test 135: Server failure in lock replay phase ========================================================== 17:01:26 (1713301286)
[ 4934.587393] Lustre: Failing over lustre-OST0000
[ 4934.603987] Lustre: server umount lustre-OST0000 complete
[ 4937.920412] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.201.30@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
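Messages such as "Imperative Recovery not enabled, recovery window 60-180" and "Will be in recovery for at least 1:00, or until 1 client reconnects" above describe the server-side recovery window: a restarted target waits for its known clients to reconnect and replay before serving new requests. Progress can be inspected on the server; a sketch assuming the standard recovery_status parameter (the exact field names shown are an assumption):

    # Hedged sketch: watch MDT recovery progress after a failover.
    lctl get_param mdt.lustre-MDT0000.recovery_status
    # fields of interest: status (RECOVERING/COMPLETE), connected_clients,
    # time_remaining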
[ 4937.928778] LustreError: Skipped 32 previous similar messages
[ 4938.619495] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107
[ 4948.734903] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 4951.583044] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 4952.156128] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 4956.214848] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000
[ 4957.033745] Lustre: Failing over lustre-OST0000
[ 4957.067932] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 4957.084918] Lustre: server umount lustre-OST0000 complete
[ 4959.043857] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing load_module ../libcfs/libcfs/libcfs
[ 4962.589516] Lustre: *** cfs_fail_loc=32d, val=20***
[ 4962.680526] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 4964.195205] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount REPLAY_LOCKS osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 4964.774399] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in REPLAY_LOCKS state after 0 sec
[ 4965.491507] Lustre: Failing over lustre-OST0000
[ 4965.494983] LustreError: 14932:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery
[ 4965.500032] Lustre: 14075:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 4965.504971] Lustre: 14075:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages
[ 4965.509293] LustreError: 14075:0:(ofd_obd.c:1315:ofd_iocontrol()) lustre-OST0000: iocontrol from 'tgt_recover_0' cmd=c00866c1 _IOWR('f', 193, 8) unrecognized: rc = -25
[ 4965.517281] Lustre: 14075:0:(ofd_obd.c:557:ofd_postrecov()) lustre-OST0000: auto trigger paused LFSCK failed: rc = -6
[ 4965.535699] Lustre: server umount lustre-OST0000 complete
[ 4977.607702] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing load_module ../libcfs/libcfs/libcfs
[ 4978.589162] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713301314/real 1713301314] req@ffff88013298ea00 x1796521472965184/t0(0) o400->lustre-OST0000-osc-MDT0000@0@lo:28/4 lens 224/224 e 0 to 1 dl 1713301330 ref 1 fl Rpc:XQr/2c0/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' uid:0 gid:0
[ 4978.606323] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 44 previous similar messages
[ 4980.969164] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 4983.599992] Lustre: lustre-OST0000: Not available for connect from 192.168.201.30@tcp (stopping)
[ 4988.232117] Lustre: server umount lustre-OST0000 complete
[ 4993.615649] Lustre: lustre-OST0001: Not available for connect from 192.168.201.30@tcp (stopping)
[ 4993.620525] Lustre: Skipped 1 previous similar message
[ 4993.707592] LustreError: 11-0: lustre-OST0001-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107
[ 4995.577365] Lustre: server umount lustre-OST0001 complete
[ 4998.866553] LustreError: 167-0: lustre-OST0000-osc-MDT0000: This client was evicted by lustre-OST0000; in progress operations using this service will fail.
[ 4999.381986] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 5002.664841] LustreError: 167-0: lustre-OST0001-osc-MDT0000: This client was evicted by lustre-OST0001; in progress operations using this service will fail.
[ 5002.839576] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 5008.282252] Lustre: DEBUG MARKER: == replay-single test 136: MDS to disconnect all OSPs first, then cleanup ldlm ========================================================== 17:02:40 (1713301360)
[ 5008.811686] Lustre: DEBUG MARKER: SKIP: replay-single test_136 needs > 2 MDTs
[ 5011.320470] Lustre: DEBUG MARKER: == replay-single test 200: Dropping one OBD_PING should not cause disconnect ========================================================== 17:02:43 (1713301363)
[ 5011.635556] Lustre: DEBUG MARKER: SKIP: replay-single test_200 Need remote client
[ 5012.783641] Lustre: DEBUG MARKER: == replay-single test complete, duration 4931 sec ======== 17:02:45 (1713301365)
[ 5015.942804] Lustre: Failing over lustre-MDT0000
[ 5016.113017] Lustre: server umount lustre-MDT0000 complete
[ 5028.455982] LustreError: 166-1: MGC192.168.201.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 5028.462413] LustreError: Skipped 10 previous similar messages
[ 5029.413637] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 5030.138332] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:13343 to 0x240000400:13385)
[ 5030.142485] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:13060 to 0x280000400:13121)
[ 5032.105404] Lustre: DEBUG MARKER: oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 5032.689346] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 5036.987129] Lustre: server umount lustre-MDT0000 complete
[ 5038.438828] LustreError: 6787:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713301390 with bad export cookie 15360854212992047943
[ 5048.447183] Lustre: server umount lustre-OST0000 complete
[ 5059.827623] Lustre: server umount lustre-OST0001 complete
[ 5063.395313] Lustre: DEBUG MARKER: oleg130-server.virtnet: executing unload_modules_local
[ 5064.102918] Key type lgssc unregistered
[ 5064.186750] LNet: 24181:0:(lib-ptl.c:966:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 5064.191424] LNet: Removed LNI 192.168.201.130@tcp
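Before the final unmount and module teardown above, every executed test in this run follows the same shape visible in the markers: set a replay barrier on the target, do some filesystem work, fail the target, then verify the work survives replay. In test-framework terms that is roughly the following; replay_barrier and fail match the helper names implied by the markers, while the workload lines (createmany/unlinkmany with a hypothetical $DIR/$tfile path) are illustrative assumptions:

    # Hedged sketch of the canonical replay-single test shape.
    replay_barrier mds1              # emits the "REPLAY BARRIER" marker
    createmany -o $DIR/$tfile 100    # workload that must survive replay
    fail mds1                        # umount/remount the MDT, forcing recovery
    unlinkmany $DIR/$tfile 100       # verify replayed state after recovery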