[ 0.000000] Initializing cgroup subsys cpuset [ 0.000000] Initializing cgroup subsys cpu [ 0.000000] Initializing cgroup subsys cpuacct [ 0.000000] Linux version 3.10.0-7.9-debug (green@centos7-base) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) ) #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 0.000000] Command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0 [ 0.000000] e820: BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffcdfff] usable [ 0.000000] BIOS-e820: [mem 0x00000000bffce000-0x00000000bfffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000013edfffff] usable [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] SMBIOS 2.8 present. [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-1.fc38 04/01/2014 [ 0.000000] Hypervisor detected: KVM [ 0.000000] e820: last_pfn = 0x13ee00 max_arch_pfn = 0x400000000 [ 0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC [ 0.000000] e820: last_pfn = 0xbffce max_arch_pfn = 0x400000000 [ 0.000000] found SMP MP-table at [mem 0x000f5b30-0x000f5b3f] mapped at [ffffffffff200b30] [ 0.000000] Using GB pages for direct mapping [ 0.000000] RAMDISK: [mem 0xbc2e2000-0xbffbffff] [ 0.000000] Early table checksum verification disabled [ 0.000000] ACPI: RSDP 00000000000f5950 00014 (v00 BOCHS ) [ 0.000000] ACPI: RSDT 00000000bffe1bb7 00034 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: FACP 00000000bffe1a53 00074 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: DSDT 00000000bffe0040 01A13 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: FACS 00000000bffe0000 00040 [ 0.000000] ACPI: APIC 00000000bffe1ac7 00090 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: HPET 00000000bffe1b57 00038 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: WAET 00000000bffe1b8f 00028 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] No NUMA configuration found [ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000013edfffff] [ 0.000000] NODE_DATA(0) allocated [mem 0x13e5e3000-0x13e609fff] [ 0.000000] Reserving 128MB of memory at 768MB for crashkernel (System RAM: 4077MB) [ 0.000000] kvm-clock: cpu 0, msr 1:3e592001, primary cpu clock [ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00 [ 0.000000] kvm-clock: using sched offset of 276944108 cycles [ 0.000000] Zone ranges: [ 0.000000] DMA [mem 0x00001000-0x00ffffff] [ 0.000000] DMA32 [mem 0x01000000-0xffffffff] [ 0.000000] Normal [mem 0x100000000-0x13edfffff] [ 0.000000] Movable zone start for each node [ 0.000000] Early memory node ranges [ 0.000000] node 0: [mem 0x00001000-0x0009efff] [ 0.000000] node 0: [mem 0x00100000-0xbffcdfff] [ 0.000000] node 0: [mem 0x100000000-0x13edfffff] [ 0.000000] Initmem setup node 0 [mem 0x00001000-0x13edfffff] [ 0.000000] ACPI: PM-Timer IO Port: 0x608 [ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled) [ 
0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled) [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0]) [ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) [ 0.000000] Using ACPI (MADT) for SMP configuration information [ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs [ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff] [ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff] [ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff] [ 0.000000] PM: Registered nosave memory: [mem 0xbffce000-0xbfffffff] [ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff] [ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff] [ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff] [ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff] [ 0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices [ 0.000000] Booting paravirtualized kernel on KVM [ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 [ 0.000000] percpu: Embedded 38 pages/cpu s115176 r8192 d32280 u524288 [ 0.000000] KVM setup async PF for cpu 0 [ 0.000000] kvm-stealtime: cpu 0, msr 13e2135c0 [ 0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes) [ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1027487 [ 0.000000] Policy zone: Normal [ 0.000000] Kernel command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0 [ 0.000000] audit: disabled (until reboot) [ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes) [ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100 [ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form [ 0.000000] Memory: 3820268k/5224448k available (8172k kernel code, 1049168k absent, 355012k reserved, 5773k data, 2532k init) [ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 [ 0.000000] Hierarchical RCU implementation. [ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=4. [ 0.000000] Offload RCU callbacks from all CPUs [ 0.000000] Offload RCU callbacks from CPUs: 0-3. [ 0.000000] NR_IRQS:327936 nr_irqs:456 0 [ 0.000000] Console: colour *CGA 80x25 [ 0.000000] console [ttyS1] enabled [ 0.000000] allocated 25165824 bytes of page_cgroup [ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups [ 0.000000] kmemleak: Kernel memory leak detector disabled [ 0.000000] tsc: Detected 2399.998 MHz processor [ 0.374402] Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998) [ 0.376187] pid_max: default: 32768 minimum: 301 [ 0.377703] Security Framework initialized [ 0.378747] SELinux: Initializing. 
[ 0.380719] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes) [ 0.383557] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes) [ 0.385361] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes) [ 0.386986] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes) [ 0.389237] Initializing cgroup subsys memory [ 0.390357] Initializing cgroup subsys devices [ 0.391593] Initializing cgroup subsys freezer [ 0.392681] Initializing cgroup subsys net_cls [ 0.394064] Initializing cgroup subsys blkio [ 0.395168] Initializing cgroup subsys perf_event [ 0.396361] Initializing cgroup subsys hugetlb [ 0.397630] Initializing cgroup subsys pids [ 0.398537] Initializing cgroup subsys net_prio [ 0.399592] x86/cpu: User Mode Instruction Prevention (UMIP) activated [ 0.401912] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 [ 0.402979] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0 [ 0.404121] tlb_flushall_shift: 6 [ 0.404849] FEATURE SPEC_CTRL Present [ 0.405643] FEATURE IBPB_SUPPORT Present [ 0.406575] Spectre V2 : Enabling Indirect Branch Prediction Barrier [ 0.408145] Spectre V2 : Vulnerable [ 0.408986] Speculative Store Bypass: Vulnerable [ 0.411438] debug: unmapping init [mem 0xffffffff82019000-0xffffffff8201ffff] [ 0.418677] ACPI: Core revision 20130517 [ 0.420952] ACPI: All ACPI Tables successfully acquired [ 0.422634] ftrace: allocating 30294 entries in 119 pages [ 0.470323] Enabling x2apic [ 0.470865] Enabled x2apic [ 0.471700] Switched APIC routing to physical x2apic. [ 0.474093] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 [ 0.475277] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (fam: 06, model: 3e, stepping: 04) [ 0.477354] Performance Events: IvyBridge events, full-width counters, Intel PMU driver. [ 0.479142] ... version: 2 [ 0.479879] ... bit width: 48 [ 0.480733] ... generic registers: 4 [ 0.481553] ... value mask: 0000ffffffffffff [ 0.482522] ... max period: 00007fffffffffff [ 0.483559] ... fixed-purpose events: 3 [ 0.484404] ... 
event mask: 000000070000000f [ 0.485540] KVM setup paravirtual spinlock [ 0.488250] smpboot: Booting Node 0, Processors #1[ 0.489466] kvm-clock: cpu 1, msr 1:3e592041, secondary cpu clock [ 0.491976] KVM setup async PF for cpu 1 [ 0.492977] kvm-stealtime: cpu 1, msr 13e2935c0 #2[ 0.494664] kvm-clock: cpu 2, msr 1:3e592081, secondary cpu clock [ 0.497080] KVM setup async PF for cpu 2 [ 0.497579] kvm-clock: cpu 3, msr 1:3e5920c1, secondary cpu clock #3 OK [ 0.499009] kvm-stealtime: cpu 2, msr 13e3135c0 [ 0.500094] Brought up 4 CPUs [ 0.500131] KVM setup async PF for cpu 3 [ 0.500140] kvm-stealtime: cpu 3, msr 13e3935c0 [ 0.502314] smpboot: Max logical packages: 1 [ 0.503133] smpboot: Total of 4 processors activated (19199.98 BogoMIPS) [ 0.506120] devtmpfs: initialized [ 0.506907] x86/mm: Memory block size: 128MB [ 0.510677] EVM: security.selinux [ 0.511342] EVM: security.ima [ 0.511925] EVM: security.capability [ 0.514294] atomic64 test passed for x86-64 platform with CX8 and with SSE [ 0.515839] NET: Registered protocol family 16 [ 0.517036] cpuidle: using governor haltpoll [ 0.518255] ACPI: bus type PCI registered [ 0.518996] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 [ 0.520365] PCI: Using configuration type 1 for base access [ 0.521453] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on [ 0.527753] ACPI: Added _OSI(Module Device) [ 0.528765] ACPI: Added _OSI(Processor Device) [ 0.529779] ACPI: Added _OSI(3.0 _SCP Extensions) [ 0.530720] ACPI: Added _OSI(Processor Aggregator Device) [ 0.531733] ACPI: Added _OSI(Linux-Dell-Video) [ 0.535633] ACPI: Interpreter enabled [ 0.536496] ACPI: (supports S0 S3 S4 S5) [ 0.537362] ACPI: Using IOAPIC for interrupt routing [ 0.538474] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 0.540527] ACPI: Enabled 2 GPEs in block 00 to 0F [ 0.547420] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) [ 0.548760] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI] [ 0.550121] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM [ 0.552008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
[ 0.556064] acpiphp: Slot [2] registered [ 0.556892] acpiphp: Slot [3] registered [ 0.557655] acpiphp: Slot [4] registered [ 0.558490] acpiphp: Slot [5] registered [ 0.559323] acpiphp: Slot [6] registered [ 0.560139] acpiphp: Slot [7] registered [ 0.561010] acpiphp: Slot [8] registered [ 0.561897] acpiphp: Slot [9] registered [ 0.562680] acpiphp: Slot [10] registered [ 0.563466] acpiphp: Slot [11] registered [ 0.564314] acpiphp: Slot [12] registered [ 0.565156] acpiphp: Slot [13] registered [ 0.565931] acpiphp: Slot [14] registered [ 0.566727] acpiphp: Slot [15] registered [ 0.567535] acpiphp: Slot [16] registered [ 0.568324] acpiphp: Slot [17] registered [ 0.569143] acpiphp: Slot [18] registered [ 0.570012] acpiphp: Slot [19] registered [ 0.571010] acpiphp: Slot [20] registered [ 0.571925] acpiphp: Slot [21] registered [ 0.572863] acpiphp: Slot [22] registered [ 0.573811] acpiphp: Slot [23] registered [ 0.574704] acpiphp: Slot [24] registered [ 0.575605] acpiphp: Slot [25] registered [ 0.576540] acpiphp: Slot [26] registered [ 0.577496] acpiphp: Slot [27] registered [ 0.578508] acpiphp: Slot [28] registered [ 0.579441] acpiphp: Slot [29] registered [ 0.580195] acpiphp: Slot [30] registered [ 0.581139] acpiphp: Slot [31] registered [ 0.582066] PCI host bridge to bus 0000:00 [ 0.583017] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] [ 0.584590] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] [ 0.587384] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] [ 0.590411] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] [ 0.593294] pci_bus 0000:00: root bus resource [mem 0x140000000-0x1bfffffff window] [ 0.596503] pci_bus 0000:00: root bus resource [bus 00-ff] [ 0.611031] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] [ 0.612911] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] [ 0.614352] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] [ 0.615850] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] [ 0.618255] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI [ 0.619764] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB [ 0.754570] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11) [ 0.756128] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11) [ 0.757579] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11) [ 0.759421] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11) [ 0.761002] ACPI: PCI Interrupt Link [LNKS] (IRQs *9) [ 0.764319] vgaarb: loaded [ 0.765232] SCSI subsystem initialized [ 0.766799] ACPI: bus type USB registered [ 0.768435] usbcore: registered new interface driver usbfs [ 0.769575] usbcore: registered new interface driver hub [ 0.770740] usbcore: registered new device driver usb [ 0.772146] PCI: Using ACPI for IRQ routing [ 0.773567] NetLabel: Initializing [ 0.774281] NetLabel: domain hash size = 128 [ 0.775158] NetLabel: protocols = UNLABELED CIPSOv4 [ 0.776227] NetLabel: unlabeled traffic allowed by default [ 0.777819] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 [ 0.778905] hpet0: 3 comparators, 64-bit 100.000000 MHz counter [ 0.785344] amd_nb: Cannot enumerate AMD northbridges [ 0.787183] Switched to clocksource kvm-clock [ 0.808156] pnp: PnP ACPI init [ 0.809657] ACPI: bus type PNP registered [ 0.812087] pnp: PnP ACPI: found 6 devices [ 0.813527] ACPI: bus type PNP unregistered [ 0.825084] NET: Registered protocol family 2 [ 0.827049] TCP established hash table entries: 32768 (order: 6, 262144 bytes) [ 0.829987] TCP bind hash table entries: 32768 
(order: 8, 1048576 bytes) [ 0.833194] TCP: Hash tables configured (established 32768 bind 32768) [ 0.835560] TCP: reno registered [ 0.836843] UDP hash table entries: 2048 (order: 5, 196608 bytes) [ 0.839162] UDP-Lite hash table entries: 2048 (order: 5, 196608 bytes) [ 0.841760] NET: Registered protocol family 1 [ 0.843700] RPC: Registered named UNIX socket transport module. [ 0.845260] RPC: Registered udp transport module. [ 0.846601] RPC: Registered tcp transport module. [ 0.847844] RPC: Registered tcp NFSv4.1 backchannel transport module. [ 0.849527] pci 0000:00:00.0: Limiting direct PCI/PCI transfers [ 0.851082] pci 0000:00:01.0: PIIX3: Enabling Passive Release [ 0.852334] pci 0000:00:01.0: Activating ISA DMA hang workarounds [ 0.853932] Unpacking initramfs... [ 2.102864] debug: unmapping init [mem 0xffff8800bc2e2000-0xffff8800bffbffff] [ 2.105391] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 2.106632] software IO TLB [mem 0xb82e2000-0xbc2e2000] (64MB) mapped at [ffff8800b82e2000-ffff8800bc2e1fff] [ 2.109702] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer [ 2.111259] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules [ 2.112326] RAPL PMU: hw unit of domain package 2^-0 Joules [ 2.113392] RAPL PMU: hw unit of domain dram 2^-0 Joules [ 2.115889] cryptomgr_test (51) used greatest stack depth: 14128 bytes left [ 2.116337] futex hash table entries: 1024 (order: 4, 65536 bytes) [ 2.116382] Initialise system trusted keyring [ 2.148134] HugeTLB registered 1 GB page size, pre-allocated 0 pages [ 2.149489] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 2.153865] zpool: loaded [ 2.154415] zbud: loaded [ 2.155224] VFS: Disk quotas dquot_6.6.0 [ 2.156056] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 2.157916] NFS: Registering the id_resolver key type [ 2.159274] Key type id_resolver registered [ 2.160470] Key type id_legacy registered [ 2.161649] nfs4filelayout_init: NFSv4 File Layout Driver Registering... [ 2.163582] Key type big_key registered [ 2.165258] cryptomgr_test (57) used greatest stack depth: 13968 bytes left [ 2.167049] cryptomgr_test (59) used greatest stack depth: 13664 bytes left [ 2.169017] NET: Registered protocol family 38 [ 2.169925] Key type asymmetric registered [ 2.170672] Asymmetric key parser 'x509' registered [ 2.171703] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) [ 2.173227] io scheduler noop registered [ 2.174008] io scheduler deadline registered (default) [ 2.175055] io scheduler cfq registered [ 2.175820] io scheduler mq-deadline registered [ 2.176688] io scheduler kyber registered [ 2.178979] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 2.180060] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 2.181469] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [ 2.182865] ACPI: Power Button [PWRF] [ 2.184068] GHES: HEST is not enabled! 
[ 2.232415] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10 [ 2.292740] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 11 [ 2.404021] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11 [ 2.474259] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10 [ 2.629141] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 2.660891] 00:03: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 2.687267] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.690335] Non-volatile memory driver v1.3 [ 2.691318] Linux agpgart interface v0.103 [ 2.692576] crash memory driver: version 1.1 [ 2.694593] nbd: registered device at major 43 [ 2.706759] virtio_blk virtio1: [vda] 67344 512-byte logical blocks (34.4 MB/32.8 MiB) [ 2.718004] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB) [ 2.727590] virtio_blk virtio3: [vdc] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB) [ 2.739626] virtio_blk virtio4: [vdd] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB) [ 2.751565] virtio_blk virtio5: [vde] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB) [ 2.763255] virtio_blk virtio6: [vdf] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB) [ 2.771125] rdac: device handler registered [ 2.773159] hp_sw: device handler registered [ 2.774793] emc: device handler registered [ 2.776642] libphy: Fixed MDIO Bus: probed [ 2.780428] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 2.782827] ehci-pci: EHCI PCI platform driver [ 2.784409] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 2.786420] ohci-pci: OHCI PCI platform driver [ 2.787960] uhci_hcd: USB Universal Host Controller Interface driver [ 2.790272] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 [ 2.794099] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 2.795812] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 2.797878] mousedev: PS/2 mouse device common for all mice [ 2.800486] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 [ 2.800626] rtc_cmos 00:05: RTC can wake from S4 [ 2.801560] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0 [ 2.802127] rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs [ 2.806272] hidraw: raw HID events driver (C) Jiri Kosina [ 2.806574] usbcore: registered new interface driver usbhid [ 2.806575] usbhid: USB HID core driver [ 2.806673] drop_monitor: Initializing network drop monitor service [ 2.806729] Netfilter messages via NETLINK v0.30. [ 2.806808] TCP: cubic registered [ 2.806815] Initializing XFRM netlink socket [ 2.807214] NET: Registered protocol family 10 [ 2.807749] NET: Registered protocol family 17 [ 2.807800] Key type dns_resolver registered [ 2.808343] mce: Using 10 MCE banks [ 2.808718] Loading compiled-in X.509 certificates [ 2.809996] Loaded X.509 cert 'Magrathea: Glacier signing key: e34d0e1b7fcf5b414cce75d36d8482945c781ed6' [ 2.810043] registered taskstats version 1 [ 2.813546] modprobe (70) used greatest stack depth: 13456 bytes left [ 2.816635] Key type trusted registered [ 2.822004] Key type encrypted registered [ 2.822063] IMA: No TPM chip found, activating TPM-bypass! (rc=-19) [ 2.825276] BERT: Boot Error Record Table support is disabled. Enable it by using bert_enable as kernel parameter. 
[ 2.828992] rtc_cmos 00:05: setting system clock to 2024-04-17 14:28:24 UTC (1713364104) [ 2.844747] debug: unmapping init [mem 0xffffffff81da0000-0xffffffff82018fff] [ 2.847389] Write protecting the kernel read-only data: 12288k [ 2.849815] debug: unmapping init [mem 0xffff8800017fe000-0xffff8800017fffff] [ 2.852298] debug: unmapping init [mem 0xffff880001b9b000-0xffff880001bfffff] [ 2.860619] random: systemd: uninitialized urandom read (16 bytes read) [ 2.863813] random: systemd: uninitialized urandom read (16 bytes read) [ 2.866700] random: systemd: uninitialized urandom read (16 bytes read) [ 2.872109] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) [ 2.877682] systemd[1]: Detected virtualization kvm. [ 2.880107] systemd[1]: Detected architecture x86-64. [ 2.882436] systemd[1]: Running in initial RAM disk. Welcome to CentOS Linux 7 (Core) dracut-033-572.el7 (Initramfs)! [ 2.887414] systemd[1]: No hostname configured. [ 2.888562] systemd[1]: Set hostname to . [ 2.890393] random: systemd: uninitialized urandom read (16 bytes read) [ 2.892797] systemd[1]: Initializing machine ID from random generator. [ 2.946676] dracut-rootfs-g (86) used greatest stack depth: 13264 bytes left [ 2.950995] random: systemd: uninitialized urandom read (16 bytes read) [ 2.953975] random: systemd: uninitialized urandom read (16 bytes read) [ 2.956775] random: systemd: uninitialized urandom read (16 bytes read) [ 2.958730] random: systemd: uninitialized urandom read (16 bytes read) [ 2.962414] random: systemd: uninitialized urandom read (16 bytes read) [ 2.964943] random: systemd: uninitialized urandom read (16 bytes read) [ 2.983360] systemd[1]: Reached target Local File Systems. [ OK ] Reached target Local File Systems. [ 2.988681] systemd[1]: Reached target Swap. [ OK ] Reached target Swap. [ 2.992355] systemd[1]: Reached target Timers. [ OK ] Reached target Timers. [ 2.996548] systemd[1]: Created slice Root Slice. [ OK ] Created slice Root Slice. [ 3.000741] systemd[1]: Listening on udev Kernel Socket. [ OK ] Listening on udev Kernel Socket. [ 3.005505] systemd[1]: Listening on Journal Socket. [ OK ] Listening on Journal Socket. [ 3.009836] systemd[1]: Created slice System Slice. [ OK ] Created slice System Slice. [ 3.015966] systemd[1]: Starting Load Kernel Modules... Starting Load Kernel Modules... [ 3.021422] systemd[1]: Starting Create list of required static device nodes for the current kernel... Starting Create list of required st... nodes for the current kernel... [ 3.028846] systemd[1]: Starting Setup Virtual Console... Starting Setup Virtual Console... [ 3.034230] systemd[1]: Starting Journal Service... Starting Journal Service... [ 3.039584] systemd[1]: Starting dracut cmdline hook... Starting dracut cmdline hook... [ 3.044267] systemd[1]: Listening on udev Control Socket. [ OK ] Listening on udev Control Socket. [ 3.048209] systemd[1]: Reached target Sockets. [ OK ] Reached target Sockets. [ 3.051674] systemd[1]: Reached target Slices. [ OK ] Reached target Slices. [ 3.056370] systemd[1]: Started Load Kernel Modules. [ OK ] Started Load Kernel Modules. [ 3.063031] systemd[1]: Started Create list of required static device nodes for the current kernel. [ OK ] Started Create list of required sta...ce nodes for the current kernel. [ 3.069491] systemd[1]: Started Setup Virtual Console. [ OK ] Started Setup Virtual Console. 
[ 3.073390] systemd[1]: Started Journal Service. [ OK ] Started Journal Service. Starting Create Static Device Nodes in /dev... Starting Apply Kernel Variables... [ OK ] Started Create Static Device Nodes in /dev. [ OK ] Started Apply Kernel Variables. [ 3.117257] tsc: Refined TSC clocksource calibration: 2399.975 MHz [ OK ] Started dracut cmdline hook. Starting dracut pre-udev hook...[ 3.267456] random: fast init done [ OK ] Started dracut pre-udev hook. Starting udev Kernel Device Manager... [ OK ] Started udev Kernel Device Manager. Starting dracut pre-trigger hook... [ OK ] Started dracut pre-trigger hook. Starting udev Coldplug all Devices... Mounting Configuration File System... [ OK ] Mounted Configuration File System. [ 3.580459] scsi host0: ata_piix [ 3.582986] scsi host1: ata_piix [ 3.584494] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc320 irq 14 [ 3.587159] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc328 irq 15 [ OK ] Started udev Coldplug all Devices. [ OK ] Reached target System Initialization. Starting dracut initqueue hook... Starting Show Plymouth Boot Screen... [ OK ] Started Show Plymouth Boot Screen. [ OK ] Reached target Paths. [ OK ] Started Forward Password Requests to Plymouth Directory Watch. [ OK ] Reached target Basic System. [ 3.673946] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 [ 3.696217] ip (320) used greatest stack depth: 13080 bytes left [ 3.744175] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready [ 3.746944] ip (343) used greatest stack depth: 12464 bytes left [ 3.781355] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 6.024605] dracut-initqueue[290]: RTNETLINK answers: File exists [ 6.212844] dracut-initqueue[290]: bs=4096, sz=32212254720 bytes [ OK ] Started dracut initqueue hook. [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. Mounting /sysroot... [ OK ] Reached target Initrd Root File System. Starting Reload Configuration from the Real Root... [ 6.729626] EXT4-fs (nbd0): mounted filesystem with ordered data mode. Opts: (null) [ OK ] Mounted /sysroot. [ OK ] Started Reload Configuration from the Real Root. [ OK ] Reached target Initrd File Systems. [ OK ] Reached target Initrd Default Target. Starting dracut pre-pivot and cleanup hook... [ OK ] Started dracut pre-pivot and cleanup hook. Starting Cleaning Up and Shutting Down Daemons... Starting Plymouth switch root service... [ OK ] Stopped dracut pre-pivot and cleanup hook. [ OK ] Stopped target Initrd Default Target. [ OK ] Stopped target Basic System. [ OK ] Stopped target Paths. [ OK ] Stopped target Sockets. [ OK ] Stopped target Slices. [ OK ] Stopped target System Initialization. [ OK ] Stopped Apply Kernel Variables. [ OK ] Stopped Load Kernel Modules. [ OK ] Stopped target Local File Systems. [ OK ] Stopped target Swap. [ OK ] Stopped target Remote File Systems. [ OK ] Stopped target Remote File Systems (Pre). [ OK ] Stopped dracut initqueue hook. [ OK ] Stopped target Timers. [ OK ] Stopped udev Coldplug all Devices. [ OK ] Stopped dracut pre-trigger hook. Stopping udev Kernel Device Manager... [ OK ] Stopped udev Kernel Device Manager. [ OK ] Stopped Create Static Device Nodes in /dev. [ OK ] Stopped Create list of required sta...ce nodes for the current kernel. [ OK ] Stopped dracut pre-udev hook. [ OK ] Stopped dracut cmdline hook. [ OK ] Closed udev Kernel Socket. [ OK ] Closed udev Control Socket. Starting Cleanup udevd DB... 
[ OK ] Started Cleaning Up and Shutting Down Daemons. [ OK ] Started Plymouth switch root service. [ OK ] Started Cleanup udevd DB. [ OK ] Reached target Switch Root. Starting Switch Root... [ 7.131411] systemd-journald[106]: Received SIGTERM from PID 1 (n/a). [ 7.166589] systemd-cgroups (546) used greatest stack depth: 12416 bytes left [ 7.289368] SELinux: Disabled at runtime. [ 7.352245] ip_tables: (C) 2000-2006 Netfilter Core Team [ 7.354962] systemd[1]: Inserted module 'ip_tables' Welcome to CentOS Linux 7 (Core)! [ 7.387940] rpc-pipefs-gene (558) used greatest stack depth: 11856 bytes left [ OK ] Stopped Switch Root. [ OK ] Stopped Journal Service. Starting Journal Service... [ OK ] Set up automount Arbitrary Executab...ats File System Automount Point. Mounting Debug File System... [ OK ] Stopped target Switch Root. [ OK ] Listening on Delayed Shutdown Socket. [ OK ] Listening on udev Control Socket. Starting Create list of required st... nodes for the current kernel... [ OK ] Started Forward Password Requests to Wall Directory Watch. Mounting Huge Pages File System... [ OK ] Created slice system-getty.slice. [ OK ] Reached target rpc_pipefs.target. Mounting POSIX Message Queue File System... [ OK ] Created slice system-selinux\x2dpol...grate\x2dlocal\x2dchanges.slice. [ OK ] Created slice system-serial\x2dgetty.slice. [ OK ] Created slice User and Session Slice. [ OK ] Reached target Slices. [ OK ] Stopped target Initrd File Systems. [ OK ] Reached target Local Encrypted Volumes. Starting Read and set NIS domainname from /etc/sysconfig/network... Starting Remount Root and Kernel File Systems... Starting Set Up Additional Binary Formats... Starting Load Kernel Modules... [ OK ] Listening on udev Kernel Socket. Starting udev Coldplug all Devices... [ OK ] Stopped target Initrd Root File System. [ OK ] Listening on /dev/initctl Compatibility Named Pipe. [ OK ] Mounted Huge Pages File System. [ OK ] Mounted POSIX Message Queue File System. [ OK ] Mounted Debug File System. [ OK ] Started Journal Service. [ OK ] Started Create list of required sta...ce nodes for the current kernel. Mounting Arbitrary Executable File Formats File System... Starting Create Static Device Nodes in /dev... [ OK ] Started Read and set NIS domainname from /etc/sysconfig/network. [ OK ] Started Load Kernel Modules. Starting Apply Kernel Variables... [ OK ] Mounted Arbitrary Executable File Formats File System. [ OK ] Started Apply Kernel Variables. [FAILED] Failed to start Remount Root and Kernel File Systems. See 'systemctl status systemd-remount-fs.service' for details. [ OK ] Started Set Up Additional Binary Formats. [ OK ] Started Create Static Device Nodes in /dev. Starting udev Kernel Device Manager... Starting Flush Journal to Persistent Storage... Starting Configure read-only root support... [ OK ] Reached target Local File Systems (Pre). Mounting /mnt... [ OK ] Mounted /mnt. [ OK ] Started udev Coldplug all Devices. [ 7.771805] systemd-journald[566]: Received request to flush runtime journal from PID 1 [ OK ] Started Flush Journal to Persistent Storage. [ OK ] Started udev Kernel Device Manager. [ 7.909882] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 [ 7.919762] input: PC Speaker as /devices/platform/pcspkr/input/input3 [ OK ] Found device /dev/ttyS1. [ OK ] Found device /dev/ttyS0. [ OK ] Found device /dev/disk/by-label/SWAP. Activating swap /dev/disk/by-label/SWAP... [ 7.960554] cryptd: max_cpu_qlen set to 1000 [ OK ] Found device /dev/vda. 
[ 7.979072] Adding 1048572k swap on /dev/vdb. Priority:-2 extents:1 across:1048572k FS [ 7.990175] AVX version of gcm_enc/dec engaged. [ 7.992830] AES CTR mode by8 optimization enabled Mounting /home/green/git/lustre-release... [ OK ] Activated swap /dev/disk/by-label/SWAP. [ OK ] Reached target Swap. [ 8.039780] squashfs: version 4.0 (2009/01/31) Phillip Lougher [ 8.046912] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni) [ OK ] Mounted /home/green/git/lustre-release. [ 8.055998] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni) [ 8.149907] EDAC MC: Ver: 3.0.0 [ 8.156372] EDAC sbridge: Ver: 1.1.2 [ 9.987307] mount.nfs (770) used greatest stack depth: 10704 bytes left [ OK ] Started Configure read-only root support. [ OK ] Reached target Local File Systems. Starting Tell Plymouth To Write Out Runtime Data... Starting Rebuild Journal Catalog... Starting Mark the need to relabel after reboot... Starting Preprocess NFS configuration... Starting Create Volatile Files and Directories... Starting Load/Save Random Seed... [ OK ] Started Mark the need to relabel after reboot. [ OK ] Started Tell Plymouth To Write Out Runtime Data. [FAILED] Failed to start Create Volatile Files and Directories. See 'systemctl status systemd-tmpfiles-setup.service' for details. [FAILED] Failed to start Rebuild Journal Catalog. See 'systemctl status systemd-journal-catalog-update.service' for details. [ OK ] Started Preprocess NFS configuration. [ OK ] Started Load/Save Random Seed. Starting Update is Completed... Starting Update UTMP about System Boot/Shutdown... [ OK ] Started Update is Completed. [ OK ] Started Update UTMP about System Boot/Shutdown. [ OK ] Reached target System Initialization. [ OK ] Listening on RPCbind Server Activation Socket. [ OK ] Started Flexible branding. [ OK ] Reached target Paths. [ OK ] Started Daily Cleanup of Temporary Directories. [ OK ] Reached target Timers. [ OK ] Listening on D-Bus System Message Bus Socket. [ OK ] Reached target Sockets. [ OK ] Reached target Basic System. [ OK ] Started D-Bus System Message Bus. Starting Dump dmesg to /var/log/dmesg... Starting GSSAPI Proxy Daemon... Starting Login Service... Starting Network Manager... [ OK ] Started Dump dmesg to /var/log/dmesg. [ OK ] Started Login Service. [ OK ] Started GSSAPI Proxy Daemon. [ OK ] Reached target NFS client services. [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. Starting Permit User Sessions... [ OK ] Started Permit User Sessions. [ OK ] Started Network Manager. [ OK ] Reached target Network. Starting OpenSSH server daemon... Starting /etc/rc.d/rc.local Compatibility... Starting Network Manager Wait Online... Starting Hostname Service... [ OK ] Started OpenSSH server daemon. [ OK ] Started /etc/rc.d/rc.local Compatibility. [ OK ] Started Hostname Service. Starting Network Manager Script Dispatcher Service... Starting Wait for Plymouth Boot Screen to Quit... Starting Terminate Plymouth Boot Screen... [ OK ] Started Network Manager Script Dispatcher Service. CentOS Linux 7 (Core) Kernel 3.10.0-7.9-debug on an x86_64 oleg234-server login: [ 19.803116] device-mapper: uevent: version 1.0.3 [ 19.804623] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel@redhat.com [ 23.890036] libcfs: loading out-of-tree module taints kernel. 
[ 23.891375] libcfs: module verification failed: signature and/or required key missing - tainting kernel [ 23.913681] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_hostid [ 28.561336] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing load_modules_local [ 28.743882] alg: No test for adler32 (adler32-zlib) [ 29.494296] libcfs: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1 [ 29.612500] Lustre: Lustre: Build Version: 2.15.62_23_g1ea8c49 [ 29.764616] LNet: Added LNI 192.168.202.134@tcp [8/256/0/180] [ 29.766409] LNet: Accept secure, port 988 [ 31.306395] Key type lgssc registered [ 31.581001] Lustre: Echo OBD driver; http://www.lustre.org/ [ 34.260817] icp: module license 'CDDL' taints kernel. [ 34.262680] Disabling lock debugging due to kernel taint [ 36.750615] ZFS: Loaded module v0.8.6-1, ZFS pool version 5000, ZFS filesystem version 5 [ 39.649380] LDISKFS-fs (vdc): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 44.037133] LDISKFS-fs (vdd): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 46.091617] LDISKFS-fs (vde): file extents enabled, maximum tree depth=5 [ 46.094951] LDISKFS-fs (vde): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 48.101358] LDISKFS-fs (vdf): file extents enabled, maximum tree depth=5 [ 48.104245] LDISKFS-fs (vdf): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 51.051932] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing load_modules_local [ 54.038149] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 54.056059] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 54.061092] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 55.131766] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 55.139458] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space. [ 55.172414] Lustre: lustre-MDT0000: new disk, initializing [ 55.191134] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 55.196619] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 55.221159] mount.lustre (6909) used greatest stack depth: 10256 bytes left [ 55.932428] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 59.952836] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 59.971030] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 59.987634] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001 [ 59.992650] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space. 
[ 59.994020] Lustre: Skipped 1 previous similar message [ 60.024643] Lustre: lustre-MDT0001: new disk, initializing [ 60.039188] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 60.047089] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt [ 60.049788] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt] [ 60.763837] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 65.779289] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 65.785075] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 65.809158] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 65.815059] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 65.901686] Lustre: lustre-OST0000: new disk, initializing [ 65.904932] Lustre: srv-lustre-OST0000: No data found on store. Initialize space. [ 65.923557] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 67.712762] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 70.230340] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost [ 70.236769] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost] [ 70.248446] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401 [ 72.986648] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5 [ 72.991644] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro [ 72.996506] random: crng init done [ 73.015583] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5 [ 73.021065] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 73.061297] Lustre: lustre-OST0001: new disk, initializing [ 73.064321] Lustre: srv-lustre-OST0001: No data found on store. Initialize space. 
[ 73.082756] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180 [ 74.466131] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 79.204727] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 80.157192] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost [ 80.159952] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost] [ 80.171527] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x2c0000401 [ 87.528249] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 93.184274] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing check_logdir /tmp/testlogs/ [ 94.036567] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing yml_node [ 95.022077] Lustre: DEBUG MARKER: Client: 2.15.62.23 [ 95.660142] Lustre: DEBUG MARKER: MDS: 2.15.62.23 [ 96.963232] Lustre: DEBUG MARKER: OSS: 2.15.62.23 [ 98.050637] Lustre: DEBUG MARKER: -----============= acceptance-small: replay-single ============----- Wed Apr 17 10:29:58 EDT 2024 [ 100.840223] Lustre: DEBUG MARKER: excepting tests: 110f 131b 59 36 [ 101.501961] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing check_config_client /mnt/lustre [ 106.209070] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 107.056462] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 107.673299] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 109.521996] Lustre: DEBUG MARKER: == replay-single test 0a: empty replay =================== 10:30:10 (1713364210) [ 111.444801] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 111.931690] Lustre: Failing over lustre-MDT0000 [ 111.977492] Lustre: server umount lustre-MDT0000 complete [ 112.337430] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.202.34@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 115.117638] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 115.123383] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 115.132325] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 117.340676] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.202.34@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 117.345110] LustreError: Skipped 3 previous similar messages [ 120.221523] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 120.224436] LustreError: Skipped 3 previous similar messages [ 124.842383] LDISKFS-fs (dm-0): recovery complete [ 124.843657] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 124.853315] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 124.914603] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 125.684800] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 127.356609] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 129.918707] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 129.925238] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. [ 130.472217] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 130.836249] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 134.968627] Lustre: DEBUG MARKER: == replay-single test 0b: ensure object created after recover exists. (3284) ========================================================== 10:30:35 (1713364235) [ 135.511713] Lustre: Failing over lustre-OST0000 [ 135.528699] Lustre: server umount lustre-OST0000 complete [ 137.380953] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.202.34@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 137.384766] LustreError: Skipped 1 previous similar message [ 139.933878] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 139.937942] Lustre: Skipped 4 previous similar messages [ 147.388327] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.202.34@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 147.392901] LustreError: Skipped 5 previous similar messages [ 147.515874] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 147.519136] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 147.558084] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 148.798416] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 148.834471] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 149.257471] Lustre: lustre-OST0000: Recovery over after 0:01, of 3 clients 3 recovered and 0 were evicted. 
[ 149.257542] Lustre: lustre-OST0000-osc-MDT0001: Connection restored to 192.168.202.134@tcp (at 0@lo) [ 149.257544] Lustre: Skipped 3 previous similar messages [ 151.104900] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 151.493604] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 155.827244] Lustre: DEBUG MARKER: == replay-single test 0c: check replay-barrier =========== 10:30:56 (1713364256) [ 157.754634] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 158.307719] Lustre: Failing over lustre-MDT0000 [ 158.353503] Lustre: server umount lustre-MDT0000 complete [ 159.981618] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 159.983671] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 162.573659] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 162.577870] Lustre: Skipped 1 previous similar message [ 167.581497] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 167.585091] LustreError: Skipped 8 previous similar messages [ 171.234516] LDISKFS-fs (dm-0): recovery complete [ 171.235931] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 171.261288] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 171.317008] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 172.026525] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 172.550122] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 172.553663] Lustre: lustre-MDT0000: Denying connection for new client e524b40b-3bd6-4146-b7b9-e9869cef9483 (at 192.168.202.34@tcp), waiting for 2 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 176.334503] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 176.336288] Lustre: Skipped 1 previous similar message [ 177.564615] Lustre: lustre-MDT0000: Denying connection for new client e524b40b-3bd6-4146-b7b9-e9869cef9483 (at 192.168.202.34@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 1:04 [ 182.572600] Lustre: lustre-MDT0000: Denying connection for new client e524b40b-3bd6-4146-b7b9-e9869cef9483 (at 192.168.202.34@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 187.580647] Lustre: lustre-MDT0000: Denying connection for new client e524b40b-3bd6-4146-b7b9-e9869cef9483 (at 192.168.202.34@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 0:54 [ 192.588549] Lustre: lustre-MDT0000: Denying connection for new client e524b40b-3bd6-4146-b7b9-e9869cef9483 (at 192.168.202.34@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 0:49 [ 202.604481] Lustre: lustre-MDT0000: Denying connection for new client 
e524b40b-3bd6-4146-b7b9-e9869cef9483 (at 192.168.202.34@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 0:39 [ 202.608818] Lustre: Skipped 1 previous similar message [ 222.636552] Lustre: lustre-MDT0000: Denying connection for new client e524b40b-3bd6-4146-b7b9-e9869cef9483 (at 192.168.202.34@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 0:19 [ 222.644501] Lustre: Skipped 3 previous similar messages [ 242.430231] Lustre: lustre-MDT0000: recovery is timed out, evict stale exports [ 242.432225] Lustre: 21469:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client 37e7c6e6-6ac2-4d59-b7d1-4d9fa9d35604@ [ 242.435624] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 242.438546] Lustre: lustre-MDT0000: Recovery over after 1:10, of 2 clients 1 recovered and 1 was evicted. [ 242.438605] Lustre: lustre-MDT0000-osp-MDT0001: Connection restored to (at 0@lo) [ 242.438606] Lustre: Skipped 2 previous similar messages [ 242.455700] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:44 to 0x2c0000401:65) [ 242.455702] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:44 to 0x280000401:65) [ 246.918925] Lustre: DEBUG MARKER: == replay-single test 0d: expired recovery with no clients ========================================================== 10:32:27 (1713364347) [ 248.842767] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 249.359081] Lustre: Failing over lustre-MDT0000 [ 249.419702] Lustre: server umount lustre-MDT0000 complete [ 251.453816] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 251.453994] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 251.461644] Lustre: Skipped 3 previous similar messages [ 262.547780] LDISKFS-fs (dm-0): recovery complete [ 262.550399] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 262.587026] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 262.659507] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 263.414430] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 263.951426] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 263.954236] Lustre: lustre-MDT0000: Denying connection for new client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp), waiting for 2 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 263.958522] Lustre: Skipped 3 previous similar messages [ 267.662451] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 329.052615] Lustre: lustre-MDT0000: Denying connection for new client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 0:04 [ 329.056658] Lustre: Skipped 12 previous similar messages [ 333.430295] Lustre: lustre-MDT0000: recovery is timed out, evict stale exports [ 333.432019] Lustre: 24036:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client e524b40b-3bd6-4146-b7b9-e9869cef9483@ [ 333.435687] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 333.438946] Lustre: lustre-MDT0000: Recovery over after 1:10, of 2 clients 1 recovered and 1 was evicted. [ 333.439039] Lustre: lustre-MDT0000-osp-MDT0001: Connection restored to (at 0@lo) [ 333.439040] Lustre: Skipped 2 previous similar messages [ 333.455807] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:44 to 0x2c0000401:97) [ 333.459059] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:44 to 0x280000401:97) [ 338.941807] Lustre: DEBUG MARKER: == replay-single test 1: simple create =================== 10:33:59 (1713364439) [ 340.960986] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 341.565992] Lustre: Failing over lustre-MDT0000 [ 341.615322] Lustre: server umount lustre-MDT0000 complete [ 342.781647] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 342.781885] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 342.781886] LustreError: Skipped 11 previous similar messages [ 342.795829] Lustre: Skipped 3 previous similar messages [ 355.300368] LDISKFS-fs (dm-0): recovery complete [ 355.302704] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 355.367983] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 355.462441] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 356.459381] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 359.102714] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 360.464705] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 360.489037] Lustre: lustre-MDT0000: Recovery over after 0:02, of 2 clients 2 recovered and 0 were evicted. [ 360.520747] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:44 to 0x280000401:129) [ 360.520775] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:44 to 0x2c0000401:129) [ 361.440164] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 362.039922] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 367.451447] Lustre: DEBUG MARKER: == replay-single test 2a: touch ========================== 10:34:28 (1713364468) [ 369.306010] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 369.814624] Lustre: Failing over lustre-MDT0000 [ 369.867154] Lustre: server umount lustre-MDT0000 complete [ 370.477709] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 370.484224] Lustre: Skipped 3 previous similar messages [ 382.939223] LDISKFS-fs (dm-0): recovery complete [ 382.940910] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 382.971471] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 383.876616] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 384.140870] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 388.076531] Lustre: lustre-MDT0000: Recovery over after 0:04, of 2 clients 2 recovered and 0 were evicted. [ 388.096166] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:131 to 0x280000401:161) [ 388.096169] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:44 to 0x2c0000401:161) [ 388.783643] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 389.245566] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 393.716124] Lustre: DEBUG MARKER: == replay-single test 2b: touch ========================== 10:34:54 (1713364494) [ 395.993253] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 396.552088] Lustre: Failing over lustre-MDT0000 [ 396.604614] Lustre: server umount lustre-MDT0000 complete [ 409.514341] LDISKFS-fs (dm-0): recovery complete [ 409.515684] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 409.537758] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 410.295522] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 414.188498] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 414.606420] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo) [ 414.607982] Lustre: Skipped 7 previous similar messages [ 414.620054] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 414.634394] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:163 to 0x280000401:193) [ 414.634402] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:44 to 0x2c0000401:193) [ 415.162694] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 415.563386] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 419.801748] Lustre: DEBUG MARKER: == replay-single test 2c: setstripe replay =============== 10:35:20 (1713364520) [ 421.683303] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 422.195559] Lustre: Failing over lustre-MDT0000 [ 422.245954] Lustre: server umount lustre-MDT0000 complete [ 424.621669] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 424.625525] Lustre: Skipped 7 previous similar messages [ 435.376324] LDISKFS-fs (dm-0): recovery complete [ 435.377968] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 435.409814] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 436.261601] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 440.511057] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:195 to 0x2c0000401:225) [ 440.511068] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:195 to 0x280000401:225) [ 441.088306] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 441.488151] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 446.058481] Lustre: DEBUG MARKER: == replay-single test 2d: setdirstripe replay ============ 10:35:46 (1713364546) [ 448.016008] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 448.532748] Lustre: Failing over lustre-MDT0000 [ 448.582865] Lustre: server umount lustre-MDT0000 complete [ 461.493215] LDISKFS-fs (dm-0): recovery complete [ 461.494990] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 462.315308] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 464.268862] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 464.272496] Lustre: Skipped 1 previous similar message [ 466.602681] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. [ 466.605619] Lustre: Skipped 1 previous similar message [ 466.620032] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:195 to 0x2c0000401:257) [ 466.620034] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:195 to 0x280000401:257) [ 467.225609] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 467.632691] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 471.980240] Lustre: DEBUG MARKER: == replay-single test 2e: O_CREAT|O_EXCL create replay === 10:36:12 (1713364572) [ 472.249329] Lustre: *** cfs_fail_loc=13b, val=315*** [ 472.250702] Lustre: *** cfs_fail_loc=13b, val=2147483648*** [ 472.252550] LustreError: 6933:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880086a1ea00 x1796592501842496/t38654705666(0) o35->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:49/0 lens 392/456 e 0 to 0 dl 1713364584 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 475.199563] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 475.707302] Lustre: Failing over lustre-MDT0000 [ 475.757549] Lustre: server umount lustre-MDT0000 complete [ 476.605856] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 476.609569] LustreError: Skipped 75 previous similar messages [ 488.768086] LDISKFS-fs (dm-0): recovery complete [ 488.769454] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 488.797322] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 488.800289] LustreError: Skipped 1 previous similar message [ 488.855070] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 488.857835] Lustre: Skipped 4 previous similar messages [ 489.640665] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 493.871366] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to 192.168.202.134@tcp (at 0@lo) [ 493.873409] Lustre: Skipped 11 previous similar messages [ 493.884011] Lustre: 6933:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009722c700 x1796592501842496/t38654705666(0) o35->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:71/0 lens 392/456 e 0 to 0 dl 1713364606 ref 1 fl Interpret:/202/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 493.894616] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:195 to 0x2c0000401:289) [ 493.894905] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:195 to 0x280000401:289) [ 494.544398] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 494.973704] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 499.706840] Lustre: DEBUG MARKER: == replay-single test 3a: replay failed open(O_DIRECTORY) ========================================================== 10:36:40 (1713364600) [ 501.753303] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 502.269682] Lustre: Failing over lustre-MDT0000 [ 502.320359] Lustre: server umount lustre-MDT0000 complete [ 503.886408] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 503.886410] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 503.886415] Lustre: Skipped 8 previous similar messages [ 503.896774] Lustre: Skipped 2 previous similar messages [ 515.414013] LDISKFS-fs (dm-0): recovery complete [ 515.415576] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 516.289729] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 520.536719] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:195 to 0x2c0000401:321) [ 520.537199] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:195 to 0x280000401:321) [ 521.140391] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 521.556775] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 525.883842] Lustre: DEBUG MARKER: == replay-single test 3b: replay failed open -ENOMEM ===== 10:37:06 (1713364626) [ 527.856015] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 528.098957] Lustre: *** cfs_fail_loc=114, val=0*** [ 528.854863] Lustre: Failing over lustre-MDT0000 [ 528.910562] Lustre: server umount lustre-MDT0000 complete [ 542.012022] LDISKFS-fs (dm-0): recovery complete [ 542.013697] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 542.856395] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 544.396684] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 544.399919] Lustre: Skipped 2 previous similar messages [ 547.109974] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. [ 547.113631] Lustre: Skipped 2 previous similar messages [ 547.127540] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:195 to 0x280000401:353) [ 547.127960] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:195 to 0x2c0000401:353) [ 547.715399] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 548.107325] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 552.412385] Lustre: DEBUG MARKER: == replay-single test 3c: replay failed open -ENOMEM ===== 10:37:33 (1713364653) [ 554.365620] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 554.613154] Lustre: *** cfs_fail_loc=128, val=0*** [ 555.358944] Lustre: Failing over lustre-MDT0000 [ 555.409653] Lustre: server umount lustre-MDT0000 complete [ 568.444881] LDISKFS-fs (dm-0): recovery complete [ 568.446543] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 568.476920] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 568.480482] LustreError: Skipped 2 previous similar messages [ 569.310429] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 573.572992] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:195 to 0x280000401:385) [ 573.577707] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:195 to 0x2c0000401:385) [ 574.164088] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 574.558530] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 579.098632] Lustre: DEBUG MARKER: == replay-single test 4a: |x| 10 open(O_CREAT)s ========== 10:37:59 (1713364679) [ 581.216262] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 581.839926] Lustre: Failing over lustre-MDT0000 [ 581.889604] Lustre: server umount lustre-MDT0000 complete [ 595.124124] LDISKFS-fs (dm-0): recovery complete [ 595.125471] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 596.112825] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 600.290974] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:391 to 0x2c0000401:417) [ 600.291393] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:391 to 0x280000401:417) [ 600.949099] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 601.431763] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 606.431402] Lustre: DEBUG MARKER: == replay-single test 4b: |x| rm 10 files ================ 10:38:27 (1713364707) [ 608.455500] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 609.010901] Lustre: Failing over lustre-MDT0000 [ 609.066968] Lustre: server umount lustre-MDT0000 complete [ 622.117222] LDISKFS-fs (dm-0): recovery complete [ 622.118584] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 622.207320] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 622.976354] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 627.214592] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 627.217888] Lustre: Skipped 19 previous similar messages [ 627.245531] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:423 to 0x280000401:449) [ 627.245544] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:423 to 0x2c0000401:449) [ 627.791757] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 628.184940] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 632.416043] Lustre: DEBUG MARKER: == replay-single test 5: |x| 220 open(O_CREAT) =========== 10:38:53 (1713364733) [ 634.330130] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 635.974525] Lustre: Failing over lustre-MDT0000 [ 636.027566] Lustre: server umount lustre-MDT0000 complete [ 649.125401] LDISKFS-fs (dm-0): recovery complete [ 649.127554] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 649.221175] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 650.023923] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 654.837809] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:560 to 0x2c0000401:577) [ 654.837814] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:560 to 0x280000401:577) [ 655.470944] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 655.884092] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 666.335906] Lustre: DEBUG MARKER: == replay-single test 6a: mkdir + contained create ======= 10:39:27 (1713364767) [ 668.262528] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 668.761794] Lustre: Failing over lustre-MDT0000 [ 668.809282] Lustre: server umount lustre-MDT0000 complete [ 681.785725] LDISKFS-fs (dm-0): recovery complete [ 681.787116] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 681.868384] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 682.626772] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 684.620549] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 684.623005] Lustre: Skipped 4 previous similar messages [ 686.887988] Lustre: lustre-MDT0000: Recovery over after 0:02, of 2 clients 2 recovered and 0 were evicted. 
[ 686.894090] Lustre: Skipped 4 previous similar messages [ 686.909174] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:560 to 0x2c0000401:609) [ 686.909176] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:560 to 0x280000401:609) [ 687.459038] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 687.841126] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 694.248839] Lustre: DEBUG MARKER: == replay-single test 6b: |X| rmdir ====================== 10:39:55 (1713364795) [ 696.230940] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 696.778597] Lustre: Failing over lustre-MDT0000 [ 696.827340] Lustre: server umount lustre-MDT0000 complete [ 709.718920] LDISKFS-fs (dm-0): recovery complete [ 709.720588] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 709.748994] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 709.751697] LustreError: Skipped 4 previous similar messages [ 709.808594] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 710.545885] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 714.835462] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:560 to 0x280000401:641) [ 714.835468] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:560 to 0x2c0000401:641) [ 715.395967] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 715.801300] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 720.109132] Lustre: DEBUG MARKER: == replay-single test 7: mkdir |X| contained create ====== 10:40:20 (1713364820) [ 722.046975] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 722.555031] Lustre: Failing over lustre-MDT0000 [ 722.607542] Lustre: server umount lustre-MDT0000 complete [ 734.700487] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.202.34@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 734.705306] LustreError: Skipped 138 previous similar messages [ 735.457575] LDISKFS-fs (dm-0): recovery complete [ 735.459068] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 735.538129] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 736.254452] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 740.560513] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:560 to 0x280000401:673) [ 740.560520] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:560 to 0x2c0000401:673) [ 741.099971] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 741.465556] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 745.583556] Lustre: DEBUG MARKER: == replay-single test 8: creat open |X| close ============ 10:40:46 (1713364846) [ 747.380088] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 747.872644] Lustre: Failing over lustre-MDT0000 [ 747.919842] Lustre: server umount lustre-MDT0000 complete [ 760.836633] LDISKFS-fs (dm-0): recovery complete [ 760.838066] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 760.922034] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 760.925058] Lustre: Skipped 9 previous similar messages [ 760.932919] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 761.662888] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 765.969526] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:560 to 0x2c0000401:705) [ 765.973969] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:560 to 0x280000401:705) [ 766.528965] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 766.928734] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 771.217604] Lustre: DEBUG MARKER: == replay-single test 9: |X| create (same inum/gen) ====== 10:41:12 (1713364872) [ 773.145195] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 773.645637] Lustre: Failing over lustre-MDT0000 [ 773.690454] Lustre: server umount lustre-MDT0000 complete [ 775.949683] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 775.954282] Lustre: Skipped 39 previous similar messages [ 786.768969] LDISKFS-fs (dm-0): recovery complete [ 786.770558] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 786.861708] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 787.631600] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 791.892967] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:560 to 0x280000401:737) [ 791.892994] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:560 to 0x2c0000401:737) [ 792.476872] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 792.877364] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 797.224898] Lustre: DEBUG MARKER: == replay-single test 10: create |X| rename unlink ======= 10:41:38 (1713364898) [ 799.286711] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 799.848872] Lustre: Failing over lustre-MDT0000 [ 799.898664] Lustre: server umount lustre-MDT0000 complete [ 813.093801] LDISKFS-fs (dm-0): recovery complete [ 813.095395] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 814.013378] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 818.228045] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:560 to 0x280000401:769) [ 818.228047] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:560 to 0x2c0000401:769) [ 818.818564] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 819.213016] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 823.756728] Lustre: DEBUG MARKER: == replay-single test 11: create open write rename |X| create-old-name read ========================================================== 10:42:04 (1713364924) [ 825.779465] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 826.314978] Lustre: Failing over lustre-MDT0000 [ 826.373317] Lustre: server umount lustre-MDT0000 complete [ 839.539210] LDISKFS-fs (dm-0): recovery complete [ 839.541290] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 839.653054] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 839.655892] Lustre: Skipped 1 previous similar message [ 840.491621] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 844.695466] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:771 to 0x2c0000401:801) [ 844.695469] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:771 to 0x280000401:801) [ 845.347480] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 845.747291] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 850.259203] Lustre: DEBUG MARKER: == replay-single test 12: open, unlink |X| close ========= 10:42:31 (1713364951) [ 852.322303] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 852.842636] Lustre: Failing over lustre-MDT0000 [ 852.891530] Lustre: server umount lustre-MDT0000 complete [ 865.949456] LDISKFS-fs (dm-0): recovery complete [ 865.951333] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 866.838119] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 871.076929] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:771 to 0x2c0000401:833) [ 871.079648] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:771 to 0x280000401:833) [ 871.698208] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 872.125412] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 876.645301] Lustre: DEBUG MARKER: == replay-single test 13: open chmod 0 |x| write close === 10:42:57 (1713364977) [ 878.738591] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 879.237656] Lustre: Failing over lustre-MDT0000 [ 879.287126] Lustre: server umount lustre-MDT0000 complete [ 892.239976] LDISKFS-fs (dm-0): recovery complete [ 892.241397] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 893.024923] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 897.326844] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 897.330113] Lustre: Skipped 39 previous similar messages [ 897.346310] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:771 to 0x280000401:865) [ 897.346323] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:835 to 0x2c0000401:865) [ 897.895194] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 898.277274] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 902.841012] Lustre: DEBUG MARKER: == replay-single test 14: open(O_CREAT), unlink |X| close ========================================================== 10:43:23 (1713365003) [ 904.913799] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 905.450258] Lustre: Failing over lustre-MDT0000 [ 905.502480] Lustre: server umount lustre-MDT0000 complete [ 918.335359] LDISKFS-fs (dm-0): recovery complete [ 918.337238] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 918.415147] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 918.417201] Lustre: Skipped 2 previous similar messages [ 919.087690] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 923.444001] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:867 to 0x280000401:897) [ 923.447921] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:835 to 0x2c0000401:897) [ 923.999316] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 924.374911] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 928.529876] Lustre: DEBUG MARKER: == replay-single test 15: open(O_CREAT), unlink |X| touch new, close ========================================================== 10:43:49 (1713365029) [ 930.428256] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 930.926925] Lustre: Failing over lustre-MDT0000 [ 930.976257] Lustre: server umount lustre-MDT0000 complete [ 944.008669] LDISKFS-fs (dm-0): recovery complete [ 944.010275] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 944.865957] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 945.036790] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 945.041391] Lustre: Skipped 9 previous similar messages [ 949.112994] Lustre: lustre-MDT0000: Recovery over after 0:04, of 2 clients 2 recovered and 0 were evicted. 
[ 949.115808] Lustre: Skipped 9 previous similar messages [ 949.129977] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:899 to 0x280000401:929) [ 949.131349] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:899 to 0x2c0000401:929) [ 949.703152] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 950.096318] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 954.314203] Lustre: DEBUG MARKER: == replay-single test 16: |X| open(O_CREAT), unlink, touch new, unlink new ========================================================== 10:44:15 (1713365055) [ 956.242248] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 956.779035] Lustre: Failing over lustre-MDT0000 [ 956.828718] Lustre: server umount lustre-MDT0000 complete [ 969.715990] LDISKFS-fs (dm-0): recovery complete [ 969.717447] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 969.746209] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 969.749872] LustreError: Skipped 9 previous similar messages [ 970.549255] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 974.836424] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:899 to 0x2c0000401:961) [ 974.836430] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:931 to 0x280000401:961) [ 975.390517] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 975.777096] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 980.257416] Lustre: DEBUG MARKER: == replay-single test 17: |X| open(O_CREAT), |replay| close ========================================================== 10:44:41 (1713365081) [ 982.293054] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 982.814673] Lustre: Failing over lustre-MDT0000 [ 982.859858] Lustre: server umount lustre-MDT0000 complete [ 995.797320] LDISKFS-fs (dm-0): recovery complete [ 995.799157] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 996.647301] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1000.913962] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:931 to 0x280000401:993) [ 1000.913964] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:963 to 0x2c0000401:993) [ 1001.511463] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1001.911632] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1006.116472] Lustre: DEBUG MARKER: == replay-single test 18: open(O_CREAT), unlink, touch new, close, touch, unlink ========================================================== 10:45:06 (1713365106) [ 1007.977821] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1008.498797] Lustre: Failing over lustre-MDT0000 [ 1008.550953] Lustre: server umount lustre-MDT0000 complete [ 1021.455649] LDISKFS-fs (dm-0): recovery complete [ 1021.457330] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1022.264842] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1026.568890] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:995 to 0x280000401:1025) [ 1026.568892] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:995 to 0x2c0000401:1025) [ 1027.117209] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1027.481474] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1031.700086] Lustre: DEBUG MARKER: == replay-single test 19: mcreate, open, write, rename === 10:45:32 (1713365132) [ 1033.554240] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1034.087886] Lustre: Failing over lustre-MDT0000 [ 1034.139219] Lustre: server umount lustre-MDT0000 complete [ 1047.081869] LDISKFS-fs (dm-0): recovery complete [ 1047.083157] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1047.165641] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1047.168033] Lustre: Skipped 4 previous similar messages [ 1047.884475] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1052.199163] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1027 to 0x2c0000401:1057) [ 1052.199165] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1027 to 0x280000401:1057) [ 1052.745610] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1053.119402] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1057.275767] Lustre: DEBUG MARKER: == replay-single test 20a: |X| open(O_CREAT), unlink, replay, close (test mds_cleanup_orphans) ========================================================== 10:45:58 (1713365158) [ 1059.204235] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1059.723687] Lustre: Failing over lustre-MDT0000 [ 1059.773460] Lustre: server umount lustre-MDT0000 complete [ 1072.724541] LDISKFS-fs (dm-0): recovery complete [ 1072.726034] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1073.644544] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1077.825607] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1059 to 0x2c0000401:1089) [ 1077.825743] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1027 to 0x280000401:1089) [ 1078.374315] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1078.763788] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1083.077968] Lustre: DEBUG MARKER: == replay-single test 20b: write, unlink, eviction, replay (test mds_cleanup_orphans) ========================================================== 10:46:23 (1713365183) [ 1084.061568] Lustre: 2648:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 at adminstrative request [ 1086.037800] Lustre: Failing over lustre-MDT0000 [ 1086.086568] Lustre: server umount lustre-MDT0000 complete [ 1097.959474] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1098.719056] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1103.051656] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1091 to 0x280000401:1121) [ 1103.051661] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1059 to 0x2c0000401:1121) [ 1103.573038] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1103.923264] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1105.608071] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 [ 1107.074790] Lustre: DEBUG MARKER: before 3116, after 3116 [ 1109.857909] Lustre: DEBUG MARKER: == replay-single test 20c: check that client eviction does not affect file content ========================================================== 10:46:50 (1713365210) [ 1110.099573] Lustre: 5156:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 at adminstrative request [ 1114.151085] Lustre: DEBUG MARKER: == replay-single test 21: |X| open(O_CREAT), unlink touch new, replay, close (test mds_cleanup_orphans) ========================================================== 10:46:54 (1713365214) [ 1116.003844] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1116.506522] Lustre: Failing over lustre-MDT0000 [ 1116.554107] Lustre: server umount lustre-MDT0000 complete [ 1118.061667] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1129.363052] LDISKFS-fs (dm-0): recovery complete [ 1129.364023] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1130.117782] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1134.481199] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1124 to 0x280000401:1153) [ 1134.481210] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1123 to 0x2c0000401:1153) [ 1135.045431] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1135.429561] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1139.737483] Lustre: DEBUG MARKER: == replay-single test 22: open(O_CREAT), |X| unlink, replay, close (test mds_cleanup_orphans) ========================================================== 10:47:20 (1713365240) [ 1141.751343] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1142.273843] Lustre: Failing over lustre-MDT0000 [ 1142.330353] Lustre: server umount lustre-MDT0000 complete [ 1155.384398] LDISKFS-fs (dm-0): recovery complete [ 1155.386801] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1156.254448] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1160.497751] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1155 to 0x280000401:1185) [ 1160.497753] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1155 to 0x2c0000401:1185) [ 1161.043753] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1161.427033] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1165.717111] Lustre: DEBUG MARKER: == replay-single test 23: open(O_CREAT), |X| unlink touch new, replay, close (test mds_cleanup_orphans) ========================================================== 10:47:46 (1713365266) [ 1167.755896] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1168.287057] Lustre: Failing over lustre-MDT0000 [ 1168.340433] Lustre: server umount lustre-MDT0000 complete [ 1181.320646] LDISKFS-fs (dm-0): recovery complete [ 1181.322418] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1182.142014] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1186.436437] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1187 to 0x2c0000401:1217) [ 1186.441769] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1187 to 0x280000401:1217) [ 1187.008873] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1187.411501] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1191.807426] Lustre: DEBUG MARKER: == replay-single test 24: open(O_CREAT), replay, unlink, close (test mds_cleanup_orphans) ========================================================== 10:48:12 (1713365292) [ 1193.822677] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1194.342050] Lustre: Failing over lustre-MDT0000 [ 1194.403370] Lustre: server umount lustre-MDT0000 complete [ 1207.365384] LDISKFS-fs (dm-0): recovery complete [ 1207.366510] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1208.197518] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1212.479910] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1219 to 0x280000401:1249) [ 1212.479918] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1219 to 0x2c0000401:1249) [ 1213.035054] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1213.401105] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1217.627630] Lustre: DEBUG MARKER: == replay-single test 25: open(O_CREAT), unlink, replay, close (test mds_cleanup_orphans) ========================================================== 10:48:38 (1713365318) [ 1219.620875] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1220.111653] Lustre: Failing over lustre-MDT0000 [ 1220.164613] Lustre: server umount lustre-MDT0000 complete [ 1233.158059] LDISKFS-fs (dm-0): recovery complete [ 1233.159924] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1234.010141] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1238.273237] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1219 to 0x2c0000401:1281) [ 1238.273239] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1251 to 0x280000401:1281) [ 1238.815081] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1239.185661] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1243.504999] Lustre: DEBUG MARKER: == replay-single test 26: |X| open(O_CREAT), unlink two, close one, replay, close one (test mds_cleanup_orphans) ========================================================== 10:49:04 (1713365344) [ 1245.521758] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1246.086744] Lustre: Failing over lustre-MDT0000 [ 1246.144809] Lustre: server umount lustre-MDT0000 complete [ 1248.269810] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1248.275377] LustreError: Skipped 285 previous similar messages [ 1259.168059] LDISKFS-fs (dm-0): recovery complete [ 1259.169588] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1260.016973] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1264.281300] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1283 to 0x280000401:1313) [ 1264.281813] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1283 to 0x2c0000401:1313) [ 1264.830736] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1265.220118] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1269.425247] Lustre: DEBUG MARKER: == replay-single test 27: |X| open(O_CREAT), unlink two, replay, close two (test mds_cleanup_orphans) ========================================================== 10:49:30 (1713365370) [ 1271.388266] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1271.939600] Lustre: Failing over lustre-MDT0000 [ 1271.987407] Lustre: server umount lustre-MDT0000 complete [ 1284.914302] LDISKFS-fs (dm-0): recovery complete [ 1284.915719] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1284.990384] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1284.991869] Lustre: Skipped 19 previous similar messages [ 1285.731200] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1290.031157] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1315 to 0x280000401:1345) [ 1290.035710] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1315 to 0x2c0000401:1345) [ 1290.572176] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1290.951579] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1295.111429] Lustre: DEBUG MARKER: == replay-single test 28: open(O_CREAT), |X| unlink two, close one, replay, close one (test mds_cleanup_orphans) ========================================================== 10:49:55 (1713365395) [ 1297.009830] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1297.515523] Lustre: Failing over lustre-MDT0000 [ 1297.565127] Lustre: server umount lustre-MDT0000 complete [ 1300.013576] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1300.016768] Lustre: Skipped 79 previous similar messages [ 1310.507099] LDISKFS-fs (dm-0): recovery complete [ 1310.508167] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1310.588814] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1310.590838] Lustre: Skipped 9 previous similar messages [ 1311.316804] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1315.630731] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1347 to 0x2c0000401:1377) [ 1315.630733] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1347 to 0x280000401:1377) [ 1316.183909] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1316.570603] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1320.846993] Lustre: DEBUG MARKER: == replay-single test 29: open(O_CREAT), |X| unlink two, replay, close two (test mds_cleanup_orphans) ========================================================== 10:50:21 (1713365421) [ 1322.734040] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1323.243697] Lustre: Failing over lustre-MDT0000 [ 1323.293551] Lustre: server umount lustre-MDT0000 complete [ 1336.162489] LDISKFS-fs (dm-0): recovery complete [ 1336.164085] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1336.982574] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1341.282835] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1379 to 0x280000401:1409) [ 1341.282841] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1379 to 0x2c0000401:1409) [ 1341.824510] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1342.189869] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1346.349544] Lustre: DEBUG MARKER: == replay-single test 30: open(O_CREAT) two, unlink two, replay, close two (test mds_cleanup_orphans) ========================================================== 10:50:47 (1713365447) [ 1348.254491] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1348.728486] Lustre: Failing over lustre-MDT0000 [ 1348.775874] Lustre: server umount lustre-MDT0000 complete [ 1361.664672] LDISKFS-fs (dm-0): recovery complete [ 1361.665861] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1362.474036] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1366.768902] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1411 to 0x2c0000401:1441) [ 1366.768934] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1411 to 0x280000401:1441) [ 1367.341046] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1367.763650] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1372.058983] Lustre: DEBUG MARKER: == replay-single test 31: open(O_CREAT) two, unlink one, |X| unlink one, close two (test mds_cleanup_orphans) ========================================================== 10:51:12 (1713365472) [ 1374.321422] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1374.890340] Lustre: Failing over lustre-MDT0000 [ 1374.948885] Lustre: server umount lustre-MDT0000 complete [ 1376.765759] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1388.222463] LDISKFS-fs (dm-0): recovery complete [ 1388.223906] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1389.163983] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1393.344327] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1443 to 0x280000401:1473) [ 1393.344329] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1443 to 0x2c0000401:1473) [ 1393.870802] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1394.247821] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1398.381197] Lustre: DEBUG MARKER: == replay-single test 32: close() notices client eviction; close() after client eviction ========================================================== 10:51:39 (1713365499) [ 1398.646823] Lustre: 2618:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 at adminstrative request [ 1403.950632] Lustre: DEBUG MARKER: == replay-single test 33a: fid seq shouldn't be reused after abort recovery ========================================================== 10:51:44 (1713365504) [ 1405.649480] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1406.188474] Lustre: Failing over lustre-MDT0000 [ 1406.243292] Lustre: server umount lustre-MDT0000 complete [ 1409.430750] LDISKFS-fs (dm-0): recovery complete [ 1409.432614] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1409.510868] Lustre: lustre-MDT0000: Aborting client recovery [ 1409.512950] LustreError: 4741:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1409.515293] Lustre: 4770:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1409.517589] Lustre: 4770:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@ [ 1409.520353] Lustre: lustre-MDT0000: disconnecting 2 stale clients [ 1409.522771] Lustre: lustre-MDT0000-osd: cancel update llog [0x200000400:0x1:0x0] [ 1409.526858] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000401:0x1:0x0] [ 1409.544960] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1443 to 0x2c0000401:1505) [ 1409.545769] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1475 to 0x280000401:1505) [ 1410.337496] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1414.510344] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 1414.520987] Lustre: lustre-MDT0000-osp-MDT0001: Connection restored to (at 0@lo) [ 1414.525553] Lustre: Skipped 82 previous similar messages [ 1422.267693] Lustre: DEBUG MARKER: == replay-single test 33b: test fid seq allocation ======= 10:52:03 (1713365523) [ 1424.243122] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1424.734400] Lustre: Failing over lustre-MDT0000 [ 1424.799518] Lustre: server umount lustre-MDT0000 complete [ 1428.245154] LDISKFS-fs (dm-0): recovery complete [ 1428.246462] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1428.325754] Lustre: *** cfs_fail_loc=1311, val=0*** [ 1428.330214] Lustre: lustre-MDT0000: Aborting client recovery [ 1428.331619] LustreError: 7529:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1428.333417] Lustre: 7558:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1428.336788] Lustre: 7558:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1428.339683] Lustre: 7558:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client lustre-MDT0001-mdtlov_UUID@ [ 1428.343462] Lustre: 7558:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 1428.346604] Lustre: lustre-MDT0000: disconnecting 2 stale clients [ 1428.349295] Lustre: lustre-MDT0000-osd: cancel update llog [0x200015bc0:0x1:0x0] [ 1428.353431] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000403:0x1:0x0] [ 1428.370878] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1475 to 0x280000401:1537) [ 1428.370880] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1443 to 0x2c0000401:1537) [ 1429.127762] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1433.326743] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 1438.075425] Lustre: *** cfs_fail_loc=1311, val=0*** [ 1438.077692] Lustre: Skipped 1 previous similar message [ 1441.590776] Lustre: DEBUG MARKER: == replay-single test 34: abort recovery before client does replay (test mds_cleanup_orphans) ========================================================== 10:52:22 (1713365542) [ 1443.742105] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1444.419968] Lustre: Failing over lustre-MDT0000 [ 1444.503380] Lustre: server umount lustre-MDT0000 complete [ 1448.811038] LDISKFS-fs (dm-0): recovery complete [ 1448.813784] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1448.919586] Lustre: lustre-MDT0000: Aborting client recovery [ 1448.920908] LustreError: 10334:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1448.923114] Lustre: 10363:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1448.926028] Lustre: 10363:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1448.928403] Lustre: 10363:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client lustre-MDT0001-mdtlov_UUID@ [ 1448.932761] Lustre: 10363:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 1448.935463] Lustre: lustre-MDT0000: disconnecting 2 stale clients [ 1448.938817] Lustre: lustre-MDT0000-osd: cancel update llog [0x200016778:0x1:0x0] [ 1448.943979] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x2400007e8:0x1:0x0] [ 1448.964443] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1544 to 0x2c0000401:1569) [ 1448.964490] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1543 to 0x280000401:1569) [ 1449.970006] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1453.917887] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 1461.770164] Lustre: DEBUG MARKER: == replay-single test 35: test recovery from llog for unlink op ========================================================== 10:52:42 (1713365562) [ 1462.020930] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 1462.022165] LustreError: 8077:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a0c7a680 x1796592502301248/t201863462916(0) o36->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:284/0 lens 512/456 e 0 to 0 dl 1713365574 ref 1 fl Interpret:/200/0 rc 0/0 job:'rm.0' uid:0 gid:0 [ 1464.475460] Lustre: Failing over lustre-MDT0000 [ 1464.529029] Lustre: server umount lustre-MDT0000 complete [ 1466.670571] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1466.754514] Lustre: lustre-MDT0000: Aborting client recovery [ 1466.756300] LustreError: 12389:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1466.761554] Lustre: 12419:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1466.766257] Lustre: 12419:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1466.770215] Lustre: 12419:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client lustre-MDT0001-mdtlov_UUID@ [ 1466.774000] Lustre: 12419:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 1466.777056] Lustre: lustre-MDT0000: disconnecting 2 stale clients [ 1466.779946] Lustre: lustre-MDT0000-osd: cancel update llog [0x200017330:0x1:0x0] [ 1466.785978] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x2400007e9:0x1:0x0] [ 1466.811416] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1571 to 0x280000401:1601) [ 1466.811458] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1544 to 0x2c0000401:1601) [ 1467.488558] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1471.759792] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 1479.769585] Lustre: DEBUG MARKER: SKIP: replay-single test_36 skipping ALWAYS excluded test 36 [ 1481.121825] Lustre: DEBUG MARKER: == replay-single test 37: abort recovery before client does replay (test mds_cleanup_orphans for directories) ========================================================== 10:53:01 (1713365581) [ 1482.952028] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1483.631400] Lustre: Failing over lustre-MDT0000 [ 1483.678664] Lustre: server umount lustre-MDT0000 complete [ 1486.665433] LDISKFS-fs (dm-0): recovery complete [ 1486.666616] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1486.690356] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1486.692617] LustreError: Skipped 20 previous similar messages [ 1486.749577] Lustre: lustre-MDT0000: Aborting client recovery [ 1486.750679] LustreError: 15285:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1486.752511] Lustre: 15314:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1486.755299] Lustre: 15314:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1486.757055] Lustre: 15314:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client lustre-MDT0001-mdtlov_UUID@ [ 1486.761793] Lustre: 15314:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 1486.765361] Lustre: lustre-MDT0000: disconnecting 2 stale clients [ 1486.768344] Lustre: lustre-MDT0000-osd: cancel update llog [0x200017b00:0x1:0x0] [ 1486.772882] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x2400007ea:0x1:0x0] [ 1486.791261] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:1571 to 0x280000401:1633) [ 1486.791293] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:1544 to 0x2c0000401:1633) [ 1487.507716] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1491.759842] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 1501.333824] Lustre: DEBUG MARKER: == replay-single test 38: test recovery from unlink llog (test llog_gen_rec) ========================================================== 10:53:22 (1713365602) [ 1507.603317] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1508.085904] Lustre: Failing over lustre-MDT0000 [ 1508.149201] Lustre: server umount lustre-MDT0000 complete [ 1521.705612] LDISKFS-fs (dm-0): recovery complete [ 1521.707174] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1522.518354] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1525.964603] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1525.967036] Lustre: Skipped 17 previous similar messages [ 1526.813573] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. 
[ 1526.816134] Lustre: Skipped 17 previous similar messages [ 1526.829534] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:2034 to 0x280000401:2049) [ 1526.832483] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:2034 to 0x2c0000401:2049) [ 1527.359853] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1527.729807] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1536.598154] Lustre: DEBUG MARKER: == replay-single test 39: test recovery from unlink llog (test llog_gen_rec) ========================================================== 10:53:57 (1713365637) [ 1541.754832] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1544.932649] Lustre: Failing over lustre-MDT0000 [ 1545.026135] Lustre: server umount lustre-MDT0000 complete [ 1558.341256] LDISKFS-fs (dm-0): recovery complete [ 1558.342313] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1559.307617] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1563.977037] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:2450 to 0x280000401:2465) [ 1563.977043] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:2450 to 0x2c0000401:2465) [ 1564.676041] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1565.135595] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1575.993564] Lustre: DEBUG MARKER: == replay-single test 41: read from a valid osc while other oscs are invalid ========================================================== 10:54:36 (1713365676) [ 1576.801494] Lustre: setting import lustre-OST0001_UUID INACTIVE by administrator request [ 1577.137362] Lustre: lustre-OST0001: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting [ 1577.142297] LustreError: 167-0: lustre-OST0001-osc-MDT0000: This client was evicted by lustre-OST0001; in progress operations using this service will fail. [ 1581.267669] Lustre: DEBUG MARKER: == replay-single test 42: recovery after ost failure ===== 10:54:41 (1713365681) [ 1588.433618] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 1592.173394] Lustre: Failing over lustre-OST0000 [ 1592.174070] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_destroy to node 0@lo failed: rc = -19 [ 1592.200733] Lustre: server umount lustre-OST0000 complete [ 1605.985896] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 1606.074925] LDISKFS-fs (dm-2): recovery complete [ 1606.076265] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 1606.116573] mount.lustre (25848) used greatest stack depth: 9984 bytes left [ 1607.522548] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1655.402727] Lustre: DEBUG MARKER: == replay-single test 43: mds osc import failure during recovery; don't LBUG ========================================================== 10:55:56 (1713365756) [ 1657.742252] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 1658.743355] Lustre: Failing over lustre-MDT0000 [ 1658.800861] Lustre: server umount lustre-MDT0000 complete [ 1659.117562] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1659.120572] LustreError: Skipped 4 previous similar messages [ 1671.795135] LDISKFS-fs (dm-0): recovery complete [ 1671.796264] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1672.614614] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1676.902163] Lustre: *** cfs_fail_loc=204, val=2147483648*** [ 1676.902218] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:2866 to 0x2c0000401:2881) [ 1677.418640] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1677.807487] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1692.048302] Lustre: DEBUG MARKER: == replay-single test 44a: race in target handle connect ========================================================== 10:56:32 (1713365792) [ 1692.901297] Lustre: 28837:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713365778/real 1713365778] req@ffff88012499d180 x1796592507918016/t0(0) o5->lustre-OST0000-osc-MDT0000@0@lo:28/4 lens 432/432 e 0 to 1 dl 1713365794 ref 2 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'osp-pre-0-0.0' uid:0 gid:0 [ 1692.913258] LustreError: 28837:0:(osp_precreate.c:992:osp_precreate_cleanup_orphans()) lustre-OST0000-osc-MDT0000: cannot cleanup orphans: rc = -11 [ 1692.913784] Lustre: lustre-OST0000: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting [ 1693.643694] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1693.920705] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:2867 to 0x280000401:2913) [ 1698.647243] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1698.649673] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) reconnecting [ 1699.105712] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1704.108324] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1704.111245] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) reconnecting [ 1704.633318] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1709.637440] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1709.643791] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) 
reconnecting [ 1710.261221] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1715.263384] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1715.818728] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1720.821359] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1720.826752] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) reconnecting [ 1720.832021] Lustre: Skipped 1 previous similar message [ 1727.029561] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1727.031757] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) Skipped 1 previous similar message [ 1732.033320] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1732.038919] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) Skipped 1 previous similar message [ 1737.738366] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) reconnecting [ 1737.740374] Lustre: Skipped 2 previous similar messages [ 1743.640871] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_race id 701 sleeping [ 1743.642609] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) Skipped 2 previous similar messages [ 1748.644320] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) cfs_fail_race id 701 awake: rc=0 [ 1748.647387] LustreError: 8077:0:(ldlm_lib.c:1106:target_handle_connect()) Skipped 2 previous similar messages [ 1752.110309] Lustre: DEBUG MARKER: == replay-single test 44b: race in target handle connect ========================================================== 10:57:32 (1713365852) [ 1752.566161] LustreError: 8077:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1762.567686] Lustre: lustre-MDT0000: Export ffff88009e8b9000 already connecting from 192.168.202.34@tcp [ 1767.580743] Lustre: lustre-MDT0000: Export ffff88009e8b9000 already connecting from 192.168.202.34@tcp [ 1772.589312] Lustre: lustre-MDT0000: Export ffff88009e8b9000 already connecting from 192.168.202.34@tcp [ 1777.596646] Lustre: lustre-MDT0000: Export ffff88009e8b9000 already connecting from 192.168.202.34@tcp [ 1782.604583] Lustre: lustre-MDT0000: Export ffff88009e8b9000 already connecting from 192.168.202.34@tcp [ 1792.567282] LustreError: 8077:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 awake [ 1792.570414] Lustre: 8077:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/20s); client may timeout req@ffff88009abd8700 x1796592503616512/t0(0) o38->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:0/0 lens 520/416 e 0 to 0 dl 1713365874 ref 1 fl Complete:H/200/0 rc 0/0 job:'lctl.0' uid:0 gid:0 [ 1792.620611] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) reconnecting [ 1792.624235] Lustre: Skipped 3 previous similar messages [ 1793.087394] LustreError: 6931:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1803.088158] Lustre: lustre-MDT0000: Export ffff88009e8b9000 already connecting from 192.168.202.34@tcp [ 1803.093854] Lustre: Skipped 1 previous similar message [ 1823.116564] Lustre: lustre-MDT0000: Export ffff88009e8b9000 already connecting from 192.168.202.34@tcp [ 1823.119404] Lustre: Skipped 
4 previous similar messages [ 1833.089281] LustreError: 6931:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 awake [ 1833.092214] Lustre: 6931:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/20s); client may timeout req@ffff88009cada300 x1796592503618944/t0(0) o38->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:0/0 lens 520/416 e 0 to 0 dl 1713365914 ref 1 fl Complete:H/200/0 rc 0/0 job:'lctl.0' uid:0 gid:0 [ 1833.132447] LustreError: 7716:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1858.133008] Lustre: lustre-MDT0000: Export ffff88009e8b9000 already connecting from 192.168.202.34@tcp [ 1858.135634] Lustre: Skipped 1 previous similar message [ 1873.135323] LustreError: 7716:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 awake [ 1873.139111] Lustre: 7716:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/20s); client may timeout req@ffff88012c0bf480 x1796592503620480/t0(0) o38->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:0/0 lens 520/416 e 0 to 0 dl 1713365954 ref 1 fl Complete:H/200/0 rc 0/0 job:'kworker.0' uid:0 gid:0 [ 1873.164955] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) reconnecting [ 1873.168658] Lustre: Skipped 2 previous similar messages [ 1873.170785] LustreError: 7716:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1913.174468] LustreError: 7716:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 awake [ 1913.179755] Lustre: 7716:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/20s); client may timeout req@ffff88012b66f480 x1796592503622016/t0(0) o38->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:0/0 lens 520/416 e 0 to 0 dl 1713365994 ref 1 fl Complete:H/200/0 rc 0/0 job:'kworker.0' uid:0 gid:0 [ 1913.196773] LustreError: 7716:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1938.197502] Lustre: lustre-MDT0000: Export ffff88009e8b9000 already connecting from 192.168.202.34@tcp [ 1938.203059] Lustre: Skipped 5 previous similar messages [ 1953.207366] LustreError: 7716:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 awake [ 1953.211938] Lustre: 7716:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/20s); client may timeout req@ffff880124839500 x1796592503623552/t0(0) o38->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:0/0 lens 520/416 e 0 to 0 dl 1713366034 ref 1 fl Complete:H/200/0 rc 0/0 job:'kworker.0' uid:0 gid:0 [ 1953.229001] LustreError: 6930:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout id 704 sleeping for 40000ms [ 1978.133333] LustreError: 6930:0:(ldlm_lib.c:1359:target_handle_connect()) cfs_fail_timeout interrupted [ 1978.138148] Lustre: 6930:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20/5s); client may timeout req@ffff880086ad7480 x1796592503625088/t0(0) o38->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:0/0 lens 520/416 e 0 to 0 dl 1713366074 ref 1 fl Complete:H/200/0 rc 0/0 job:'kworker.0' uid:0 gid:0 [ 1981.359636] Lustre: DEBUG MARKER: == replay-single test 44c: race in target handle connect ========================================================== 11:01:22 (1713366082) [ 1983.954570] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on 
lustre-MDT0000 [ 1984.885531] Lustre: Failing over lustre-MDT0000 [ 1984.941283] Lustre: server umount lustre-MDT0000 complete [ 1987.389462] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1987.391355] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1987.394081] Lustre: Skipped 48 previous similar messages [ 1987.395252] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1987.399722] LustreError: Skipped 146 previous similar messages [ 1988.138860] LDISKFS-fs (dm-0): recovery complete [ 1988.140065] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1988.212169] Lustre: *** cfs_fail_loc=712, val=0*** [ 1988.213804] LustreError: 9200:0:(service.c:1236:ptlrpc_check_req()) @@@ Invalid replay without recovery req@ffff88009dead500 x1796592507991872/t0(0) o400->lustre-MDT0000-mdtlov_UUID@0@lo:0/0 lens 224/0 e 0 to 0 dl 0 ref 1 fl New:/2c0/ffffffff rc 0/-1 job:'ptlrpcd_rcv.0' uid:0 gid:0 [ 1988.221494] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1988.223182] Lustre: Skipped 13 previous similar messages [ 1988.228935] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1988.229028] Lustre: lustre-MDT0000: Aborting client recovery [ 1988.229031] LustreError: 3636:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1988.234329] Lustre: Skipped 22 previous similar messages [ 1988.235265] Lustre: 3665:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1988.237115] Lustre: 3665:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1988.239374] Lustre: 3665:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client lustre-MDT0001-mdtlov_UUID@ [ 1988.242148] Lustre: 3665:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 1988.244617] Lustre: lustre-MDT0000: disconnecting 2 stale clients [ 1988.246735] Lustre: lustre-MDT0000-osd: cancel update llog [0x2000182d0:0x1:0x0] [ 1988.250680] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x2400007eb:0x1:0x0] [ 1988.274537] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:2867 to 0x280000401:2945) [ 1988.274538] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:2866 to 0x2c0000401:2913) [ 1989.029966] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 1993.229795] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 1998.799655] Lustre: Failing over lustre-MDT0000 [ 1998.877991] Lustre: server umount lustre-MDT0000 complete [ 2011.691576] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2012.487565] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2016.782759] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo) [ 2016.784598] Lustre: Skipped 36 previous similar messages [ 2016.804676] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:2867 to 0x280000401:2977) [ 2016.804681] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:2866 to 0x2c0000401:2945) [ 2017.385252] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2017.746152] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2021.840886] Lustre: DEBUG MARKER: == replay-single test 45: Handle failed close ============ 11:02:02 (1713366122) [ 2021.863370] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) reconnecting [ 2021.866141] Lustre: Skipped 2 previous similar messages [ 2025.857820] Lustre: DEBUG MARKER: == replay-single test 46: Don't leak file handle after open resend (3325) ========================================================== 11:02:06 (1713366126) [ 2026.106201] Lustre: *** cfs_fail_loc=122, val=2147483648*** [ 2026.107607] LustreError: 6939:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009f6ef100 x1796592503652416/t0(0) o700->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:93/0 lens 264/248 e 0 to 0 dl 1713366138 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 2042.957022] Lustre: Failing over lustre-MDT0000 [ 2043.017913] Lustre: server umount lustre-MDT0000 complete [ 2055.051367] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2055.882221] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2060.157557] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:2947 to 0x2c0000401:2977) [ 2060.160785] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:2979 to 0x280000401:3009) [ 2060.692957] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2061.064102] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2065.892911] Lustre: DEBUG MARKER: == replay-single test 47: MDS->OSC failure during precreate cleanup (2824) ========================================================== 11:02:46 (1713366166) [ 2066.494289] Lustre: Failing over lustre-OST0000 [ 2066.519529] Lustre: server umount lustre-OST0000 complete [ 2068.813597] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 2078.805491] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2078.808138] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 2080.274618] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2082.790226] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2083.321316] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2149.237486] Lustre: DEBUG MARKER: == replay-single test 48: MDS->OSC failure during precreate cleanup (2824) ========================================================== 11:04:10 (1713366250) [ 2151.162343] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2151.702632] Lustre: Failing over lustre-MDT0000 [ 2151.750880] Lustre: server umount lustre-MDT0000 complete [ 2164.582027] LDISKFS-fs (dm-0): recovery complete [ 2164.583543] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2164.612238] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2164.615653] LustreError: Skipped 6 previous similar messages [ 2165.373431] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2169.324599] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 2169.328621] Lustre: Skipped 6 previous similar messages [ 2169.734467] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 2169.737318] Lustre: Skipped 6 previous similar messages [ 2169.751387] Lustre: *** cfs_fail_loc=216, val=0*** [ 2169.751395] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:3030 to 0x280000401:3073) [ 2169.754904] LustreError: 12376:0:(osp_precreate.c:992:osp_precreate_cleanup_orphans()) lustre-OST0001-osc-MDT0000: cannot cleanup orphans: rc = -30 [ 2170.758496] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:2998 to 0x2c0000401:3041) [ 2232.899021] Lustre: DEBUG MARKER: == replay-single test 50: Double OSC recovery, don't LASSERT (3812) ========================================================== 11:05:33 (1713366333) [ 2241.556070] Lustre: DEBUG MARKER: == replay-single test 52: time out lock replay (3764) ==== 11:05:42 (1713366342) [ 2242.272641] Lustre: Failing over lustre-MDT0000 [ 2242.321360] Lustre: server umount lustre-MDT0000 complete [ 2254.234202] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2254.988398] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2259.329238] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 2259.331580] LustreError: 15138:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009ea63100 x1796592503710848/t0(0) o101->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:366/0 lens 328/344 e 0 to 0 dl 1713366411 ref 1 fl Complete:/240/0 rc 0/0 job:'ldlm_lock_repla.0' uid:0 gid:0 [ 2314.467506] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:54 [ 2314.485272] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:3085 to 0x280000401:3105) [ 2314.485274] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:3052 to 0x2c0000401:3073) [ 2315.038860] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2315.384497] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2319.882543] Lustre: DEBUG MARKER: == replay-single test 53a: |X| close request while two MDC requests in flight ========================================================== 11:07:00 (1713366420) [ 2321.158486] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 2323.267780] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2323.768767] Lustre: Failing over lustre-MDT0000 [ 2323.819523] Lustre: server umount lustre-MDT0000 complete [ 2337.474797] LDISKFS-fs (dm-0): recovery complete [ 2337.477013] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2338.314973] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2342.592404] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:3085 to 0x280000401:3137) [ 2342.592410] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:3075 to 0x2c0000401:3105) [ 2343.147084] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2343.507036] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2347.919118] Lustre: DEBUG MARKER: == replay-single test 53b: |X| open request while two MDC requests in flight ========================================================== 11:07:28 (1713366448) [ 2348.196342] Lustre: *** cfs_fail_loc=107, val=2147483648*** [ 2351.247855] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2351.743051] Lustre: Failing over lustre-MDT0000 [ 2351.797075] Lustre: server umount lustre-MDT0000 complete [ 2364.712705] LDISKFS-fs (dm-0): recovery complete [ 2364.713863] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2365.532225] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2369.834021] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:3139 to 0x280000401:3169) [ 2369.834033] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:3075 to 0x2c0000401:3137) [ 2370.548085] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2370.952232] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2375.344733] Lustre: DEBUG MARKER: == replay-single test 53c: |X| open request and close request while two MDC requests in flight ========================================================== 11:07:56 (1713366476) [ 2375.608451] Lustre: *** cfs_fail_loc=107, val=2147483648*** [ 2379.465581] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2380.025793] Lustre: Failing over lustre-MDT0000 [ 2380.096326] Lustre: server umount lustre-MDT0000 complete [ 2393.480907] LDISKFS-fs (dm-0): recovery complete [ 2393.482061] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2394.244037] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2398.588411] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:3075 to 0x2c0000401:3169) [ 2398.588416] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:3171 to 0x280000401:3201) [ 2403.771664] Lustre: DEBUG MARKER: == replay-single test 53d: close reply while two MDC requests in flight ========================================================== 11:08:24 (1713366504) [ 2405.038799] Lustre: *** cfs_fail_loc=13b, val=315*** [ 2405.040130] Lustre: *** cfs_fail_loc=13b, val=2147483648*** [ 2405.041466] LustreError: 6933:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012e19cc50 x1796592503735808/t257698037777(0) o35->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:472/0 lens 392/456 e 0 to 0 dl 1713366517 ref 1 fl Interpret:/200/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 2405.741717] Lustre: Failing over lustre-MDT0000 [ 2405.804485] Lustre: server umount lustre-MDT0000 complete [ 2408.589644] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 2417.773489] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2418.784603] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2422.870502] Lustre: 6933:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012c7d3480 x1796592503735808/t257698037777(0) o35->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:490/0 lens 392/456 e 0 to 0 dl 1713366535 ref 1 fl Interpret:/202/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 2422.881632] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:3203 to 0x280000401:3233) [ 2422.881634] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:3075 to 0x2c0000401:3201) [ 2423.412526] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2423.781846] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2428.304794] Lustre: DEBUG MARKER: == replay-single test 53e: |X| open reply while two MDC requests in flight ========================================================== 11:08:49 (1713366529) [ 2428.591290] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 2428.592773] LustreError: 6931:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012b524a80 x1796592503741952/t261993005072(0) o36->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:540/0 lens 504/448 e 0 to 0 dl 1713366585 ref 1 fl Interpret:/200/0 rc 0/0 job:'mcreate.0' uid:0 gid:0 [ 2432.167372] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2432.713684] Lustre: Failing over lustre-MDT0000 [ 2432.776816] Lustre: server umount lustre-MDT0000 complete [ 2445.764193] LDISKFS-fs (dm-0): recovery complete [ 2445.765465] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2446.550572] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2450.853061] Lustre: 8077:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012b66fb80 x1796592503741952/t261993005072(0) o36->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:562/0 lens 504/2880 e 0 to 0 dl 1713366607 ref 1 fl Interpret:/202/0 rc 0/0 job:'mcreate.0' uid:0 gid:0 [ 2450.863349] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:3203 to 0x2c0000401:3233) [ 2450.863357] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:3203 to 0x280000401:3265) [ 2451.402682] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2451.751964] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2455.988740] Lustre: DEBUG MARKER: == replay-single test 53f: |X| open reply and close reply while two MDC requests in flight ========================================================== 11:09:16 (1713366556) [ 2456.257012] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 2456.258665] LustreError: 7716:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012bf7dc00 x1796592503748864/t266287972368(0) o36->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:567/0 lens 504/448 e 0 to 0 dl 1713366612 ref 1 fl Interpret:/200/0 rc 0/0 job:'mcreate.0' uid:0 gid:0 [ 2457.477845] Lustre: *** cfs_fail_loc=13b, val=315*** [ 2459.333586] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2459.830585] Lustre: Failing over lustre-MDT0000 [ 2459.885653] Lustre: server umount lustre-MDT0000 complete [ 2460.861595] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 2472.727984] LDISKFS-fs (dm-0): recovery complete [ 2472.729205] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2473.510883] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2477.812412] Lustre: 6933:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009dcd8380 x1796592503748992/t266287972369(0) o35->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:545/0 lens 392/456 e 0 to 0 dl 1713366590 ref 1 fl Interpret:/202/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 2477.819124] Lustre: 6933:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 2477.822591] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:3235 to 0x2c0000401:3265) [ 2477.822593] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:3203 to 0x280000401:3297) [ 2483.026013] Lustre: DEBUG MARKER: == replay-single test 53g: |X| drop open reply and close request while close and open are both in flight ========================================================== 11:09:43 (1713366583) [ 2483.274685] LustreError: 10476:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009ea61c00 x1796592503754944/t270582939664(0) o36->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:594/0 lens 504/448 e 0 to 0 dl 1713366639 ref 1 fl Interpret:/200/0 rc 0/0 job:'mcreate.0' uid:0 gid:0 [ 2483.280520] LustreError: 10476:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 2484.495372] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 2484.496802] Lustre: Skipped 1 previous similar message [ 2486.673147] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2487.157694] Lustre: Failing over lustre-MDT0000 [ 2487.209048] Lustre: server umount lustre-MDT0000 complete [ 2500.077818] LDISKFS-fs (dm-0): recovery complete [ 2500.080298] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2500.874672] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2505.206573] Lustre: 10476:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a0c78a80 x1796592503754944/t270582939664(0) o36->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:616/0 lens 504/2880 e 0 to 0 dl 1713366661 ref 1 fl Interpret:/202/0 rc 0/0 job:'mcreate.0' uid:0 gid:0 [ 2505.219814] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:3203 to 0x280000401:3329) [ 2505.219816] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:3267 to 0x2c0000401:3297) [ 2510.211049] Lustre: DEBUG MARKER: == replay-single test 53h: open request and close reply while two MDC requests in flight ========================================================== 11:10:11 (1713366611) [ 2510.470063] Lustre: *** cfs_fail_loc=107, val=2147483648*** [ 2511.692283] Lustre: *** cfs_fail_loc=13b, val=315*** [ 2511.694434] Lustre: *** cfs_fail_loc=13b, val=2147483648*** [ 2511.695811] Lustre: Skipped 2 previous similar messages [ 2514.514638] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2514.994543] Lustre: Failing over lustre-MDT0000 [ 2515.040661] Lustre: server umount lustre-MDT0000 complete [ 2527.886354] LDISKFS-fs (dm-0): recovery complete [ 2527.887665] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2528.687109] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2532.982370] Lustre: 6933:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009deac700 x1796592503761088/t274877906960(0) o35->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:600/0 lens 392/456 e 0 to 0 dl 1713366645 ref 1 fl Interpret:/202/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 2532.991519] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:3203 to 0x280000401:3361) [ 2532.991990] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:3299 to 0x2c0000401:3329) [ 2538.145006] Lustre: DEBUG MARKER: == replay-single test 55: let MDS_CHECK_RESENT return the original return code instead of 0 ========================================================== 11:10:38 (1713366638) [ 2538.401637] Lustre: *** cfs_fail_loc=12b, val=2147483991*** [ 2538.403085] LustreError: 7716:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009ecc0000 x1796592503766080/t279172874255(0) o101->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:650/0 lens 664/608 e 0 to 0 dl 1713366695 ref 1 fl Interpret:/200/0 rc 301/0 job:'touch.0' uid:0 gid:0 [ 2538.409268] LustreError: 7716:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 2598.410732] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) reconnecting [ 2598.413153] Lustre: Skipped 4 previous similar messages [ 2598.415315] Lustre: 8077:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800962c9180 x1796592503766080/t279172874255(0) o101->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:710/0 lens 664/3488 e 0 to 0 dl 
1713366755 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 2601.421087] Lustre: DEBUG MARKER: == replay-single test 56: don't replay a symlink open request (3440) ========================================================== 11:11:42 (1713366702) [ 2603.311668] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2603.798845] Lustre: Failing over lustre-MDT0000 [ 2603.854943] Lustre: server umount lustre-MDT0000 complete [ 2608.093623] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2608.093624] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2608.093629] Lustre: Skipped 55 previous similar messages [ 2608.093759] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2608.093760] LustreError: Skipped 161 previous similar messages [ 2608.104970] Lustre: Skipped 2 previous similar messages [ 2616.751942] LDISKFS-fs (dm-0): recovery complete [ 2616.754411] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2616.833051] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2616.834706] Lustre: Skipped 13 previous similar messages [ 2616.841849] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 2616.843980] Lustre: Skipped 15 previous similar messages [ 2617.562398] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2621.838850] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.202.134@tcp (at 0@lo) [ 2621.842589] Lustre: Skipped 51 previous similar messages [ 2621.860235] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:3331 to 0x2c0000401:3361) [ 2621.860237] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:3203 to 0x280000401:3393) [ 2622.375945] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2622.750116] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2636.892278] Lustre: DEBUG MARKER: == replay-single test 57: test recovery from llog for setattr op ========================================================== 11:12:17 (1713366737) [ 2638.955165] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2639.416579] Lustre: Failing over lustre-MDT0000 [ 2639.461318] Lustre: server umount lustre-MDT0000 complete [ 2652.315385] LDISKFS-fs (dm-0): recovery complete [ 2652.316744] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2653.114741] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2657.430452] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:3395 to 0x280000401:3425) [ 2657.430467] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:3331 to 0x2c0000401:3393) [ 2657.953078] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2658.318712] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2660.024920] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 [ 2665.739673] Lustre: DEBUG MARKER: == replay-single test 58a: test recovery from llog for setattr op (test llog_gen_rec) ========================================================== 11:12:46 (1713366766) [ 2673.077818] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2673.561844] Lustre: Failing over lustre-MDT0000 [ 2673.642095] Lustre: server umount lustre-MDT0000 complete [ 2686.511751] LDISKFS-fs (dm-0): recovery complete [ 2686.513361] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2687.293243] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2691.657884] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:4676 to 0x280000401:4705) [ 2691.661381] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:4644 to 0x2c0000401:4673) [ 2692.163705] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2692.519836] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2709.591748] Lustre: DEBUG MARKER: == replay-single test 58b: test replay of setxattr op ==== 11:13:30 (1713366810) [ 2712.625244] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2713.192601] Lustre: Failing over lustre-MDT0000 [ 2713.257269] Lustre: server umount lustre-MDT0000 complete [ 2727.333121] LDISKFS-fs (dm-0): recovery complete [ 2727.335520] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2728.536869] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2732.499005] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:4644 to 0x2c0000401:4705) [ 2732.499017] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:4707 to 0x280000401:4737) [ 2733.171310] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2733.534386] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2735.512631] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount FULL mgc.*.mgs_server_uuid [ 2735.907265] Lustre: DEBUG MARKER: mgc.*.mgs_server_uuid in FULL state after 0 sec [ 2739.234910] Lustre: DEBUG MARKER: == replay-single test 58c: resend/reconstruct setxattr op ========================================================== 11:14:00 (1713366840) [ 2745.082959] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 2805.589631] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 2805.590719] Lustre: Skipped 1 previous similar message [ 2805.591742] LustreError: 10476:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88006f104700 x1796592505244416/t296352743433(0) o36->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:162/0 lens 66040/440 e 0 to 0 dl 1713366962 ref 1 fl Interpret:/200/0 rc 0/0 job:'setfattr.0' uid:0 gid:0 [ 2865.589239] Lustre: 6932:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880076c93450 x1796592505244416/t296352743433(0) o36->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:222/0 lens 66040/440 e 0 to 0 dl 1713367022 ref 1 fl Interpret:/202/0 rc 0/0 job:'setfattr.0' uid:0 gid:0 [ 2868.035860] Lustre: DEBUG MARKER: SKIP: replay-single test_59 skipping ALWAYS excluded test 59 [ 2869.396304] Lustre: DEBUG MARKER: == replay-single test 60: test llog post recovery init vs llog unlink ========================================================== 11:16:10 (1713366970) [ 2872.503126] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 2873.118437] Lustre: Failing over lustre-MDT0000 [ 2873.173496] Lustre: server umount lustre-MDT0000 complete [ 2886.036491] LDISKFS-fs (dm-0): recovery complete [ 2886.038382] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2886.066315] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2886.068619] LustreError: Skipped 13 previous similar messages [ 2886.802859] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2890.636657] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 2890.638694] Lustre: Skipped 13 previous similar messages [ 2891.260340] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. 
[ 2891.263401] Lustre: Skipped 13 previous similar messages [ 2891.279153] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:4807 to 0x2c0000401:4833) [ 2891.282339] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:4838 to 0x280000401:4865) [ 2891.777476] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2892.137376] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2896.548505] Lustre: DEBUG MARKER: == replay-single test 61a: test race llog recovery vs llog cleanup ========================================================== 11:16:37 (1713366997) [ 2900.843120] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 2903.302408] Lustre: Failing over lustre-OST0000 [ 2903.317541] Lustre: server umount lustre-OST0000 complete [ 2916.118673] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2916.151278] LDISKFS-fs (dm-2): recovery complete [ 2916.153425] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 2917.327273] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2928.275261] Lustre: Failing over lustre-OST0000 [ 2928.293377] Lustre: server umount lustre-OST0000 complete [ 2940.213729] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2940.216376] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 2941.520289] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2943.756176] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2944.127321] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2978.674761] Lustre: DEBUG MARKER: == replay-single test 61b: test race mds llog sync vs llog cleanup ========================================================== 11:17:59 (1713367079) [ 2979.393579] Lustre: Failing over lustre-MDT0000 [ 2979.446011] Lustre: server umount lustre-MDT0000 complete [ 2991.419438] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2992.239233] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 2996.541532] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:4807 to 0x2c0000401:4865) [ 2996.541535] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:4838 to 0x280000401:4897) [ 3003.226030] Lustre: Failing over lustre-MDT0000 [ 3003.287617] Lustre: server umount lustre-MDT0000 complete [ 3006.541728] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3006.543737] LustreError: Skipped 1 previous similar message [ 3015.226466] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3016.034902] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3020.356463] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:4807 to 0x2c0000401:4897) [ 3020.356471] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:4838 to 0x280000401:4929) [ 3020.889467] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3021.258464] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3025.390451] Lustre: DEBUG MARKER: == replay-single test 61c: test race mds llog sync vs llog cleanup ========================================================== 11:18:46 (1713367126) [ 3036.234103] Lustre: Failing over lustre-OST0000 [ 3036.250099] Lustre: server umount lustre-OST0000 complete [ 3048.225538] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3048.230389] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3049.354035] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3051.481412] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3051.869869] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3056.494521] Lustre: DEBUG MARKER: == replay-single test 61d: error in llog_setup should cleanup the llog context correctly ========================================================== 11:19:17 (1713367157) [ 3056.958714] Lustre: Failing over lustre-MDT0000 [ 3057.008966] Lustre: server umount lustre-MDT0000 complete [ 3059.792097] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3059.835557] Lustre: *** cfs_fail_loc=605, val=0*** [ 3059.837257] LustreError: 32187:0:(llog_obd.c:207:llog_setup()) MGS: ctxt 0 lop_setup=ffffffffa0549b10 failed: rc = -95 [ 3059.840930] LustreError: 32187:0:(obd_config.c:797:class_setup()) setup MGS failed (-95) [ 3059.843595] LustreError: 32187:0:(obd_mount.c:215:lustre_start_simple()) MGS setup error -95 [ 3059.846475] LustreError: 32187:0:(tgt_mount.c:135:server_deregister_mount()) MGS not registered [ 3059.849395] LustreError: 15e-a: Failed to start MGS 'MGS' (-95). Is the 'mgs' module loaded? [ 3059.852224] LustreError: 32187:0:(tgt_mount.c:1755:server_put_super()) no obd lustre-MDT0000 [ 3059.856594] Lustre: server umount lustre-MDT0000 complete [ 3059.858781] LustreError: 32187:0:(super25.c:189:lustre_fill_super()) llite: Unable to mount : rc = -95 [ 3062.633016] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3063.802555] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3067.783070] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:4931 to 0x280000401:4961) [ 3067.783087] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:4899 to 0x2c0000401:4929) [ 3068.277023] Lustre: DEBUG MARKER: == replay-single test 62: don't mis-drop resent replay === 11:19:29 (1713367169) [ 3070.282813] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3071.062424] Lustre: Failing over lustre-MDT0000 [ 3071.124392] Lustre: server umount lustre-MDT0000 complete [ 3084.123928] LDISKFS-fs (dm-0): recovery complete [ 3084.125212] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3084.896482] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3085.949701] Lustre: *** cfs_fail_loc=707, val=0*** [ 3145.984337] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:54 [ 3146.072232] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:4943 to 0x2c0000401:4961) [ 3146.072240] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:4974 to 0x280000401:4993) [ 3146.564036] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3146.896470] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3151.955890] Lustre: DEBUG MARKER: == replay-single test 65a: AT: verify early replies ====== 11:20:52 (1713367252) [ 3174.812927] LustreError: 29651:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a sleeping for 11000ms [ 3185.815359] LustreError: 29651:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a awake [ 3198.148459] Lustre: DEBUG MARKER: == replay-single test 65b: AT: verify early replies on packed reply / bulk ========================================================== 11:21:38 (1713367298) [ 3220.685068] LustreError: 9205:0:(tgt_handler.c:2759:tgt_brw_write()) cfs_fail_timeout id 224 sleeping for 11000ms [ 3231.687320] LustreError: 9205:0:(tgt_handler.c:2759:tgt_brw_write()) cfs_fail_timeout id 224 awake [ 3235.412340] Lustre: DEBUG MARKER: == replay-single test 66a: AT: verify MDT service time adjusts with no early replies ========================================================== 11:22:16 (1713367336) [ 3257.779548] LustreError: 6930:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a sleeping for 5000ms [ 3262.784273] LustreError: 6930:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a awake [ 3273.680277] LustreError: 6930:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a awake [ 3286.513170] Lustre: DEBUG MARKER: == replay-single test 66b: AT: verify net latency adjusts ========================================================== 11:23:07 (1713367387) [ 3371.350231] Lustre: DEBUG MARKER: == replay-single test 67a: AT: verify slow request processing doesn't 
induce reconnects ========================================================== 11:24:32 (1713367472) [ 3393.647500] LustreError: 10476:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a sleeping for 400ms [ 3393.654394] LustreError: 10476:0:(service.c:2338:ptlrpc_server_handle_request()) Skipped 1 previous similar message [ 3394.060363] LustreError: 10476:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a awake [ 3410.061417] LustreError: 10476:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a sleeping for 400ms [ 3410.063774] LustreError: 10476:0:(service.c:2338:ptlrpc_server_handle_request()) Skipped 50 previous similar messages [ 3410.465298] LustreError: 10476:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a awake [ 3410.469735] LustreError: 10476:0:(service.c:2338:ptlrpc_server_handle_request()) Skipped 50 previous similar messages [ 3442.162917] LustreError: 10476:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a sleeping for 400ms [ 3442.167560] LustreError: 10476:0:(service.c:2338:ptlrpc_server_handle_request()) Skipped 116 previous similar messages [ 3442.571395] LustreError: 10476:0:(service.c:2338:ptlrpc_server_handle_request()) cfs_fail_timeout id 50a awake [ 3442.575107] LustreError: 10476:0:(service.c:2338:ptlrpc_server_handle_request()) Skipped 116 previous similar messages [ 3447.955738] Lustre: DEBUG MARKER: == replay-single test 67b: AT: verify instant slowdown doesn't induce reconnects ========================================================== 11:25:48 (1713367548) [ 3472.833523] Lustre: DEBUG MARKER: phase 2 [ 3478.545168] Lustre: DEBUG MARKER: == replay-single test 68: AT: verify slowing locks ======= 11:26:19 (1713367579) [ 3550.238152] Lustre: DEBUG MARKER: == replay-single test 70a: check multi client t-f ======== 11:27:30 (1713367650) [ 3550.785354] Lustre: DEBUG MARKER: SKIP: replay-single test_70a Need two or more clients, have 1 [ 3553.339968] Lustre: DEBUG MARKER: == replay-single test 70b: dbench 2mdts recovery; 1 clients ========================================================== 11:27:34 (1713367654) [ 3555.015113] Lustre: DEBUG MARKER: Started rundbench load pid=21611 ... [ 3558.774419] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3560.362684] Lustre: DEBUG MARKER: test_70b fail mds1 1 times [ 3561.081493] Lustre: Failing over lustre-MDT0000 [ 3561.164189] Lustre: server umount lustre-MDT0000 complete [ 3561.181395] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3561.185197] LustreError: Skipped 1 previous similar message [ 3561.187424] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3561.194031] Lustre: Skipped 38 previous similar messages [ 3561.197797] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3561.205131] LustreError: Skipped 128 previous similar messages [ 3575.220717] LDISKFS-fs (dm-0): recovery complete [ 3575.222683] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3575.258638] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3575.261202] LustreError: Skipped 4 previous similar messages [ 3575.317061] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3575.318669] Lustre: Skipped 11 previous similar messages [ 3575.324745] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3575.326345] Lustre: Skipped 11 previous similar messages [ 3576.371777] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3576.732880] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 3576.737124] Lustre: Skipped 7 previous similar messages [ 3580.335210] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to (at 0@lo) [ 3580.338732] Lustre: Skipped 41 previous similar messages [ 3580.348081] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. [ 3580.354495] Lustre: Skipped 7 previous similar messages [ 3580.384656] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:5047 to 0x280000401:5089) [ 3580.384676] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:5010 to 0x2c0000401:5025) [ 3581.156928] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3581.742472] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3586.995540] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 3588.587916] Lustre: DEBUG MARKER: test_70b fail mds2 2 times [ 3589.291057] Lustre: Failing over lustre-MDT0001 [ 3589.302151] Lustre: lustre-MDT0001: Not available for connect from 192.168.202.34@tcp (stopping) [ 3589.369932] Lustre: server umount lustre-MDT0001 complete [ 3603.427604] LDISKFS-fs (dm-1): recovery complete [ 3603.428777] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3603.531140] mount.lustre (18367) used greatest stack depth: 9896 bytes left [ 3604.575818] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3608.951746] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:570 to 0x280000400:609) [ 3608.952346] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:553 to 0x2c0000400:577) [ 3609.736253] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3610.293913] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3615.032932] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3616.439579] Lustre: DEBUG MARKER: test_70b fail mds1 3 times [ 3616.909459] Lustre: Failing over lustre-MDT0000 [ 3616.951822] Lustre: server umount lustre-MDT0000 complete [ 3630.816605] LDISKFS-fs (dm-0): recovery complete [ 3630.819390] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3632.071253] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3635.987095] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:5010 to 0x2c0000401:5057) [ 3635.987405] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:5047 to 0x280000401:5121) [ 3636.852912] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3637.466220] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3642.639019] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 3644.246513] Lustre: DEBUG MARKER: test_70b fail mds2 4 times [ 3644.946934] Lustre: Failing over lustre-MDT0001 [ 3644.954329] Lustre: lustre-MDT0001: Not available for connect from 192.168.202.34@tcp (stopping) [ 3645.035724] Lustre: server umount lustre-MDT0001 complete [ 3659.064217] LDISKFS-fs (dm-1): recovery complete [ 3659.065806] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3659.162362] mount.lustre (22482) used greatest stack depth: 9840 bytes left [ 3660.167910] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3664.492114] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:880 to 0x280000400:897) [ 3664.495518] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:849 to 0x2c0000400:865) [ 3665.325673] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3665.860011] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3671.078981] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3672.658094] Lustre: DEBUG MARKER: test_70b fail mds1 5 times [ 3673.299497] Lustre: Failing over lustre-MDT0000 [ 3673.369595] Lustre: server umount lustre-MDT0000 complete [ 3687.429561] LDISKFS-fs (dm-0): recovery complete [ 3687.432568] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3688.674535] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3692.598149] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:5010 to 0x2c0000401:5089) [ 3692.598210] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:5047 to 0x280000401:5153) [ 3693.436899] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3693.999276] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3699.153772] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 3700.735272] Lustre: DEBUG MARKER: test_70b fail mds2 6 times [ 3701.444704] Lustre: Failing over lustre-MDT0001 [ 3701.518726] Lustre: server umount lustre-MDT0001 complete [ 3715.710208] LDISKFS-fs (dm-1): recovery complete [ 3715.713426] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3716.873302] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3721.233630] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:1136 to 0x2c0000400:1153) [ 3721.233709] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:1169 to 0x280000400:1185) [ 3722.102906] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3722.713723] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3727.968527] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3729.534267] Lustre: DEBUG MARKER: test_70b fail mds1 7 times [ 3730.243152] Lustre: Failing over lustre-MDT0000 [ 3730.308740] Lustre: server umount lustre-MDT0000 complete [ 3744.726501] LDISKFS-fs (dm-0): recovery complete [ 3744.729427] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3746.021466] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3749.918235] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:5010 to 0x2c0000401:5121) [ 3749.918417] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:5047 to 0x280000401:5185) [ 3750.758432] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3751.304547] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3756.270305] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 3757.728082] Lustre: DEBUG MARKER: test_70b fail mds2 8 times [ 3758.265539] Lustre: Failing over lustre-MDT0001 [ 3758.270967] Lustre: lustre-MDT0001: Not available for connect from 192.168.202.34@tcp (stopping) [ 3758.321204] Lustre: server umount lustre-MDT0001 complete [ 3772.248075] LDISKFS-fs (dm-1): recovery complete [ 3772.250545] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3773.439336] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3777.917932] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:1457 to 0x280000400:1473) [ 3777.917947] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:1425 to 0x2c0000400:1441) [ 3778.806044] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3779.365722] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3784.335680] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3785.958227] Lustre: DEBUG MARKER: test_70b fail mds1 9 times [ 3786.631559] Lustre: Failing over lustre-MDT0000 [ 3786.713058] Lustre: server umount lustre-MDT0000 complete [ 3800.585899] LDISKFS-fs (dm-0): recovery complete [ 3800.588814] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3801.802883] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3805.757690] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:5047 to 0x280000401:5217) [ 3805.757700] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:5010 to 0x2c0000401:5153) [ 3806.634404] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3807.253025] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3812.461894] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 3814.047650] Lustre: DEBUG MARKER: test_70b fail mds2 10 times [ 3814.758883] Lustre: Failing over lustre-MDT0001 [ 3814.763107] Lustre: lustre-MDT0001: Not available for connect from 192.168.202.34@tcp (stopping) [ 3814.835401] Lustre: server umount lustre-MDT0001 complete [ 3828.977630] LDISKFS-fs (dm-1): recovery complete [ 3828.979540] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3830.145137] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3834.364620] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:1731 to 0x280000400:1761) [ 3834.364804] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:1700 to 0x2c0000400:1729) [ 3835.202777] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 3835.831489] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3841.050416] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3842.660010] Lustre: DEBUG MARKER: test_70b fail mds1 11 times [ 3843.402328] Lustre: Failing over lustre-MDT0000 [ 3843.489158] Lustre: server umount lustre-MDT0000 complete [ 3844.077600] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3844.080198] LustreError: Skipped 9 previous similar messages [ 3857.718468] LDISKFS-fs (dm-0): recovery complete [ 3857.721376] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3859.016075] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 3862.882086] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:5047 to 0x280000401:5249) [ 3862.882091] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:5010 to 0x2c0000401:5185) [ 3863.610827] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3864.126268] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3919.299739] Lustre: DEBUG MARKER: == replay-single test 70c: tar 2mdts recovery ============ 11:33:39 (1713368019) [ 4042.062254] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 4052.546827] Lustre: DEBUG MARKER: test_70c fail mds2 1 times [ 4053.168799] Lustre: Failing over lustre-MDT0001 [ 4053.256192] Lustre: server umount lustre-MDT0001 complete [ 4067.234584] LDISKFS-fs (dm-1): recovery complete [ 4067.237410] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 4068.494865] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4072.413645] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:2735 to 0x2c0000400:2753) [ 4072.413819] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:2767 to 0x280000400:2785) [ 4073.220030] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4073.761803] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4197.998528] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4208.605829] Lustre: DEBUG MARKER: test_70c fail mds1 2 times [ 4209.272011] Lustre: Failing over lustre-MDT0000 [ 4209.351362] Lustre: lustre-MDT0000: Not available for connect from 192.168.202.34@tcp (stopping) [ 4212.590068] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4212.596360] Lustre: Skipped 41 previous similar messages [ 4215.251354] Lustre: server umount lustre-MDT0000 complete [ 4217.629964] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4217.637205] LustreError: Skipped 163 previous similar messages [ 4229.688198] LDISKFS-fs (dm-0): recovery complete [ 4229.689453] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4229.720103] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4229.722487] LustreError: Skipped 5 previous similar messages [ 4229.775328] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4229.776824] Lustre: Skipped 11 previous similar messages [ 4229.782506] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 4229.786207] Lustre: Skipped 11 previous similar messages [ 4230.813820] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4233.805221] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 4233.811151] Lustre: Skipped 11 previous similar messages [ 4234.783367] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to 192.168.202.134@tcp (at 0@lo) [ 4234.788073] Lustre: Skipped 41 previous similar messages [ 4237.454469] Lustre: lustre-MDT0000: Recovery over after 0:04, of 2 clients 2 recovered and 0 were evicted. 
[ 4237.458750] Lustre: Skipped 11 previous similar messages [ 4237.485792] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:8133 to 0x280000401:8161) [ 4237.485800] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:8069 to 0x2c0000401:8097) [ 4238.330270] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4238.904221] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4270.033737] Lustre: DEBUG MARKER: == replay-single test 70d: mkdir/rmdir striped dir 2mdts recovery ========================================================== 11:39:30 (1713368370) [ 4392.487840] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 4403.035441] Lustre: DEBUG MARKER: test_70d fail mds2 1 times [ 4403.710323] Lustre: Failing over lustre-MDT0001 [ 4403.719314] Lustre: lustre-MDT0001: Not available for connect from 192.168.202.34@tcp (stopping) [ 4403.721851] Lustre: Skipped 5 previous similar messages [ 4403.805532] Lustre: server umount lustre-MDT0001 complete [ 4405.037842] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_statfs to node 0@lo failed: rc = -107 [ 4405.042851] LustreError: Skipped 2 previous similar messages [ 4417.343321] LDISKFS-fs (dm-1): recovery complete [ 4417.345488] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 4418.223313] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4423.236430] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:3609 to 0x2c0000400:3649) [ 4423.236434] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:3641 to 0x280000400:3681) [ 4423.978499] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4424.437757] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4548.653423] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 4559.273290] Lustre: DEBUG MARKER: test_70d fail mds2 2 times [ 4559.998036] Lustre: Failing over lustre-MDT0001 [ 4560.001448] Lustre: lustre-MDT0001: Not available for connect from 192.168.202.34@tcp (stopping) [ 4560.107105] Lustre: server umount lustre-MDT0001 complete [ 4574.514255] LDISKFS-fs (dm-1): recovery complete [ 4574.516580] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 4575.704371] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4580.689235] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:3986 to 0x280000400:4001) [ 4580.698418] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:3953 to 0x2c0000400:3969) [ 4581.488497] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 4582.081609] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4588.053699] Lustre: DEBUG MARKER: == replay-single test 70e: rename cross-MDT with random fails ========================================================== 11:44:48 (1713368688) [ 4710.226986] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4720.679212] Lustre: DEBUG MARKER: test_70e fail mds1 1 times [ 4721.378770] Lustre: Failing over lustre-MDT0000 [ 4721.405645] Lustre: lustre-MDT0000: Not available for connect from 192.168.202.34@tcp (stopping) [ 4721.471421] Lustre: server umount lustre-MDT0000 complete [ 4734.809293] LDISKFS-fs (dm-0): recovery complete [ 4734.810851] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4735.593327] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4742.512534] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:10471 to 0x2c0000401:10497) [ 4742.512544] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:10534 to 0x280000401:10561) [ 4743.082770] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4743.466871] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4866.801347] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4877.172018] Lustre: DEBUG MARKER: test_70e fail mds1 2 times [ 4877.659488] Lustre: Failing over lustre-MDT0000 [ 4877.722197] Lustre: server umount lustre-MDT0000 complete [ 4877.805638] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4877.809197] Lustre: Skipped 13 previous similar messages [ 4877.811265] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4877.815363] LustreError: Skipped 44 previous similar messages [ 4890.554844] LDISKFS-fs (dm-0): recovery complete [ 4890.557628] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4890.587844] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4890.591059] LustreError: Skipped 1 previous similar message [ 4890.642191] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4890.643887] Lustre: Skipped 3 previous similar messages [ 4890.650358] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 4890.652097] Lustre: Skipped 3 previous similar messages [ 4890.860636] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 4890.863302] Lustre: Skipped 3 previous similar messages [ 4891.355756] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4895.647145] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 4895.649407] Lustre: Skipped 13 previous similar messages [ 4898.248861] Lustre: lustre-MDT0000: Recovery over after 0:07, of 2 clients 2 recovered and 0 were evicted. [ 4898.250861] Lustre: Skipped 3 previous similar messages [ 4898.274465] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:12826 to 0x280000401:12865) [ 4898.274545] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:12763 to 0x2c0000401:12801) [ 4898.815230] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4899.182053] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4903.285398] Lustre: DEBUG MARKER: == replay-single test 70f: OSS O_DIRECT recovery with 1 clients ========================================================== 11:50:04 (1713369004) [ 4908.330849] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4909.709804] Lustre: DEBUG MARKER: test_70f failing OST 1 times [ 4910.181932] Lustre: Failing over lustre-OST0000 [ 4910.201618] Lustre: server umount lustre-OST0000 complete [ 4922.972123] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 4923.083222] LDISKFS-fs (dm-2): recovery complete [ 4923.084501] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 4924.212123] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4926.438020] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4926.806610] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4934.873434] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4936.226042] Lustre: DEBUG MARKER: test_70f failing OST 2 times [ 4936.681119] Lustre: Failing over lustre-OST0000 [ 4936.713114] Lustre: server umount lustre-OST0000 complete [ 4949.491594] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 4949.514185] LDISKFS-fs (dm-2): recovery complete [ 4949.515276] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 4950.649486] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4952.901047] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4953.267905] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4961.466247] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4962.854218] Lustre: DEBUG MARKER: test_70f failing OST 3 times [ 4963.367707] Lustre: Failing over lustre-OST0000 [ 4963.385682] Lustre: server umount lustre-OST0000 complete [ 4976.378479] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 4976.400332] LDISKFS-fs (dm-2): recovery complete [ 4976.401686] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 4977.685124] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 4980.065906] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4980.467434] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4988.742557] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 4990.136155] Lustre: DEBUG MARKER: test_70f failing OST 4 times [ 4990.605416] Lustre: Failing over lustre-OST0000 [ 4990.622669] Lustre: server umount lustre-OST0000 complete [ 5003.467304] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 5003.490719] LDISKFS-fs (dm-2): recovery complete [ 5003.492299] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5004.686810] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5006.909543] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5007.266666] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 5015.359545] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 5016.717856] Lustre: DEBUG MARKER: test_70f failing OST 5 times [ 5017.192489] Lustre: Failing over lustre-OST0000 [ 5017.209078] Lustre: server umount lustre-OST0000 complete [ 5030.083794] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 5030.107286] LDISKFS-fs (dm-2): recovery complete [ 5030.108515] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5031.328726] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5033.590101] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5033.943548] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 5040.166496] Lustre: DEBUG MARKER: == replay-single test 71a: mkdir/rmdir striped dir with 2 mdts recovery ========================================================== 11:52:21 (1713369141) [ 5162.314566] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 5164.324498] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5174.699332] Lustre: DEBUG MARKER: fail mds2 mds1 1 times [ 5175.185061] Lustre: Failing over lustre-MDT0001 [ 5175.292165] Lustre: server umount lustre-MDT0001 complete [ 5176.093688] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_statfs to node 0@lo failed: rc = -107 [ 5176.096529] LustreError: Skipped 2 previous similar messages [ 5176.219771] Lustre: Failing over lustre-MDT0000 [ 5176.366505] Lustre: server umount lustre-MDT0000 complete [ 5189.520938] LDISKFS-fs (dm-1): recovery complete [ 5189.525600] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5189.592038] LDISKFS-fs (dm-0): recovery complete [ 5189.593750] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5189.628348] Lustre: Evicted from MGS (at 192.168.202.134@tcp) after server handle changed from 0x0 to 0x718eae1b845bd82f [ 5190.530590] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5190.564984] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5195.722709] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:3977 to 0x2c0000400:4001) [ 5195.723060] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4009 to 0x280000400:4033) [ 5201.131595] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:12863 to 0x2c0000401:12897) [ 5201.132232] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:12928 to 0x280000401:12961) [ 5201.745065] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5202.155714] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5202.522625] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5325.786170] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 5327.759768] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5338.154397] Lustre: DEBUG MARKER: fail mds2 mds1 2 times [ 5338.667814] Lustre: Failing over lustre-MDT0001 [ 5338.776550] Lustre: server umount lustre-MDT0001 complete [ 5339.638386] Lustre: 3492:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713369287/real 1713369287] 
req@ffff88009f678e00 x1796592525975680/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713369303 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 5339.768669] Lustre: Failing over lustre-MDT0000 [ 5339.804057] LustreError: 5519:0:(ldlm_resource.c:1128:ldlm_resource_complain()) lustre-MDT0001-osp-MDT0000: namespace resource [0x240004e38:0x13eb:0x0].0x0 (ffff88009b600300) refcount nonzero (2) after lock cleanup; forcing cleanup. [ 5339.820366] Lustre: lustre-MDT0000: Not available for connect from 192.168.202.34@tcp (stopping) [ 5339.822648] Lustre: Skipped 4 previous similar messages [ 5339.937490] Lustre: server umount lustre-MDT0000 complete [ 5353.191363] LDISKFS-fs (dm-1): recovery complete [ 5353.195524] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5353.245630] LDISKFS-fs (dm-0): recovery complete [ 5353.247154] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5354.147050] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5354.201315] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5356.401399] Lustre: 3493:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713369442/real 1713369442] req@ffff88009c141180 x1796592532002176/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713369458 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 5356.410522] Lustre: 3493:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 5358.550211] LustreError: 9210:0:(ldlm_lockd.c:968:ldlm_server_blocking_ast()) ### BUG 6063: lock collide during recovery ns: mdt-lustre-MDT0001_UUID lock: ffff880099e49f80/0x718eae1b846bc983 lrc: 3/0,0 mode: PW/PW res: [0x240004e38:0x1200:0x0].0x0 bits 0x2/0x0 rrc: 3 type: IBT gid 0 flags: 0x40000000000020 nid: 0@lo remote: 0x718eae1b846bc97c expref: 55 pid: 9210 timeout: 0 lvb_type: 0 [ 5359.364614] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4009 to 0x280000400:4065) [ 5359.364773] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:3977 to 0x2c0000400:4033) [ 5364.350632] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:12928 to 0x280000401:12993) [ 5364.350635] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:12863 to 0x2c0000401:12929) [ 5364.949141] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5365.349842] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5365.725057] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5367.933258] Lustre: 3495:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713369452/real 1713369452] req@ffff88009b943480 x1796592532002816/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713369468 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 
job:'kworker.0' uid:0 gid:0 [ 5367.940781] Lustre: 3495:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 2 previous similar messages [ 5370.202789] Lustre: DEBUG MARKER: == replay-single test 73a: open(O_CREAT), unlink, replay, reconnect before open replay, close ========================================================== 11:57:50 (1713369470) [ 5372.333022] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5373.119138] Lustre: Failing over lustre-MDT0000 [ 5373.185153] Lustre: server umount lustre-MDT0000 complete [ 5386.276095] LDISKFS-fs (dm-0): recovery complete [ 5386.278234] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5386.637562] Lustre: *** cfs_fail_loc=302, val=2147483648*** [ 5387.186720] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5402.655844] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:53 [ 5402.664309] Lustre: 6982:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009a5f1500 x1796592548228288/t356482286204(356482286204) o101->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:450/0 lens 520/3488 e 0 to 0 dl 1713369515 ref 1 fl Interpret:/206/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 5402.694024] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:12928 to 0x280000401:13025) [ 5402.694026] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:12931 to 0x2c0000401:12961) [ 5403.292301] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5403.695116] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5408.144781] Lustre: DEBUG MARKER: == replay-single test 73b: open(O_CREAT), unlink, replay, reconnect at open_replay reply, close ========================================================== 11:58:28 (1713369508) [ 5410.233930] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5411.003045] Lustre: Failing over lustre-MDT0000 [ 5411.059854] Lustre: server umount lustre-MDT0000 complete [ 5424.226305] LDISKFS-fs (dm-0): recovery complete [ 5424.227672] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5425.125997] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5426.704731] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 5426.706458] LustreError: 6982:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880085236d80 x1796592548228288/t356482286204(356482286204) o101->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:474/0 lens 520/664 e 0 to 0 dl 1713369539 ref 1 fl Interpret:/204/0 rc 301/0 job:'lfs.0' uid:0 gid:0 [ 5442.720509] Lustre: lustre-MDT0000: Client fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4 (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:53 [ 5442.725833] Lustre: 6956:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88011dd0ea00 x1796592548228288/t356482286204(356482286204) o101->fbdb2e80-87e3-4a1f-978d-1d5e4d53aee4@192.168.202.34@tcp:490/0 lens 520/3488 e 0 to 0 dl 1713369555 ref 1 fl Interpret:/206/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 5442.759151] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13057) [ 5442.759154] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:12931 to 0x2c0000401:12993) [ 5443.339958] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5443.751573] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5448.217169] Lustre: DEBUG MARKER: == replay-single test 74: Ensure applications don't fail waiting for OST recovery ========================================================== 11:59:09 (1713369549) [ 5449.214504] Lustre: Failing over lustre-OST0000 [ 5449.235024] Lustre: server umount lustre-OST0000 complete [ 5450.226040] Lustre: Failing over lustre-MDT0000 [ 5450.282212] Lustre: server umount lustre-MDT0000 complete [ 5462.335873] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5463.169311] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5463.815984] Lustre: lustre-MDT0000: Denying connection for new client f6c17b7d-048f-45d1-91f2-a57c0d13bd9a (at 192.168.202.34@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 5467.437834] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:12931 to 0x2c0000401:13025) [ 5470.338323] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 5470.341959] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5471.616496] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5472.177965] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13089) [ 5476.475304] Lustre: DEBUG MARKER: == replay-single test 80a: DNE: create remote dir, drop update rep from MDT0, fail MDT0 ========================================================== 11:59:37 (1713369577) [ 5476.738442] Lustre: *** cfs_fail_loc=1701, val=2147483648*** [ 5476.740438] LustreError: 5743:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009c140a80 x1796592532717120/t377957122055(0) o1000->lustre-MDT0001-mdtlov_UUID@0@lo:524/0 lens 1056/4320 e 0 to 0 dl 1713369589 ref 1 fl Interpret:/200/0 rc 0/0 job:'osp_up0-1.0' uid:0 gid:0 [ 5478.681815] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5479.177711] Lustre: Failing over lustre-MDT0000 [ 5479.232431] Lustre: server umount lustre-MDT0000 complete [ 5480.397657] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5480.397816] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5480.397818] LustreError: Skipped 114 previous similar messages [ 5480.405490] Lustre: Skipped 38 previous similar messages [ 5492.259234] LDISKFS-fs (dm-0): recovery complete [ 5492.260675] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5492.288224] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5492.292332] LustreError: Skipped 5 previous similar messages [ 5492.353552] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 5492.355508] Lustre: Skipped 13 previous similar messages [ 5492.362923] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 5492.364969] Lustre: Skipped 13 previous similar messages [ 5492.736314] Lustre: 6980:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713369578/real 1713369578] req@ffff88009c140700 x1796592532717120/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 1056/4320 e 0 to 1 dl 1713369594 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 5492.743851] Lustre: 6980:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 5 previous similar messages [ 5493.110755] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5493.868753] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 5493.871323] Lustre: Skipped 13 previous similar messages [ 5497.358918] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to (at 0@lo) [ 5497.361583] Lustre: Skipped 39 previous similar messages [ 5497.380847] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13121) [ 5497.380850] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13027 to 0x2c0000401:13057) [ 
5497.967576] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5498.347875] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5502.898327] Lustre: DEBUG MARKER: == replay-single test 80b: DNE: create remote dir, drop update rep from MDT0, fail MDT1 ========================================================== 12:00:03 (1713369603) [ 5505.173694] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5507.213656] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 5507.722843] Lustre: Failing over lustre-MDT0001 [ 5507.726023] Lustre: lustre-MDT0001: Not available for connect from 192.168.202.34@tcp (stopping) [ 5513.205481] Lustre: server umount lustre-MDT0001 complete [ 5526.251638] LDISKFS-fs (dm-1): recovery complete [ 5526.253427] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5527.106145] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5531.345751] Lustre: lustre-MDT0001: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. [ 5531.349224] Lustre: Skipped 14 previous similar messages [ 5531.349534] Lustre: 6954:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88007e20a680 x1796592556521344/t47244642812(0) o36->f6c17b7d-048f-45d1-91f2-a57c0d13bd9a@192.168.202.34@tcp:599/0 lens 560/2880 e 0 to 0 dl 1713369664 ref 1 fl Interpret:/202/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 5531.369630] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4044 to 0x2c0000400:4065) [ 5531.369632] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4076 to 0x280000400:4097) [ 5531.930569] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5532.300101] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5536.773220] Lustre: DEBUG MARKER: == replay-single test 80c: DNE: create remote dir, drop update rep from MDT1, fail MDT[0,1] ========================================================== 12:00:37 (1713369637) [ 5537.042047] LustreError: 9209:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880080853800 x1796592532746816/t382252089376(0) o1000->lustre-MDT0001-mdtlov_UUID@0@lo:584/0 lens 2264/4320 e 0 to 0 dl 1713369649 ref 1 fl Interpret:/200/0 rc 0/0 job:'osp_up0-1.0' uid:0 gid:0 [ 5537.049434] LustreError: 9209:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 5538.989200] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5540.978673] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 5541.470713] Lustre: Failing over lustre-MDT0000 [ 5541.523632] Lustre: server umount lustre-MDT0000 complete [ 5553.041233] Lustre: 21778:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713369638/real 1713369638] req@ffff880080851880 x1796592532746816/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 2264/4320 e 0 to 1 dl 1713369654 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 5554.446557] LDISKFS-fs (dm-0): recovery complete [ 5554.447888] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5555.261403] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5559.569821] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13153) [ 5559.569876] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13027 to 0x2c0000401:13089) [ 5560.117004] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5560.506373] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5562.227075] Lustre: Failing over lustre-MDT0001 [ 5562.309495] Lustre: server umount lustre-MDT0001 complete [ 5575.301743] LDISKFS-fs (dm-1): recovery complete [ 5575.302996] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5576.138997] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5580.417004] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4108 to 0x280000400:4129) [ 5580.417006] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4076 to 0x2c0000400:4097) [ 5580.968254] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5581.334286] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5585.796733] Lustre: DEBUG MARKER: == replay-single test 80d: DNE: create remote dir, drop update rep from MDT1, fail 2 MDTs ========================================================== 12:01:26 (1713369686) [ 5586.062983] Lustre: *** cfs_fail_loc=1701, val=2147483648*** [ 5586.064996] Lustre: Skipped 2 previous similar messages [ 5591.012324] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5592.990489] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 5593.530030] Lustre: Failing over lustre-MDT0000 [ 5593.585026] Lustre: server umount lustre-MDT0000 complete [ 5594.573989] LustreError: 10469:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713369696 with bad export cookie 8182669006187848963 [ 5594.575775] Lustre: Failing over lustre-MDT0001 [ 5594.580686] LustreError: 10469:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 5600.141858] LustreError: 29151:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8800a0d85f80 x1796592532775808/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 304/4320 e 0 to 0 dl 0 ref 2 fl Rpc:QU/200/ffffffff rc 0/-1 job:'umount.0' uid:0 gid:0 [ 5600.146280] LustreError: 29151:0:(osp_object.c:637:osp_attr_get()) lustre-MDT0000-osp-MDT0001: osp_attr_get update error [0x200000401:0x1:0x0]: rc = -5 [ 5600.250235] Lustre: server umount lustre-MDT0001 complete [ 5613.268347] LDISKFS-fs (dm-1): recovery complete [ 5613.270715] LDISKFS-fs (dm-0): recovery complete [ 5613.270925] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5613.279460] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5619.429580] LustreError: 3491:0:(client.c:1291:ptlrpc_import_delay_req()) @@@ invalidate in flight req@ffff88009dff4380 x1796592532776448/t0(0) o250->MGC192.168.202.134@tcp@0@lo:26/25 lens 520/544 e 0 to 0 dl 0 ref 1 fl Rpc:NQU/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 5620.297905] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5620.358963] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5625.626267] Lustre: 31417:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009c140700 x1796592556553792/t55834574916(0) o36->f6c17b7d-048f-45d1-91f2-a57c0d13bd9a@192.168.202.34@tcp:694/0 lens 560/2880 e 0 to 0 dl 1713369759 ref 1 fl Interpret:/202/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 5625.634612] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4108 to 0x2c0000400:4129) [ 5625.634632] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4140 to 0x280000400:4161) [ 5638.095834] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13185) [ 5638.095836] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13027 to 0x2c0000401:13121) [ 5638.641693] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5638.999343] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5639.336945] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5643.804220] Lustre: DEBUG MARKER: == replay-single test 80e: DNE: create remote dir, drop MDT1 rep, fail MDT0 ========================================================== 12:02:24 (1713369744) [ 5644.114003] LustreError: 31417:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88007e209c00 x1796592556570752/t60129542212(0) o36->f6c17b7d-048f-45d1-91f2-a57c0d13bd9a@192.168.202.34@tcp:691/0 lens 560/448 e 0 to 0 dl 1713369756 ref 1 fl Interpret:/200/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 5644.122034] LustreError: 31417:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 5649.153904] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5649.828820] Lustre: Failing over lustre-MDT0000 [ 5649.911667] Lustre: server umount lustre-MDT0000 complete [ 5660.110779] Lustre: lustre-MDT0001: Client f6c17b7d-048f-45d1-91f2-a57c0d13bd9a (at 192.168.202.34@tcp) reconnecting [ 5660.116651] Lustre: Skipped 2 previous similar messages [ 5660.121551] Lustre: 30589:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880099e79f80 x1796592556570752/t60129542212(0) o36->f6c17b7d-048f-45d1-91f2-a57c0d13bd9a@192.168.202.34@tcp:707/0 lens 560/2880 e 0 to 0 dl 1713369772 ref 1 fl Interpret:/202/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 5663.263967] LDISKFS-fs (dm-0): recovery complete [ 5663.266492] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5664.219318] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5668.420595] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13027 to 0x2c0000401:13153) [ 5668.420626] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13217) [ 5669.024447] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5669.392129] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5674.466688] Lustre: DEBUG MARKER: == replay-single test 80f: DNE: create remote dir, drop MDT1 rep, fail MDT1 ========================================================== 12:02:55 (1713369775) [ 5677.550743] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 5678.198672] Lustre: Failing over lustre-MDT0001 [ 5678.274812] Lustre: server umount lustre-MDT0001 complete [ 5691.437504] LDISKFS-fs (dm-1): recovery complete [ 5691.438667] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5692.235879] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5696.543986] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4182 to 0x280000400:4225) [ 5696.543993] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4150 to 0x2c0000400:4193) [ 5697.118817] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5697.507581] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5702.763921] Lustre: DEBUG MARKER: == replay-single test 80g: DNE: create remote dir, drop MDT1 rep, fail MDT0, then MDT1 ========================================================== 12:03:23 (1713369803) [ 5707.974091] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5709.962267] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 5710.490852] Lustre: Failing over lustre-MDT0000 [ 5710.548997] Lustre: server umount lustre-MDT0000 complete [ 5723.664622] LDISKFS-fs (dm-0): recovery complete [ 5723.665739] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5724.554391] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5728.770626] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13027 to 0x2c0000401:13185) [ 5728.770631] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13249) [ 5729.349485] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5729.725563] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5731.444840] Lustre: Failing over lustre-MDT0001 [ 5731.501655] Lustre: server umount lustre-MDT0001 complete [ 5744.592269] LDISKFS-fs (dm-1): recovery complete [ 5744.594833] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5745.588913] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5749.734954] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4204 to 0x2c0000400:4225) [ 5749.734967] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4236 to 0x280000400:4257) [ 5750.310078] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5750.723036] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5756.063032] Lustre: DEBUG MARKER: == replay-single test 80h: DNE: create remote dir, drop MDT1 rep, fail 2 MDTs ========================================================== 12:04:16 (1713369856) [ 5756.428588] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 5756.431189] Lustre: Skipped 3 previous similar messages [ 5762.102381] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5764.714650] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 5765.466316] Lustre: Failing over lustre-MDT0000 [ 5765.547379] Lustre: server umount lustre-MDT0000 complete [ 5766.894026] LustreError: 6918:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713369868 with bad export cookie 8182669006187862599 [ 5766.896087] Lustre: Failing over lustre-MDT0001 [ 5766.904447] LustreError: 6918:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 5767.008199] Lustre: server umount lustre-MDT0001 complete [ 5780.538271] LDISKFS-fs (dm-1): recovery complete [ 5780.540079] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5780.544013] LDISKFS-fs (dm-0): recovery complete [ 5780.545530] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5785.901356] Lustre: 3495:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713369871/real 1713369871] req@ffff8800a09c4700 x1796592532853632/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713369887 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 5785.917582] Lustre: 3495:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 5791.909771] LustreError: 3491:0:(client.c:1291:ptlrpc_import_delay_req()) @@@ invalidate in flight req@ffff8800a5b0aa00 x1796592532855360/t0(0) o250->MGC192.168.202.134@tcp@0@lo:26/25 lens 520/544 e 0 to 0 dl 0 ref 1 fl Rpc:NQU/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 5792.058800] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_connect to node 0@lo failed: rc = -114 [ 5792.062920] LustreError: Skipped 12 previous similar messages [ 5792.770717] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5792.805005] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5798.011934] Lustre: 12941:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009c140700 x1796592556618880/t68719476804(0) o36->f6c17b7d-048f-45d1-91f2-a57c0d13bd9a@192.168.202.34@tcp:90/0 lens 560/2880 e 0 to 0 dl 1713369910 ref 1 fl Interpret:/202/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 5798.021530] Lustre: 12941:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 2 previous similar messages [ 5798.025284] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4236 to 0x2c0000400:4257) [ 5798.025429] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4268 to 0x280000400:4289) [ 5798.038015] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13281) [ 5798.038023] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13027 to 0x2c0000401:13217) [ 5798.822726] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5799.279747] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5799.811589] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5805.175843] Lustre: DEBUG MARKER: == replay-single test 81a: DNE: unlink remote dir, drop MDT0 update rep, fail MDT1 ========================================================== 12:05:05 (1713369905) [ 5805.632495] LustreError: 9209:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012f97d880 x1796592532868800/t403726925838(0) o1000->lustre-MDT0001-mdtlov_UUID@0@lo:98/0 lens 1488/4320 e 0 to 0 dl 1713369918 ref 1 fl Interpret:/200/0 rc 0/0 job:'osp_up0-1.0' uid:0 gid:0 [ 5805.646488] LustreError: 9209:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 3 previous similar messages [ 5808.066553] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 5808.628286] Lustre: Failing over lustre-MDT0001 [ 5808.634295] Lustre: lustre-MDT0001: Not available for connect from 192.168.202.34@tcp (stopping) [ 
5808.638883] Lustre: Skipped 8 previous similar messages [ 5814.228992] Lustre: server umount lustre-MDT0001 complete [ 5827.971918] LDISKFS-fs (dm-1): recovery complete [ 5827.974832] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5829.076011] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5833.100737] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4268 to 0x2c0000400:4289) [ 5833.100743] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4300 to 0x280000400:4321) [ 5833.648046] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5834.116764] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5840.224510] Lustre: DEBUG MARKER: == replay-single test 81b: DNE: unlink remote dir, drop MDT0 update reply, fail MDT0 ========================================================== 12:05:40 (1713369940) [ 5843.371411] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5844.050845] Lustre: Failing over lustre-MDT0000 [ 5844.123876] Lustre: server umount lustre-MDT0000 complete [ 5856.640315] Lustre: 16158:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713369942/real 1713369942] req@ffff88012ff3d500 x1796592532885440/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 1488/4320 e 0 to 1 dl 1713369958 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 5856.652908] Lustre: 16158:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [ 5858.271251] LDISKFS-fs (dm-0): recovery complete [ 5858.272508] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5859.525568] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5863.411072] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13313) [ 5863.411076] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13027 to 0x2c0000401:13249) [ 5864.219997] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5864.835035] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5870.950060] Lustre: DEBUG MARKER: == replay-single test 81c: DNE: unlink remote dir, drop MDT0 update reply, fail MDT0,MDT1 ========================================================== 12:06:11 (1713369971) [ 5873.333445] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5875.346266] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 5875.878834] Lustre: Failing over lustre-MDT0000 [ 5875.937292] Lustre: server umount lustre-MDT0000 complete [ 5889.381155] LDISKFS-fs (dm-0): recovery complete [ 5889.383850] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5890.648007] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5894.538831] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13027 to 0x2c0000401:13281) [ 5894.538867] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13345) [ 5895.131970] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5895.517079] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5897.293757] Lustre: Failing over lustre-MDT0001 [ 5897.345197] Lustre: server umount lustre-MDT0001 complete [ 5910.624164] LDISKFS-fs (dm-1): recovery complete [ 5910.625347] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5911.496631] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5915.742915] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4300 to 0x280000400:4353) [ 5915.742918] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4268 to 0x2c0000400:4321) [ 5916.542374] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5917.072438] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5922.073223] Lustre: DEBUG MARKER: == replay-single test 81d: DNE: unlink remote dir, drop MDT0 update reply, fail 2 MDTs ========================================================== 12:07:02 (1713370022) [ 5924.968682] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5927.802685] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 5928.585860] Lustre: Failing over lustre-MDT0000 [ 5928.667580] Lustre: server umount lustre-MDT0000 complete [ 5930.062060] LustreError: 8072:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713370031 with bad export cookie 8182669006187872945 [ 5930.064337] Lustre: Failing over lustre-MDT0001 [ 5930.071942] LustreError: 8072:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 5935.150554] LustreError: 26270:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff88012b66dc00 x1796592532924288/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 304/4320 e 0 to 0 dl 0 ref 2 fl Rpc:QU/200/ffffffff rc 0/-1 job:'umount.0' uid:0 gid:0 [ 5935.162156] LustreError: 26270:0:(osp_object.c:637:osp_attr_get()) lustre-MDT0000-osp-MDT0001: osp_attr_get update error [0x200000401:0x1:0x0]: rc = -5 [ 5935.301896] Lustre: server umount lustre-MDT0001 complete [ 5949.562001] LDISKFS-fs (dm-1): recovery complete [ 5949.564966] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 5949.583616] LDISKFS-fs (dm-0): recovery complete [ 5949.590365] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5954.742859] LustreError: 3491:0:(client.c:1291:ptlrpc_import_delay_req()) @@@ invalidate in flight req@ffff880070ff1180 x1796592532924800/t0(0) o250->MGC192.168.202.134@tcp@0@lo:26/25 lens 520/544 e 0 to 0 dl 0 ref 1 fl Rpc:NQU/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 5955.959569] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5956.024640] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5959.998818] Lustre: 27697:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009a560380 x1796592556657792/t81604378626(0) o36->f6c17b7d-048f-45d1-91f2-a57c0d13bd9a@192.168.202.34@tcp:277/0 lens 496/2888 e 0 to 0 dl 1713370097 ref 1 fl Interpret:/202/0 rc 0/0 job:'rmdir.0' uid:0 gid:0 [ 5960.000589] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13027 to 0x2c0000401:13313) [ 5960.001876] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13377) [ 5960.014987] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4268 to 0x2c0000400:4353) [ 5960.015013] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4300 to 0x280000400:4385) [ 5960.036264] Lustre: 27697:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 5960.840695] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 5961.421231] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5961.971698] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5968.103533] Lustre: DEBUG MARKER: == replay-single test 81e: DNE: unlink remote dir, drop MDT1 req reply, fail MDT0 ========================================================== 12:07:48 (1713370068) [ 5971.428829] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5972.087215] Lustre: Failing over lustre-MDT0000 [ 5972.176327] Lustre: server umount lustre-MDT0000 complete [ 5984.533021] Lustre: lustre-MDT0001: Client f6c17b7d-048f-45d1-91f2-a57c0d13bd9a (at 192.168.202.34@tcp) reconnecting [ 5984.537824] Lustre: Skipped 1 previous similar message [ 5986.160097] LDISKFS-fs (dm-0): recovery complete [ 5986.162965] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5987.221575] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5991.305688] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13409) [ 5991.305717] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13027 to 0x2c0000401:13345) [ 5991.998246] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5992.614441] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5997.969063] Lustre: DEBUG MARKER: == replay-single test 81f: DNE: unlink remote dir, drop MDT1 req reply, fail MDT1 ========================================================== 12:08:18 (1713370098) [ 6000.899775] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 6001.500717] Lustre: Failing over lustre-MDT0001 [ 6001.559973] Lustre: server umount lustre-MDT0001 complete [ 6015.599260] LDISKFS-fs (dm-1): recovery complete [ 6015.601634] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6016.786407] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6020.733961] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4300 to 0x280000400:4417) [ 6020.733964] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4268 to 0x2c0000400:4385) [ 6021.491113] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 6022.097120] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6027.990454] Lustre: DEBUG MARKER: == replay-single test 81g: DNE: unlink remote dir, drop req reply, fail M0, then M1 ========================================================== 12:08:48 (1713370128) [ 6028.436782] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 6028.439812] Lustre: Skipped 6 previous similar messages [ 6031.343033] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 6034.326377] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 6035.072945] Lustre: Failing over lustre-MDT0000 [ 6035.134519] Lustre: server umount lustre-MDT0000 complete [ 6049.585025] LDISKFS-fs (dm-0): recovery complete [ 6049.587683] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6050.902090] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6054.752283] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13027 to 0x2c0000401:13377) [ 6054.752476] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13441) [ 6055.582242] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6056.183749] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6058.197986] Lustre: Failing over lustre-MDT0001 [ 6058.289071] Lustre: server umount lustre-MDT0001 complete [ 6072.634612] LDISKFS-fs (dm-1): recovery complete [ 6072.638042] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6073.861436] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6077.773891] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4300 to 0x280000400:4449) [ 6077.773895] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4268 to 0x2c0000400:4417) [ 6078.552331] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 6079.107745] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6085.179591] Lustre: DEBUG MARKER: == replay-single test 81h: DNE: unlink remote dir, drop request reply, fail 2 MDTs ========================================================== 12:09:45 (1713370185) [ 6085.576293] LustreError: 27697:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880099d7ce00 x1796592556686656/t94489280514(0) o36->f6c17b7d-048f-45d1-91f2-a57c0d13bd9a@192.168.202.34@tcp:378/0 lens 496/456 e 0 to 0 dl 1713370198 ref 1 fl Interpret:/200/0 rc 0/0 job:'rmdir.0' uid:0 gid:0 [ 6085.584361] LustreError: 27697:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 6 previous similar messages [ 6088.311940] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 6091.241890] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 6092.033266] Lustre: Failing over lustre-MDT0000 [ 6092.113713] Lustre: server umount lustre-MDT0000 complete [ 6092.751010] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6092.759703] Lustre: Skipped 65 previous similar messages [ 6092.762674] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 6092.770856] LustreError: Skipped 240 previous similar messages [ 6093.463384] LustreError: 6917:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713370195 with bad export cookie 8182669006187880442 [ 6093.465971] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 6093.465973] LustreError: Skipped 10 previous similar messages [ 6093.466239] Lustre: Failing over lustre-MDT0001 [ 6093.481207] LustreError: 6917:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 6093.613521] Lustre: server umount lustre-MDT0001 complete [ 6107.965935] LDISKFS-fs (dm-1): recovery complete [ 6107.968527] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6107.974137] LDISKFS-fs (dm-0): recovery complete [ 6107.978494] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6113.781283] Lustre: 3495:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713370199/real 1713370199] req@ffff8800a0dc3480 x1796592532991552/t0(0) o400->lustre-MDT0001-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713370215 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 6113.781291] Lustre: 3494:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713370199/real 1713370199] req@ffff8800a0dc2d80 x1796592532991424/t0(0) o400->lustre-MDT0001-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713370215 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 6113.781298] Lustre: 3494:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 6118.790229] Lustre: Evicted from MGS (at 192.168.202.134@tcp) after server handle changed from 0x0 to 0x718eae1b846d69a8 [ 6118.795831] Lustre: MGC192.168.202.134@tcp: Connection restored to (at 0@lo) [ 6118.799416] Lustre: Skipped 73 previous similar messages [ 6118.886553] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 6118.892013] Lustre: Skipped 21 previous similar messages [ 6118.903392] Lustre: lustre-MDT0001: in recovery but waiting for the first client to connect [ 6118.909008] Lustre: Skipped 21 previous similar messages [ 6119.596546] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 6119.599676] Lustre: Skipped 22 previous similar messages [ 6119.680528] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6119.718413] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6124.933005] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4300 to 0x280000400:4481) [ 6124.933291] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4268 to 0x2c0000400:4449) [ 6124.958112] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13027 to 0x2c0000401:13409) [ 6124.958120] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13027 to 0x280000401:13473) [ 6125.859993] Lustre: DEBUG MARKER: 
oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 6126.434052] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6126.961803] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6132.336091] Lustre: DEBUG MARKER: == replay-single test 84a: stale open during export disconnect ========================================================== 12:10:32 (1713370232) [ 6132.924947] Lustre: 11917:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting f6c17b7d-048f-45d1-91f2-a57c0d13bd9a at administrative request [ 6139.525921] Lustre: DEBUG MARKER: == replay-single test 85a: check the cancellation of unused locks during recovery(IBITS) ========================================================== 12:10:40 (1713370240) [ 6140.964613] Lustre: Failing over lustre-MDT0000 [ 6141.019253] Lustre: server umount lustre-MDT0000 complete [ 6153.483511] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6154.644005] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6158.641587] Lustre: lustre-MDT0000: Recovery over after 0:04, of 2 clients 2 recovered and 0 were evicted. [ 6158.645217] Lustre: Skipped 22 previous similar messages [ 6158.664267] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13460 to 0x2c0000401:13505) [ 6158.664269] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13525 to 0x280000401:13569) [ 6159.288124] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6159.692311] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6164.504041] Lustre: DEBUG MARKER: == replay-single test 85b: check the cancellation of unused locks during recovery(EXTENT) ========================================================== 12:11:05 (1713370265) [ 6168.505346] Lustre: Failing over lustre-OST0000 [ 6168.546934] Lustre: server umount lustre-OST0000 complete [ 6181.277850] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 6181.284986] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6183.032345] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6185.972742] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 6186.543315] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 6192.629601] Lustre: DEBUG MARKER: == replay-single test 86: umount server after clear nid_stats should not hit LBUG ========================================================== 12:11:33 (1713370293) [ 6193.863485] Lustre: Failing over lustre-MDT0000 [ 6193.944829] Lustre: server umount lustre-MDT0000 complete [ 6196.213093] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6197.230173] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6197.896652] Lustre: lustre-MDT0000: Denying connection for new client d3c330a4-4b63-4df5-a38e-6cb233ba8681 (at 192.168.202.34@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 6201.330485] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13525 to 0x280000401:13601) [ 6201.330491] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13460 to 0x2c0000401:13537) [ 6207.177401] Lustre: DEBUG MARKER: == replay-single test 87a: write replay ================== 12:11:47 (1713370307) [ 6210.069803] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 6210.884882] Lustre: Failing over lustre-OST0000 [ 6210.915258] Lustre: server umount lustre-OST0000 complete [ 6224.298515] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 6224.306015] LDISKFS-fs (dm-2): recovery complete [ 6224.307067] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6225.484691] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6227.735725] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 6228.100313] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 6232.390948] Lustre: DEBUG MARKER: == replay-single test 87b: write replay with changed data (checksum resend) ========================================================== 12:12:13 (1713370333) [ 6234.491560] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 6236.245929] Lustre: Failing over lustre-OST0000 [ 6236.265444] Lustre: server umount lustre-OST0000 complete [ 6249.373007] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 6249.377730] LDISKFS-fs (dm-2): recovery complete [ 6249.378773] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6250.838621] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6251.061917] LustreError: 168-f: lustre-OST0000: BAD WRITE CHECKSUM: from 12345-192.168.202.34@tcp inode [0x2000320e1:0x5:0x0] object 0x280000401:13603 extent [0-4194303]: client csum 36e43ba5, server csum 1a6a5a07 [ 6253.185633] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 6253.620399] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 6258.258849] Lustre: DEBUG MARKER: == replay-single test 88: MDS should not assign same objid to different files ========================================================== 12:12:39 (1713370359) [ 6260.145604] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 6262.105656] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 6263.806154] Lustre: Failing over lustre-MDT0000 [ 6263.859819] Lustre: server umount lustre-MDT0000 complete [ 6264.802722] LustreError: 10469:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713370366 with bad export cookie 8182669006187906076 [ 6264.804122] Lustre: Failing over lustre-OST0000 [ 6264.806273] LustreError: 10469:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 6264.816729] Lustre: server umount lustre-OST0000 complete [ 6277.955384] LDISKFS-fs (dm-0): recovery complete [ 6277.956485] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6289.477451] LustreError: 3491:0:(client.c:1291:ptlrpc_import_delay_req()) @@@ invalidate in flight req@ffff88009ea62a00 x1796592533063680/t0(0) o250->MGC192.168.202.134@tcp@0@lo:26/25 lens 520/544 e 0 to 0 dl 0 ref 1 fl Rpc:NQU/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 6290.358061] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6294.645541] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13460 to 0x2c0000401:13569) [ 6304.511089] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 6304.515488] LDISKFS-fs (dm-2): recovery complete [ 6304.516574] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6305.730789] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6314.304102] Lustre: DEBUG MARKER: == replay-single test 89: no disk space leak on late ost connection ========================================================== 12:13:34 (1713370414) [ 6319.111353] Lustre: Failing over lustre-OST0000 [ 6319.155178] Lustre: server umount lustre-OST0000 complete [ 6320.344834] Lustre: Failing over lustre-MDT0000 [ 6320.424893] Lustre: server umount lustre-MDT0000 complete [ 6333.407096] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6334.590460] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6338.544263] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13460 to 0x2c0000401:13601) [ 6340.789305] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 6340.795267] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6342.720825] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6343.573748] Lustre: lustre-OST0000: Denying connection for new client 4aa3185e-4241-4670-9bed-d2097befa086 (at 192.168.202.34@tcp), waiting for 3 known clients (2 recovered, 0 in progress, and 0 evicted) to recover in 1:07 [ 6348.589529] Lustre: lustre-OST0000: Denying connection for new client 4aa3185e-4241-4670-9bed-d2097befa086 (at 192.168.202.34@tcp), waiting for 3 known clients (2 recovered, 0 in progress, and 0 evicted) to recover in 1:02 [ 6358.605414] Lustre: lustre-OST0000: Denying connection for new client 4aa3185e-4241-4670-9bed-d2097befa086 (at 192.168.202.34@tcp), waiting for 3 known clients (2 recovered, 0 in progress, and 0 evicted) to recover in 0:52 [ 6358.615122] Lustre: Skipped 1 previous similar message [ 6378.637242] Lustre: lustre-OST0000: Denying connection for new client 4aa3185e-4241-4670-9bed-d2097befa086 (at 192.168.202.34@tcp), waiting for 3 known clients (2 recovered, 0 in progress, and 0 evicted) to recover in 0:32 [ 6378.645777] Lustre: Skipped 3 previous similar messages [ 6411.430304] Lustre: lustre-OST0000: recovery is timed out, evict stale exports [ 6411.431750] Lustre: 30857:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-OST0000: disconnect stale client d3c330a4-4b63-4df5-a38e-6cb233ba8681@ [ 6411.434222] Lustre: 30857:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 6411.436104] Lustre: lustre-OST0000: disconnecting 1 stale clients [ 6411.442265] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13652 to 0x280000401:13673) [ 6414.386374] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 67 sec [ 6423.570005] Lustre: DEBUG MARKER: free_before: 7646308 free_after: 7646308 [ 6426.810328] Lustre: DEBUG MARKER: == replay-single test 90: lfs find identifies the missing striped file segments ========================================================== 12:15:27 (1713370527) [ 6427.889160] Lustre: Failing over lustre-OST0001 [ 6427.918553] Lustre: server umount lustre-OST0001 complete [ 6428.701475] LustreError: 11-0: lustre-OST0001-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 6428.703328] LustreError: Skipped 9 previous similar messages [ 6440.656401] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5 [ 6440.661221] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6442.308884] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6448.009616] Lustre: DEBUG MARKER: == replay-single test 93a: replay + reconnect ============ 12:15:48 (1713370548) [ 6449.084466] Lustre: Failing over lustre-OST0000 [ 6449.105296] Lustre: server umount lustre-OST0000 complete [ 6461.122117] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 6461.125549] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6462.303083] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6462.594097] LustreError: 3184:0:(ldlm_lib.c:2829:target_recovery_thread()) cfs_fail_timeout id 715 sleeping for 40000ms [ 6462.596221] LustreError: 3184:0:(ldlm_lib.c:2829:target_recovery_thread()) Skipped 1 previous similar message [ 6468.333292] Lustre: *** cfs_fail_loc=715, val=40*** [ 6478.592312] Lustre: 3491:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713370564/real 1713370564] req@ffff88009d040380 x1796592533118272/t0(0) o400->lustre-OST0000-osc-MDT0001@0@lo:28/4 lens 224/224 e 0 to 1 dl 1713370580 ref 1 fl Rpc:XQr/2c0/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' uid:0 gid:0 [ 6478.598611] Lustre: 3491:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 6 previous similar messages [ 6478.600864] Lustre: lustre-OST0000: Client lustre-MDT0001-mdtlov_UUID (at 0@lo) reconnected, waiting for 3 clients in recovery for 0:52 [ 6478.605348] Lustre: Skipped 1 previous similar message [ 6484.621382] Lustre: *** cfs_fail_loc=715, val=40*** [ 6484.622894] Lustre: Skipped 2 previous similar messages [ 6494.606575] Lustre: lustre-OST0000: Client lustre-MDT0001-mdtlov_UUID (at 0@lo) reconnected, waiting for 3 clients in recovery for 0:36 [ 6494.610391] Lustre: Skipped 2 previous similar messages [ 6500.621300] Lustre: *** cfs_fail_loc=715, val=40*** [ 6500.623723] Lustre: Skipped 2 previous similar messages [ 6502.597324] LustreError: 3184:0:(ldlm_lib.c:2829:target_recovery_thread()) cfs_fail_timeout id 715 awake [ 6502.599870] LustreError: 3184:0:(ldlm_lib.c:2829:target_recovery_thread()) Skipped 1 previous similar message [ 6503.156787] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 6503.525094] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 6507.803660] Lustre: DEBUG MARKER: == replay-single test 93b: replay + reconnect on mds ===== 12:16:48 (1713370608) [ 6508.877850] Lustre: Failing over lustre-MDT0000 [ 6508.938034] Lustre: server umount lustre-MDT0000 complete [ 6521.335703] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6522.269252] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6526.448835] LustreError: 5353:0:(ldlm_lib.c:2829:target_recovery_thread()) cfs_fail_timeout id 715 sleeping for 80000ms [ 6530.909327] Lustre: *** cfs_fail_loc=715, val=80*** [ 6530.910337] Lustre: Skipped 2 previous similar messages [ 6540.896151] Lustre: lustre-MDT0000: Client 4aa3185e-4241-4670-9bed-d2097befa086 (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:53 [ 6540.900979] Lustre: Skipped 1 previous similar message [ 6542.448696] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 6546.925325] Lustre: *** cfs_fail_loc=715, val=80*** [ 6546.926952] Lustre: Skipped 1 previous similar message [ 6556.904745] Lustre: lustre-MDT0000: Client 4aa3185e-4241-4670-9bed-d2097befa086 (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:37 [ 6558.452817] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 6562.925321] Lustre: *** cfs_fail_loc=715, val=80*** [ 6562.926766] Lustre: Skipped 1 previous similar message [ 6572.911732] Lustre: lustre-MDT0000: Client 4aa3185e-4241-4670-9bed-d2097befa086 (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:21 [ 6574.456640] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 6580.461329] Lustre: *** cfs_fail_loc=715, val=80*** [ 6580.464025] Lustre: Skipped 2 previous similar messages [ 6588.918299] Lustre: lustre-MDT0000: Client 4aa3185e-4241-4670-9bed-d2097befa086 (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:05 [ 6590.460619] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 6604.927977] Lustre: lustre-MDT0000: Recovery already passed deadline 0:10. If you do not want to wait more, you may force target eviction via 'lctl --device lustre-MDT0000 abort_recovery'. 
[ 6606.452244] LustreError: 5353:0:(ldlm_lib.c:2829:target_recovery_thread()) cfs_fail_timeout id 715 awake [ 6606.456799] Lustre: 5353:0:(ldlm_lib.c:2874:target_recovery_thread()) too long recovery - read logs [ 6606.459486] LustreError: dumping log to /tmp/lustre-log.1713370708.5353 [ 6606.517082] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13687 to 0x280000401:13705) [ 6606.517087] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13614 to 0x2c0000401:13633) [ 6606.962509] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6607.291250] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6611.251532] Lustre: DEBUG MARKER: == replay-single test 100a: DNE: create striped dir, drop update rep from MDT1, fail MDT1 ========================================================== 12:18:32 (1713370712) [ 6611.531162] Lustre: *** cfs_fail_loc=1701, val=2147483648*** [ 6611.533251] Lustre: Skipped 1 previous similar message [ 6611.534469] LustreError: 5743:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012b66df80 x1796592533158976/t98784248321(0) o1000->lustre-MDT0000-mdtlov_UUID@0@lo:149/0 lens 1056/4320 e 0 to 0 dl 1713370724 ref 1 fl Interpret:/200/0 rc 0/0 job:'osp_up1-0.0' uid:0 gid:0 [ 6612.032914] Lustre: Failing over lustre-MDT0001 [ 6612.088032] Lustre: server umount lustre-MDT0001 complete [ 6624.015680] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6624.769861] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6629.117700] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4268 to 0x2c0000400:4481) [ 6629.117703] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4582 to 0x280000400:4609) [ 6629.621659] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 6629.974074] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6634.279841] Lustre: DEBUG MARKER: == replay-single test 100b: DNE: create striped dir, fail MDT0 ========================================================== 12:18:55 (1713370735) [ 6635.156052] Lustre: Failing over lustre-MDT0000 [ 6635.224699] Lustre: server umount lustre-MDT0000 complete [ 6647.482521] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6648.595275] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6652.614706] Lustre: 10124:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012ba6f480 x1796592557007296/t450971566138(0) o36->4aa3185e-4241-4670-9bed-d2097befa086@192.168.202.34@tcp:205/0 lens 560/2880 e 0 to 0 dl 1713370780 ref 1 fl Interpret:/202/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 6652.631855] Lustre: 10124:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 4 previous similar messages [ 6652.632939] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13710 to 0x280000401:13737) [ 6652.633895] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13638 to 0x2c0000401:13665) [ 6653.243856] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6653.724784] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6658.772870] Lustre: DEBUG MARKER: == replay-single test 100c: DNE: create striped dir, abort_recov_mdt mds2 ========================================================== 12:19:19 (1713370759) [ 6660.852028] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 6661.352565] Lustre: Failing over lustre-MDT0001 [ 6661.427670] Lustre: server umount lustre-MDT0001 complete [ 6665.206645] LDISKFS-fs (dm-1): recovery complete [ 6665.208659] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6665.291703] Lustre: lustre-MDT0001: Aborting MDT recovery [ 6665.295768] LustreError: 11997:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0000-osp-MDT0001: get update log duration 0, retries 0, failed: rc = -108 [ 6666.041906] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6670.312970] Lustre: lustre-MDT0001-osd: cancel update llog [0x240000400:0x1:0x0] [ 6670.316388] Lustre: lustre-MDT0000-osp-MDT0001: cancel update llog [0x200000401:0x1:0x0] [ 6670.332366] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4494 to 0x2c0000400:4513) [ 6670.332369] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4622 to 0x280000400:4641) [ 6678.232024] Lustre: Failing over lustre-MDT0001 [ 6678.289303] Lustre: server umount lustre-MDT0001 complete [ 6690.549544] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6691.384292] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6695.643796] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4646 to 0x280000400:4673) [ 6695.643827] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4518 to 0x2c0000400:4545) [ 6696.354123] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 6696.826674] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6701.106026] Lustre: DEBUG MARKER: == replay-single test 100d: DNE: cancel update logs upon recovery abort ========================================================== 12:20:01 (1713370801) [ 6704.641685] Lustre: Failing over lustre-MDT0001 [ 6705.645771] Lustre: lustre-MDT0001-lwp-OST0001: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 6705.645988] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 6705.645989] Lustre: Skipped 10 previous similar messages [ 6705.652428] Lustre: Skipped 51 previous similar messages [ 6710.198546] Lustre: server umount lustre-MDT0001 complete [ 6710.653739] LustreError: 137-5: lustre-MDT0001: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 6710.660409] LustreError: Skipped 190 previous similar messages [ 6712.500854] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6712.581259] Lustre: lustre-MDT0001: Aborting client recovery [ 6712.583190] LustreError: 15341:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0001: Aborting recovery [ 6712.583551] LustreError: 15362:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osd: get update log duration 0, retries 0, failed: rc = -108 [ 6712.583584] Lustre: 15364:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 6712.583586] Lustre: 15364:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 6712.591193] Lustre: 15364:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0001: disconnect stale client 4aa3185e-4241-4670-9bed-d2097befa086@ [ 6712.593949] Lustre: lustre-MDT0001: disconnecting 2 stale clients [ 6712.596120] Lustre: lustre-MDT0001-osd: cancel update llog [0x24000c368:0x3:0x0] [ 6712.599253] Lustre: lustre-MDT0000-osp-MDT0001: cancel update llog [0x200034021:0x3:0x0] [ 6712.614376] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4518 to 0x2c0000400:4577) [ 6712.614382] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4646 to 0x280000400:4705) [ 6713.284204] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6717.583230] LustreError: 167-0: lustre-MDT0001-osp-MDT0000: This client was evicted by lustre-MDT0001; in progress operations using this service will fail. 
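[editor's sketch, not part of the captured console output] The repeated "wait_import_state_mount (FULL|IDLE)" markers poll the client-side MDC import until it reports FULL or IDLE. A rough hand-run equivalent on the client, using the parameter name taken from the markers (exact output format not asserted here):

    # the parameter the test framework polls for this MDT's import
    lctl get_param mdc.lustre-MDT0000-mdc-*.mds_server_uuid
    # the fuller client-side view of the same import, including its current state
    lctl get_param -n mdc.lustre-MDT0000-mdc-*.import | grep state: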
[ 6725.099835] Lustre: DEBUG MARKER: == replay-single test 100e: DNE: create striped dir on MDT0 and MDT1, fail MDT0, MDT1 ========================================================== 12:20:25 (1713370825) [ 6726.960532] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 6728.889027] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 6729.457699] Lustre: Failing over lustre-MDT0000 [ 6729.506934] Lustre: server umount lustre-MDT0000 complete [ 6730.462767] LustreError: 6917:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713370832 with bad export cookie 8182669006187923289 [ 6730.464047] Lustre: Failing over lustre-MDT0001 [ 6730.464360] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 6730.464361] LustreError: Skipped 6 previous similar messages [ 6730.469149] LustreError: 6917:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 3 previous similar messages [ 6730.579336] Lustre: server umount lustre-MDT0001 complete [ 6743.405503] LDISKFS-fs (dm-1): recovery complete [ 6743.406922] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6743.438334] LDISKFS-fs (dm-0): recovery complete [ 6743.439723] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6748.469260] Lustre: 3492:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713370834/real 1713370834] req@ffff8800a0dc1880 x1796592533307648/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713370850 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 6748.476285] Lustre: 3492:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 10 previous similar messages [ 6755.477473] LustreError: 3491:0:(client.c:1291:ptlrpc_import_delay_req()) @@@ invalidate in flight req@ffff88009f7dc700 x1796592533309376/t0(0) o250->MGC192.168.202.134@tcp@0@lo:26/25 lens 520/544 e 0 to 0 dl 0 ref 1 fl Rpc:NQU/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 6755.534238] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 6755.536423] Lustre: Skipped 18 previous similar messages [ 6755.542768] Lustre: lustre-MDT0001: in recovery but waiting for the first client to connect [ 6755.544630] Lustre: Skipped 20 previous similar messages [ 6756.195003] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6756.232993] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6756.284282] Lustre: lustre-MDT0001: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 6756.286351] Lustre: Skipped 17 previous similar messages [ 6760.608575] Lustre: lustre-MDT0001-lwp-OST0000: Connection restored to (at 0@lo) [ 6760.614858] Lustre: Skipped 54 previous similar messages [ 6761.547058] Lustre: lustre-MDT0001: Recovery over after 0:06, of 2 clients 2 recovered and 0 were evicted. 
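[editor's sketch, not part of the captured console output] The "Will be in recovery for at least 1:00" and "Recovery over after 0:06" lines summarize the server-side recovery window. A hedged way to watch the same information live, assuming the standard recovery_status parameter on the restarted target:

    # on the MDS node; reports status, connected/completed/evicted client counts and time remaining
    lctl get_param mdt.lustre-MDT0001.recovery_status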
[ 6761.553357] Lustre: Skipped 15 previous similar messages [ 6761.578282] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4646 to 0x280000400:4737) [ 6761.578301] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4518 to 0x2c0000400:4609) [ 6761.713928] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13676 to 0x2c0000401:13697) [ 6761.713933] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13748 to 0x280000401:13769) [ 6762.461033] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 6762.969504] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6763.445813] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6769.430441] Lustre: DEBUG MARKER: == replay-single test 101: Shouldn't reassign precreated objs to other files after recovery ========================================================== 12:21:10 (1713370870) [ 6772.146826] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 6779.359897] Lustre: Failing over lustre-MDT0000 [ 6779.453233] Lustre: server umount lustre-MDT0000 complete [ 6784.216006] LDISKFS-fs (dm-0): recovery complete [ 6784.218660] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6784.344121] Lustre: lustre-MDT0000: Aborting client recovery [ 6784.346209] LustreError: 22725:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 6784.349312] Lustre: 22754:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 6784.350699] LustreError: 22752:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0000-osd: get update log duration 0, retries 0, failed: rc = -108 [ 6784.356995] Lustre: 22754:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 6784.360110] Lustre: 22754:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client lustre-MDT0001-mdtlov_UUID@ [ 6784.364070] Lustre: 22754:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 6784.367318] Lustre: lustre-MDT0000: disconnecting 2 stale clients [ 6784.370634] Lustre: lustre-MDT0000-osd: cancel update llog [0x20001a210:0x1:0x0] [ 6784.376001] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x2400007ec:0x1:0x0] [ 6784.400458] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:13676 to 0x2c0000401:14241) [ 6784.400461] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:13771 to 0x280000401:14313) [ 6785.395056] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6816.768828] Lustre: DEBUG MARKER: == replay-single test 102a: check resend (request lost) with multiple modify RPCs in flight ========================================================== 12:21:57 (1713370917) [ 6817.224904] Lustre: *** cfs_fail_loc=159, val=0*** [ 6833.225569] Lustre: lustre-MDT0001: Client 4aa3185e-4241-4670-9bed-d2097befa086 (at 
192.168.202.34@tcp) reconnecting [ 6833.230675] Lustre: Skipped 1 previous similar message [ 6837.876804] Lustre: DEBUG MARKER: == replay-single test 102b: check resend (reply lost) with multiple modify RPCs in flight ========================================================== 12:22:18 (1713370938) [ 6854.382506] Lustre: lustre-MDT0000: Client 4aa3185e-4241-4670-9bed-d2097befa086 (at 192.168.202.34@tcp) reconnecting [ 6858.958851] Lustre: DEBUG MARKER: == replay-single test 102c: check replay w/o reconstruction with multiple mod RPCs in flight ========================================================== 12:22:39 (1713370959) [ 6862.054076] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 6864.500514] Lustre: Failing over lustre-MDT0000 [ 6864.573599] Lustre: server umount lustre-MDT0000 complete [ 6878.971720] LDISKFS-fs (dm-0): recovery complete [ 6878.974998] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6880.229388] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6884.101476] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14821 to 0x280000401:14857) [ 6884.101503] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14749 to 0x2c0000401:14785) [ 6884.978040] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6885.455059] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6890.673076] Lustre: DEBUG MARKER: == replay-single test 102d: check replay [ 6893.122969] Lustre: Failing over lustre-MDT0001 [ 6893.193809] Lustre: server umount lustre-MDT0001 complete [ 6905.343782] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 6906.259310] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6910.478749] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:4769) [ 6910.478754] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:4641) [ 6911.207319] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 6911.756969] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6916.853602] Lustre: DEBUG MARKER: == replay-single test 103: Check otr_next_id overflow ==== 12:23:37 (1713371017) [ 6917.942740] Lustre: Failing over lustre-MDT0000 [ 6918.000502] Lustre: server umount lustre-MDT0000 complete [ 6918.143002] LustreError: 6917:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713371019 with bad export cookie 8182669006188124448 [ 6929.927086] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6930.744801] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6935.038501] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:14889) [ 6935.038507] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:14817) [ 6935.580807] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6935.946760] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6940.167722] Lustre: DEBUG MARKER: == replay-single test 110a: DNE: create striped dir, fail MDT1 ========================================================== 12:24:00 (1713371040) [ 6942.099580] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 6942.620806] Lustre: Failing over lustre-MDT0000 [ 6942.669790] Lustre: server umount lustre-MDT0000 complete [ 6955.672482] LDISKFS-fs (dm-0): recovery complete [ 6955.673547] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6956.482312] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6960.786274] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:14849) [ 6960.786296] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:14921) [ 6961.451559] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6961.863084] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 6966.715897] Lustre: DEBUG MARKER: == replay-single test 110b: DNE: create striped dir, fail MDT1 and client ========================================================== 12:24:27 (1713371067) [ 6968.993824] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 6969.613912] Lustre: Failing over lustre-MDT0000 [ 6969.674444] Lustre: server umount lustre-MDT0000 complete [ 6982.646579] LDISKFS-fs (dm-0): recovery complete [ 6982.647752] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 6983.415550] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 6985.580523] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 6986.862716] Lustre: lustre-MDT0000: Denying connection for new client 9f851f38-27b5-42c2-bf0d-8b26de8da310 (at 192.168.202.34@tcp), waiting for 2 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 6986.866480] Lustre: Skipped 6 previous similar messages [ 7051.964625] Lustre: lustre-MDT0000: Denying connection for new client 9f851f38-27b5-42c2-bf0d-8b26de8da310 (at 192.168.202.34@tcp), waiting for 2 known clients (0 recovered, 1 in progress, and 0 evicted) to recover in 0:04 [ 7051.968529] Lustre: Skipped 12 previous similar messages [ 7056.430413] Lustre: lustre-MDT0000: recovery is timed out, evict stale exports [ 7056.433834] Lustre: 4054:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client 4aa3185e-4241-4670-9bed-d2097befa086@ [ 7056.440094] Lustre: 4054:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 7056.444926] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 7056.473398] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:14953) [ 7056.473417] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:14881) [ 7062.275698] Lustre: DEBUG MARKER: == replay-single test 110c: DNE: create striped dir, fail MDT2 ========================================================== 12:26:03 (1713371163) [ 7064.906808] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 7065.603236] Lustre: Failing over lustre-MDT0001 [ 7065.680124] Lustre: server umount lustre-MDT0001 complete [ 7067.853834] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_statfs to node 0@lo failed: rc = -107 [ 7067.859975] LustreError: Skipped 7 previous similar messages [ 7079.787774] LDISKFS-fs (dm-1): recovery complete [ 7079.789378] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 7080.678977] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7084.911883] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:4673) [ 7084.911890] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:4801) [ 7085.431996] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 7085.821035] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 7090.199616] Lustre: DEBUG MARKER: == replay-single test 110d: DNE: create striped dir, fail MDT2 and client ========================================================== 12:26:31 (1713371191) [ 7092.087771] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 7092.645099] Lustre: Failing over lustre-MDT0001 [ 7092.697853] Lustre: server umount lustre-MDT0001 complete [ 7106.511786] LDISKFS-fs (dm-1): recovery complete [ 7106.514445] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 7107.647822] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7110.109996] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 7180.430409] Lustre: lustre-MDT0001: recovery is timed out, evict stale exports [ 7180.433825] Lustre: 9358:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0001: disconnect stale client 9f851f38-27b5-42c2-bf0d-8b26de8da310@ [ 7180.440420] Lustre: lustre-MDT0001: disconnecting 1 stale clients [ 7180.475462] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:4833) [ 7180.475474] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:4705) [ 7186.155134] Lustre: DEBUG MARKER: == replay-single test 110e: DNE: create striped dir, uncommit on MDT2, fail client/MDT1/MDT2 ========================================================== 12:28:06 (1713371286) [ 7188.742829] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 7191.698077] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 7192.482095] Lustre: Failing over lustre-MDT0000 [ 7192.571600] Lustre: server umount lustre-MDT0000 complete [ 7193.961239] LustreError: 8072:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713371295 with bad export cookie 8182669006188130790 [ 7193.962694] Lustre: Failing over lustre-MDT0001 [ 7193.970159] LustreError: 8072:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 4 previous similar messages [ 7194.146657] Lustre: server umount lustre-MDT0001 complete [ 7207.506066] LDISKFS-fs (dm-1): recovery complete [ 7207.509623] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 7207.510068] LDISKFS-fs (dm-0): recovery complete [ 7207.510235] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 7218.982573] LustreError: 3491:0:(client.c:1291:ptlrpc_import_delay_req()) @@@ invalidate in flight req@ffff880070e05c00 x1796592533707392/t0(0) o250->MGC192.168.202.134@tcp@0@lo:26/25 lens 520/544 e 0 to 0 dl 0 ref 1 fl Rpc:NQU/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 7219.901008] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7219.926374] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7222.176687] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 7223.464834] Lustre: lustre-MDT0000: Denying connection for new client 8f5de759-4fc2-4111-a799-82147b33b31f (at 192.168.202.34@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 7223.470488] Lustre: Skipped 15 previous similar messages [ 7225.153009] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:14913) [ 7225.153054] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:14985) [ 7293.430244] Lustre: lustre-MDT0001: recovery is timed out, evict stale exports [ 7293.431689] Lustre: 13428:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0001: disconnect stale client 429e7419-4a9d-4cea-b5f6-df9a86805950@ [ 7293.434204] Lustre: lustre-MDT0001: disconnecting 1 stale clients [ 7293.456777] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:4737) [ 7293.456832] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:4865) [ 7295.644220] Lustre: DEBUG MARKER: SKIP: replay-single test_110f skipping excluded test 110f [ 7296.983298] Lustre: DEBUG MARKER: == replay-single test 110g: DNE: create striped dir, uncommit on MDT1, fail client/MDT1/MDT2 ========================================================== 12:29:57 (1713371397) [ 7298.862288] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 7300.855046] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 7301.339788] Lustre: Failing over lustre-MDT0000 [ 7301.392586] Lustre: server umount lustre-MDT0000 complete [ 7302.409743] LustreError: 6919:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713371404 with bad export cookie 8182669006188135179 [ 7302.410928] Lustre: Failing over lustre-MDT0001 [ 7302.416110] LustreError: 6919:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 7307.281702] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 7307.283157] Lustre: Skipped 6 previous similar messages [ 7308.258042] Lustre: server umount lustre-MDT0001 complete [ 7321.457104] LDISKFS-fs (dm-0): recovery complete [ 7321.458204] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 7321.501526] LDISKFS-fs (dm-1): recovery complete [ 7321.502636] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 7328.285474] LustreError: 3491:0:(client.c:1291:ptlrpc_import_delay_req()) @@@ invalidate in flight req@ffff880096378000 x1796592533739520/t0(0) o250->MGC192.168.202.134@tcp@0@lo:26/25 lens 520/544 e 0 to 0 dl 0 ref 1 fl Rpc:NQU/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 7328.407759] LustreError: 137-5: lustre-MDT0001: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 7328.413008] LustreError: Skipped 122 previous similar messages [ 7329.424118] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7329.455226] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7331.547399] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:4897) [ 7331.551140] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:4769) [ 7332.088651] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 7401.430251] Lustre: lustre-MDT0000: recovery is timed out, evict stale exports [ 7401.431897] Lustre: 17976:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client 8f5de759-4fc2-4111-a799-82147b33b31f@ [ 7401.434746] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 7401.438735] Lustre: lustre-MDT0000-osp-MDT0001: Connection restored to (at 0@lo) [ 7401.439271] Lustre: lustre-MDT0000: Recovery over after 1:10, of 2 clients 1 recovered and 1 was evicted. [ 7401.439273] Lustre: Skipped 11 previous similar messages [ 7401.442902] Lustre: Skipped 44 previous similar messages [ 7401.451972] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:15017) [ 7401.451973] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:14945) [ 7408.872177] Lustre: DEBUG MARKER: == replay-single test 111a: DNE: unlink striped dir, fail MDT1 ========================================================== 12:31:49 (1713371509) [ 7410.969868] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 7411.565724] Lustre: Failing over lustre-MDT0000 [ 7411.621656] Lustre: server umount lustre-MDT0000 complete [ 7411.629486] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 7411.633724] Lustre: Skipped 43 previous similar messages [ 7424.723484] LDISKFS-fs (dm-0): recovery complete [ 7424.725865] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 7424.770601] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 7424.774005] LustreError: Skipped 7 previous similar messages [ 7424.845916] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 7424.847693] Lustre: Skipped 13 previous similar messages [ 7424.854703] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 7424.856183] Lustre: Skipped 15 previous similar messages [ 7425.557000] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7428.668742] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 7428.670749] Lustre: Skipped 11 previous similar messages [ 7429.879040] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:15049) [ 7429.883106] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:14977) [ 7430.457962] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 7430.884175] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 7435.707776] Lustre: DEBUG MARKER: == replay-single test 111b: DNE: unlink striped dir, fail MDT2 ========================================================== 12:32:16 (1713371536) [ 7437.841900] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 7438.405677] Lustre: Failing over lustre-MDT0001 [ 7438.467758] Lustre: server umount lustre-MDT0001 complete [ 7451.619372] LDISKFS-fs (dm-1): recovery complete [ 7451.621792] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 7452.616719] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7455.081341] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 7481.500899] Lustre: lustre-MDT0001: Denying connection for new client 7f413900-f95d-487d-9c68-a5c467ae2d42 (at 192.168.202.34@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 0 evicted) to recover in 0:44 [ 7481.512411] Lustre: Skipped 32 previous similar messages [ 7526.430297] Lustre: lustre-MDT0001: recovery is timed out, evict stale exports [ 7526.452782] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:4801) [ 7526.452827] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:4929) [ 7530.296536] Lustre: DEBUG MARKER: == replay-single test 111c: DNE: unlink striped dir, uncommit on MDT1, fail client/MDT1/MDT2 ========================================================== 12:33:51 (1713371631) [ 7532.727260] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 7535.453070] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 7536.077731] Lustre: Failing over lustre-MDT0000 [ 7536.122353] Lustre: server umount lustre-MDT0000 complete [ 7537.462216] LustreError: 10469:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713371639 with bad export cookie 8182669006188139876 [ 7537.463333] Lustre: Failing over lustre-MDT0001 [ 7537.470321] LustreError: 10469:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 5 previous similar messages [ 7543.307275] Lustre: server umount lustre-MDT0001 complete [ 7557.219478] LDISKFS-fs (dm-0): recovery complete [ 7557.221763] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 7557.227127] LDISKFS-fs (dm-1): recovery complete [ 7557.229416] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 7562.855597] LustreError: 3491:0:(client.c:1291:ptlrpc_import_delay_req()) @@@ invalidate in flight req@ffff88009db3d180 x1796592533815296/t0(0) o250->MGC192.168.202.134@tcp@0@lo:26/25 lens 520/544 e 0 to 0 dl 0 ref 1 fl Rpc:NQU/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 7563.770052] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7563.800118] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7566.319558] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 7568.967892] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:4961) [ 7568.967956] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:4833) [ 7637.430260] Lustre: lustre-MDT0000: recovery is timed out, evict stale exports [ 7637.431698] Lustre: 27769:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client 7f413900-f95d-487d-9c68-a5c467ae2d42@ [ 7637.434083] Lustre: 27769:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 7637.435834] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 7637.436997] Lustre: Skipped 1 previous similar message [ 7637.452023] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15009) [ 7637.452095] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:15081) [ 7642.680275] Lustre: DEBUG MARKER: == replay-single test 111d: DNE: unlink striped dir, uncommit on MDT2, fail client/MDT1/MDT2 ========================================================== 12:35:43 (1713371743) [ 7645.265456] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 7647.882578] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 7648.563472] Lustre: Failing over lustre-MDT0000 [ 7648.637697] Lustre: server umount lustre-MDT0000 complete [ 7649.930043] Lustre: Failing over lustre-MDT0001 [ 7650.046929] Lustre: server umount lustre-MDT0001 complete [ 7664.134087] LDISKFS-fs (dm-0): recovery complete [ 7664.134167] LDISKFS-fs (dm-1): recovery complete [ 7664.137776] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 7664.137813] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 7670.149314] Lustre: 3492:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713371755/real 1713371755] req@ffff880080851880 x1796592533842432/t0(0) o400->lustre-MDT0001-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713371771 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 7670.162066] Lustre: 3492:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 21 previous similar messages [ 7675.166681] LustreError: 3491:0:(client.c:1291:ptlrpc_import_delay_req()) @@@ invalidate in flight req@ffff8800749f1500 x1796592533843648/t0(0) o250->MGC192.168.202.134@tcp@0@lo:26/25 lens 520/544 e 0 to 0 dl 0 ref 1 fl Rpc:NQU/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 7675.395541] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_connect to node 0@lo failed: rc = -114 [ 7675.398053] LustreError: Skipped 6 previous similar messages [ 7676.224703] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7676.238478] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7678.971903] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 7680.362124] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:15113) [ 7680.362145] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15041) [ 7748.430362] Lustre: lustre-MDT0001: recovery is timed out, evict stale exports [ 7748.461250] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:4993) [ 7748.461252] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:4865) [ 7754.848363] Lustre: DEBUG MARKER: == replay-single test 111e: DNE: unlink striped dir, uncommit on MDT2, fail MDT1/MDT2 ========================================================== 12:37:35 (1713371855) [ 7757.501460] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 7760.200831] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 7760.909648] Lustre: Failing over lustre-MDT0000 [ 7760.986277] Lustre: server umount lustre-MDT0000 complete [ 7762.130533] Lustre: Failing over lustre-MDT0001 [ 7762.248656] Lustre: server umount lustre-MDT0001 complete [ 7775.753615] LDISKFS-fs (dm-0): recovery complete [ 7775.754744] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 7775.797766] LDISKFS-fs (dm-1): recovery complete [ 7775.798805] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 7788.170990] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7788.192954] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7791.616034] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:4897) [ 7791.616043] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:5025) [ 7791.619176] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15073) [ 7791.619662] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:15145) [ 7792.220753] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 7792.650720] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 7793.062814] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 7797.904026] Lustre: DEBUG MARKER: == replay-single test 111f: DNE: unlink striped dir, uncommit on MDT1, fail MDT1/MDT2 ========================================================== 12:38:18 (1713371898) [ 7800.348347] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 7802.940061] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 7803.651497] Lustre: Failing over lustre-MDT0000 [ 7803.728718] Lustre: server umount lustre-MDT0000 complete [ 7805.026229] LustreError: 6918:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713371906 with bad export cookie 8182669006188147240 [ 7805.027495] Lustre: Failing over lustre-MDT0001 [ 7805.034403] LustreError: 6918:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 6 previous similar messages [ 7805.151046] Lustre: server umount lustre-MDT0001 complete [ 7818.729428] LDISKFS-fs (dm-1): recovery complete [ 7818.731748] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 7818.743528] LDISKFS-fs (dm-0): recovery complete [ 7818.746068] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 7829.405648] LustreError: 3491:0:(client.c:1291:ptlrpc_import_delay_req()) @@@ invalidate in flight req@ffff880070eef800 x1796592533892800/t0(0) o250->MGC192.168.202.134@tcp@0@lo:26/25 lens 520/544 e 0 to 0 dl 0 ref 1 fl Rpc:NQU/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 7829.416594] LustreError: 3491:0:(client.c:1291:ptlrpc_import_delay_req()) Skipped 1 previous similar message [ 7830.445122] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7830.492809] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7835.540868] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:5057) [ 7835.541568] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:4929) [ 7835.560709] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:15177) [ 7835.560716] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15105) [ 7836.462418] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 7837.043159] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 7837.582718] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 7843.456158] Lustre: DEBUG MARKER: == replay-single test 111g: DNE: unlink striped dir, fail MDT1/MDT2 ========================================================== 12:39:04 (1713371944) [ 7845.793327] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 7848.177105] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 7848.951334] Lustre: Failing over lustre-MDT0000 [ 7849.034644] Lustre: server umount lustre-MDT0000 complete [ 7850.447336] Lustre: Failing over lustre-MDT0001 [ 7850.568207] Lustre: server umount lustre-MDT0001 complete [ 7864.793055] LDISKFS-fs (dm-0): recovery complete [ 7864.794917] LDISKFS-fs (dm-1): recovery complete [ 7864.795201] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 7864.805721] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 7875.622556] Lustre: Evicted from MGS (at 192.168.202.134@tcp) after server handle changed from 0x0 to 0x718eae1b84718450 [ 7876.633755] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7876.666744] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7881.766805] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:4961) [ 7881.766808] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:5089) [ 7881.782623] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:15209) [ 7881.782644] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15137) [ 7882.463902] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid,mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 7883.001639] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 7883.496045] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 7889.100267] Lustre: DEBUG MARKER: == replay-single test 112a: DNE: cross MDT rename, fail MDT1 ========================================================== 12:39:49 (1713371989) [ 7889.609585] Lustre: DEBUG MARKER: SKIP: replay-single test_112a needs >= 4 MDTs [ 7892.335499] Lustre: DEBUG MARKER: == replay-single test 112b: DNE: cross MDT rename, fail MDT2 ========================================================== 12:39:52 (1713371992) [ 7892.896522] Lustre: DEBUG MARKER: SKIP: replay-single test_112b needs >= 4 MDTs [ 7895.614957] Lustre: DEBUG MARKER: == replay-single test 112c: DNE: cross MDT rename, fail MDT3 ========================================================== 12:39:56 (1713371996) [ 7896.182961] Lustre: DEBUG MARKER: SKIP: replay-single test_112c needs >= 4 MDTs [ 7899.017632] Lustre: DEBUG MARKER: == replay-single test 112d: DNE: cross MDT rename, fail MDT4 ========================================================== 12:39:59 (1713371999) [ 7899.577042] Lustre: DEBUG MARKER: SKIP: replay-single test_112d needs >= 4 MDTs [ 7902.405810] Lustre: DEBUG MARKER: == replay-single test 112e: DNE: cross MDT rename, fail MDT1 and MDT2 ========================================================== 12:40:03 (1713372003) [ 7902.966557] Lustre: DEBUG MARKER: SKIP: replay-single test_112e needs >= 4 MDTs [ 7905.768066] Lustre: DEBUG MARKER: == replay-single test 112f: DNE: cross MDT rename, fail MDT1 and MDT3 ========================================================== 12:40:06 (1713372006) [ 7906.320209] Lustre: DEBUG MARKER: SKIP: replay-single test_112f needs >= 4 MDTs [ 7909.096540] Lustre: DEBUG MARKER: == replay-single test 112g: DNE: cross MDT rename, fail MDT1 and MDT4 ========================================================== 12:40:09 (1713372009) [ 7909.636613] Lustre: DEBUG MARKER: SKIP: replay-single test_112g needs >= 4 MDTs [ 7912.337768] Lustre: DEBUG MARKER: == replay-single test 112h: DNE: cross MDT rename, fail MDT2 and MDT3 ========================================================== 12:40:12 (1713372012) [ 
7912.872056] Lustre: DEBUG MARKER: SKIP: replay-single test_112h needs >= 4 MDTs [ 7915.568866] Lustre: DEBUG MARKER: == replay-single test 112i: DNE: cross MDT rename, fail MDT2 and MDT4 ========================================================== 12:40:16 (1713372016) [ 7916.104694] Lustre: DEBUG MARKER: SKIP: replay-single test_112i needs >= 4 MDTs [ 7918.833276] Lustre: DEBUG MARKER: == replay-single test 112j: DNE: cross MDT rename, fail MDT3 and MDT4 ========================================================== 12:40:19 (1713372019) [ 7919.357344] Lustre: DEBUG MARKER: SKIP: replay-single test_112j needs >= 4 MDTs [ 7922.122932] Lustre: DEBUG MARKER: == replay-single test 112k: DNE: cross MDT rename, fail MDT1,MDT2,MDT3 ========================================================== 12:40:22 (1713372022) [ 7922.684921] Lustre: DEBUG MARKER: SKIP: replay-single test_112k needs >= 4 MDTs [ 7925.469310] Lustre: DEBUG MARKER: == replay-single test 112l: DNE: cross MDT rename, fail MDT1,MDT2,MDT4 ========================================================== 12:40:26 (1713372026) [ 7926.018777] Lustre: DEBUG MARKER: SKIP: replay-single test_112l needs >= 4 MDTs [ 7928.768600] Lustre: DEBUG MARKER: == replay-single test 112m: DNE: cross MDT rename, fail MDT1,MDT3,MDT4 ========================================================== 12:40:29 (1713372029) [ 7929.340066] Lustre: DEBUG MARKER: SKIP: replay-single test_112m needs >= 4 MDTs [ 7931.977080] Lustre: DEBUG MARKER: == replay-single test 112n: DNE: cross MDT rename, fail MDT2,MDT3,MDT4 ========================================================== 12:40:32 (1713372032) [ 7932.355373] Lustre: DEBUG MARKER: SKIP: replay-single test_112n needs >= 4 MDTs [ 7934.684564] Lustre: DEBUG MARKER: == replay-single test 115: failover for create/unlink striped directory ========================================================== 12:40:35 (1713372035) [ 7937.215655] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 7938.232091] Lustre: Failing over lustre-MDT0001 [ 7938.317356] Lustre: server umount lustre-MDT0001 complete [ 7940.909849] LustreError: 137-5: lustre-MDT0001: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 7940.913668] LustreError: Skipped 88 previous similar messages [ 7952.428023] LDISKFS-fs (dm-1): recovery complete [ 7952.429457] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 7953.533470] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7957.553136] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:4993) [ 7957.553289] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:5121) [ 7958.339498] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 7958.880322] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 7963.170677] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 7964.187761] Lustre: Failing over lustre-MDT0000 [ 7964.255416] Lustre: server umount lustre-MDT0000 complete [ 7978.461547] LDISKFS-fs (dm-0): recovery complete [ 7978.463939] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 7979.693830] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 7983.642622] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15169) [ 7983.642713] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:15241) [ 7984.447081] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 7985.012399] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 7991.022160] Lustre: DEBUG MARKER: == replay-single test 116a: large update log master MDT recovery ========================================================== 12:41:31 (1713372091) [ 7993.800423] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 7994.169047] Lustre: *** cfs_fail_loc=1702, val=0*** [ 7995.077248] Lustre: Failing over lustre-MDT0000 [ 7995.153393] Lustre: server umount lustre-MDT0000 complete [ 8008.277321] LDISKFS-fs (dm-0): recovery complete [ 8008.278486] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8009.017868] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8013.358370] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to (at 0@lo) [ 8013.361026] Lustre: Skipped 45 previous similar messages [ 8013.377089] Lustre: lustre-MDT0000: Recovery over after 0:04, of 2 clients 2 recovered and 0 were evicted. [ 8013.379705] Lustre: Skipped 14 previous similar messages [ 8013.394753] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15201) [ 8013.394755] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:15273) [ 8013.860080] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 8014.185146] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 8018.227754] Lustre: DEBUG MARKER: == replay-single test 116b: large update log slave MDT recovery ========================================================== 12:41:59 (1713372119) [ 8019.963888] Lustre: DEBUG MARKER: mds2 REPLAY BARRIER on lustre-MDT0001 [ 8020.189871] Lustre: *** cfs_fail_loc=1702, val=0*** [ 8020.789973] Lustre: Failing over lustre-MDT0001 [ 8020.847771] Lustre: server umount lustre-MDT0001 complete [ 8023.373570] Lustre: lustre-MDT0001-osp-MDT0000: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 8023.376524] Lustre: Skipped 46 previous similar messages [ 8033.623029] LDISKFS-fs (dm-1): recovery complete [ 8033.625231] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 8033.703130] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180 [ 8033.705338] Lustre: Skipped 14 previous similar messages [ 8033.710571] Lustre: lustre-MDT0001: in recovery but waiting for the first client to connect [ 8033.712270] Lustre: Skipped 14 previous similar messages [ 8033.964317] Lustre: lustre-MDT0001: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 8033.966500] Lustre: Skipped 14 previous similar messages [ 8034.371218] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8038.733201] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000400:4617 to 0x2c0000400:5025) [ 8038.733207] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:4745 to 0x280000400:5153) [ 8039.220345] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid [ 8039.541149] Lustre: DEBUG MARKER: mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec [ 8044.213729] Lustre: DEBUG MARKER: == replay-single test 117: DNE: cross MDT unlink, fail MDT1 and MDT2 ========================================================== 12:42:24 (1713372144) [ 8044.747987] Lustre: DEBUG MARKER: SKIP: replay-single test_117 needs >= 4 MDTs [ 8047.493653] Lustre: DEBUG MARKER: == replay-single test 118: invalidate osp update will not cause update log corruption ========================================================== 12:42:28 (1713372148) [ 8047.989159] Lustre: *** cfs_fail_loc=1705, val=0*** [ 8048.631559] LustreError: 28481:0:(llog_cat.c:737:llog_cat_cancel_arr_rec()) lustre-MDT0001-osp-MDT0000: fail to cancel 1 llog-records: rc = -116 [ 8048.637455] LustreError: 28481:0:(llog_cat.c:773:llog_cat_cancel_records()) lustre-MDT0001-osp-MDT0000: fail to cancel 1 of 1 llog-records: rc = -116 [ 8050.936662] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 8051.661588] Lustre: Failing over lustre-MDT0000 [ 8051.738344] Lustre: server umount lustre-MDT0000 complete [ 8065.846935] LDISKFS-fs (dm-0): recovery complete [ 8065.849608] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8065.897641] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 8065.901000] LustreError: Skipped 7 previous similar messages [ 8067.042148] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8071.024177] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15233) [ 8071.024180] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:15305) [ 8071.857060] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 8072.430996] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 8078.233426] Lustre: DEBUG MARKER: == replay-single test 119: timeout of normal replay does not cause DNE replay fails ========================================================== 12:42:58 (1713372178) [ 8081.237663] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 8082.262659] Lustre: Failing over lustre-MDT0000 [ 8082.336351] Lustre: server umount lustre-MDT0000 complete [ 8087.472244] LDISKFS-fs (dm-0): recovery complete [ 8087.474928] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8088.670477] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8089.052880] Lustre: 13333:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 60, extend: 0 [ 8090.179099] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 8092.589742] Lustre: 5743:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 60, extend: 0 [ 8092.595962] LustreError: 4980:0:(ldlm_lib.c:2617:replay_request_or_update()) cfs_fail_timeout id 714 sleeping for 65000ms [ 8157.600317] LustreError: 4980:0:(ldlm_lib.c:2617:replay_request_or_update()) cfs_fail_timeout id 714 awake [ 8157.604699] Lustre: 4980:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client f5f6b8bf-e5fe-4343-8784-848c0b65208d@192.168.202.34@tcp [ 8157.611747] Lustre: 4980:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 8157.616510] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 8157.619439] Lustre: Skipped 1 previous similar message [ 8157.622247] Lustre: 4980:0:(ldlm_lib.c:1824:abort_req_replay_queue()) @@@ aborted: req@ffff88012b66d180 x1796592558595904/t0(528280977412) o36->f5f6b8bf-e5fe-4343-8784-848c0b65208d@192.168.202.34@tcp:186/0 lens 528/0 e 7 to 0 dl 1713372271 ref 1 fl Complete:/204/ffffffff rc 0/-1 job:'mcreate.0' uid:0 gid:0 [ 8157.634280] Lustre: lustre-MDT0000: recovery is timed out, evict stale exports [ 8157.636133] Lustre: lustre-MDT0000: Denying connection for new client f5f6b8bf-e5fe-4343-8784-848c0b65208d (at 192.168.202.34@tcp), waiting for 2 known clients (1 recovered, 0 in progress, and 1 evicted) already passed deadline 0:09 [ 8157.636135] Lustre: Skipped 37 previous similar messages [ 8157.649864] Lustre: 
4980:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 60, extend: 1 [ 8157.682778] Lustre: 4980:0:(ldlm_lib.c:2300:target_recovery_overseer()) lustre-MDT0000 recovery is aborted by hard timeout [ 8157.686829] Lustre: 4980:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 8157.690677] Lustre: 4980:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 8157.697053] Lustre: lustre-MDT0000-osd: cancel update llog [0x200034fc0:0x1:0x0] [ 8157.703025] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x24000dad9:0x1:0x0] [ 8157.711260] Lustre: 4980:0:(ldlm_lib.c:2874:target_recovery_thread()) too long recovery - read logs [ 8157.715127] LustreError: dumping log to /tmp/lustre-log.1713372259.4980 [ 8157.792875] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:15337) [ 8157.792877] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15265) [ 8160.310233] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 68 sec [ 8166.944324] Lustre: DEBUG MARKER: == replay-single test 120: DNE fail abort should stop both normal and DNE replay ========================================================== 12:44:27 (1713372267) [ 8169.239853] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 8171.555701] Lustre: Failing over lustre-MDT0000 [ 8172.718294] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 8172.721891] Lustre: Skipped 7 previous similar messages [ 8177.213313] Lustre: server umount lustre-MDT0000 complete [ 8181.717831] LDISKFS-fs (dm-0): recovery complete [ 8181.720162] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8181.839712] Lustre: lustre-MDT0000: Aborting client recovery [ 8181.841645] LustreError: 7728:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 8181.844506] Lustre: 7757:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 8181.848857] Lustre: 7757:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 1 previous similar message [ 8181.854510] Lustre: lustre-MDT0000-osd: cancel update llog [0x20003ccc0:0x3:0x0] [ 8181.862301] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240014069:0x1:0x0] [ 8181.889036] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:14873 to 0x280000401:15369) [ 8181.889046] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15297) [ 8183.001474] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8186.848738] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
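
The cfs_fail_loc lines in these tests (1702, 1705, 714, 721, ...) come from Lustre's fault-injection framework: the test harness arms a failure point on the server before provoking the failover, and the kernel prints a message each time that point fires. Below is a minimal sketch of how such a point is typically armed and cleared with lctl; it assumes a server node with the Lustre modules loaded and assumes the logged values are hexadecimal, as the test scripts pass them (0x-prefixed constants), rather than the exact commands this particular run issued.

    # arm a failure point (0x721 is the value logged repeatedly during test 121 below)
    lctl set_param fail_loc=0x721
    # some failure points also consume a companion value
    lctl set_param fail_val=1
    # disarm the failure point once the scenario has been exercised
    lctl set_param fail_loc=0
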
[ 8197.048341] Lustre: DEBUG MARKER: == replay-single test 121: lock replay timed out and race ========================================================== 12:44:57 (1713372297) [ 8198.095734] Lustre: Failing over lustre-MDT0000 [ 8198.161990] Lustre: server umount lustre-MDT0000 complete [ 8201.869681] Lustre: *** cfs_fail_loc=721, val=0*** [ 8201.872033] Lustre: Skipped 17 previous similar messages [ 8202.081593] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8203.294900] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8203.960569] Lustre: *** cfs_fail_loc=721, val=0*** [ 8203.963088] Lustre: Skipped 32 previous similar messages [ 8205.510796] Lustre: *** cfs_fail_loc=721, val=1*** [ 8205.512974] Lustre: Skipped 44 previous similar messages [ 8207.218996] Lustre: *** cfs_fail_loc=721, val=1*** [ 8207.221295] Lustre: Skipped 22 previous similar messages [ 8209.005683] Lustre: *** cfs_fail_loc=721, val=1*** [ 8209.007954] Lustre: Skipped 24 previous similar messages [ 8214.029498] Lustre: *** cfs_fail_loc=721, val=1*** [ 8214.031844] Lustre: Skipped 19 previous similar messages [ 8220.274971] Lustre: lustre-MDT0000: Client f5f6b8bf-e5fe-4343-8784-848c0b65208d (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:53 [ 8222.237662] Lustre: *** cfs_fail_loc=721, val=1*** [ 8222.240292] Lustre: Skipped 36 previous similar messages [ 8236.288702] Lustre: lustre-MDT0000: Client f5f6b8bf-e5fe-4343-8784-848c0b65208d (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:37 [ 8237.218720] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 8239.069664] Lustre: *** cfs_fail_loc=721, val=1*** [ 8239.072037] Lustre: Skipped 63 previous similar messages [ 8252.301703] Lustre: lustre-MDT0000: Client f5f6b8bf-e5fe-4343-8784-848c0b65208d (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:21 [ 8267.223787] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 8268.314805] Lustre: lustre-MDT0000: Client f5f6b8bf-e5fe-4343-8784-848c0b65208d (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:19 [ 8272.317564] Lustre: *** cfs_fail_loc=721, val=1*** [ 8272.319854] Lustre: Skipped 133 previous similar messages [ 8284.328243] Lustre: lustre-MDT0000: Client f5f6b8bf-e5fe-4343-8784-848c0b65208d (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:03 [ 8297.228347] Lustre: 3491:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713372368/real 1713372368] req@ffff88012f97e680 x1796592534101120/t0(0) o400->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 224/224 e 0 to 1 dl 1713372398 ref 1 fl Rpc:XQr/2c0/ffffffff rc 0/-1 job:'ptlrpcd_rcv.0' uid:0 gid:0 [ 8297.241625] Lustre: 3491:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 43 previous similar messages [ 8297.246336] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 8297.251411] Lustre: *** cfs_fail_loc=721, val=1*** [ 8297.253837] Lustre: Skipped 2 previous similar messages [ 8297.256466] Lustre: lustre-MDT0000: recovery is timed out, evict stale exports [ 8316.348819] Lustre: lustre-MDT0000: Client f5f6b8bf-e5fe-4343-8784-848c0b65208d (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in 
recovery for 0:11 [ 8316.355146] Lustre: Skipped 1 previous similar message [ 8327.251798] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 8336.492907] Lustre: *** cfs_fail_loc=721, val=1*** [ 8336.495301] Lustre: Skipped 250 previous similar messages [ 8348.372273] Lustre: lustre-MDT0000: Recovery already passed deadline 0:00. If you do not want to wait more, you may force taget eviction via 'lctl --device lustre-MDT0000 abort_recovery. [ 8357.256814] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 8357.261901] Lustre: 9951:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 180, extend: 1 [ 8357.267954] Lustre: 9951:0:(ldlm_lib.c:1992:extend_recovery_timer()) Skipped 25 previous similar messages [ 8380.385959] Lustre: lustre-MDT0000: Client f5f6b8bf-e5fe-4343-8784-848c0b65208d (at 192.168.202.34@tcp) reconnected, waiting for 2 clients in recovery for 0:03 [ 8380.392219] Lustre: Skipped 2 previous similar messages [ 8387.261708] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 8387.266621] Lustre: 9951:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 180, extend: 1 [ 8387.272677] Lustre: 9951:0:(ldlm_lib.c:2300:target_recovery_overseer()) lustre-MDT0000 recovery is aborted by hard timeout [ 8387.277446] Lustre: 9951:0:(ldlm_lib.c:2300:target_recovery_overseer()) Skipped 1 previous similar message [ 8387.281927] Lustre: 9951:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 8387.286710] Lustre: 9951:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 8387.291447] LustreError: 9951:0:(ldlm_lib.c:1844:abort_lock_replay_queue()) @@@ aborted: req@ffff88009ea61500 x1796592558671360/t0(0) o101->f5f6b8bf-e5fe-4343-8784-848c0b65208d@192.168.202.34@tcp:0/0 lens 328/0 e 0 to 0 dl 1713372364 ref 1 fl Complete:/240/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' uid:0 gid:0 [ 8387.305395] Lustre: lustre-MDT0000-osd: cancel update llog [0x20003d490:0x1:0x0] [ 8387.313158] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x24001406a:0x1:0x0] [ 8387.324130] Lustre: 9951:0:(ldlm_lib.c:2874:target_recovery_thread()) too long recovery - read logs [ 8387.328616] LustreError: dumping log to /tmp/lustre-log.1713372489.9951 [ 8387.377668] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15329) [ 8387.377831] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:15371 to 0x280000401:15401) [ 8399.067638] Lustre: DEBUG MARKER: == replay-single test 130a: DoM file create (setstripe) replay ========================================================== 12:48:19 (1713372499) [ 8401.830899] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 8402.562860] Lustre: Failing over lustre-MDT0000 [ 8402.631634] Lustre: server umount lustre-MDT0000 complete [ 8403.293835] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 8403.298377] LustreError: Skipped 5 previous similar messages [ 8416.834836] LDISKFS-fs (dm-0): recovery complete [ 8416.836386] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8418.066257] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8422.025342] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15361) [ 8422.025369] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:15371 to 0x280000401:15433) [ 8422.841802] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 8423.397763] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 8429.253064] Lustre: DEBUG MARKER: == replay-single test 130b: DoM file create (inherited) replay ========================================================== 12:48:49 (1713372529) [ 8432.018043] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 8432.744078] Lustre: Failing over lustre-MDT0000 [ 8432.822845] Lustre: server umount lustre-MDT0000 complete [ 8447.009259] LDISKFS-fs (dm-0): recovery complete [ 8447.011786] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8448.199554] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8452.164372] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:15371 to 0x280000401:15465) [ 8452.164381] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15393) [ 8452.955907] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 8453.513727] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 8459.322346] Lustre: DEBUG MARKER: == replay-single test 131a: DoM file write lock replay === 12:49:19 (1713372559) [ 8462.044929] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 8462.770639] Lustre: Failing over lustre-MDT0000 [ 8462.838648] Lustre: server umount lustre-MDT0000 complete [ 8476.983553] LDISKFS-fs (dm-0): recovery complete [ 8476.985977] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8478.161531] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8482.131611] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:14801 to 0x2c0000401:15425) [ 8482.131665] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:15371 to 0x280000401:15497) [ 8482.938688] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 8483.497648] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 8487.884066] Lustre: DEBUG MARKER: SKIP: replay-single test_131b skipping excluded test 131b [ 8489.902601] Lustre: DEBUG MARKER: == replay-single test 132a: PFL new component instantiate replay ========================================================== 12:49:50 (1713372590) [ 8492.643402] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 8493.475647] Lustre: Failing over lustre-MDT0000 [ 8493.547394] Lustre: server umount lustre-MDT0000 complete [ 8507.681084] LDISKFS-fs (dm-0): recovery complete [ 8507.683743] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8508.857444] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8512.850207] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:15428 to 0x2c0000401:15457) [ 8512.850216] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:15499 to 0x280000401:15529) [ 8513.648127] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 8514.206590] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 8520.005744] Lustre: DEBUG MARKER: == replay-single test 133: check resend of ongoing requests for lwp during failover ========================================================== 12:50:20 (1713372620) [ 8522.237411] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 8522.239993] Lustre: Skipped 263 previous similar messages [ 8523.919624] Lustre: Failing over lustre-MDT0000 [ 8523.988039] Lustre: server umount lustre-MDT0000 complete [ 8536.793353] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8538.003982] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8538.256619] Lustre: lustre-MDT0001: Client 7360484b-6ddb-43fb-a3e3-d48758f5ddf5 (at 192.168.202.34@tcp) reconnecting [ 8541.921153] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000300000400-0x0000000340000400]:1:mdt [ 8541.926735] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000300000400-0x0000000340000400]:1:mdt] [ 8541.946431] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:15499 to 0x280000401:15561) [ 8541.946999] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:15428 to 0x2c0000401:15489) [ 8546.491398] Lustre: DEBUG MARKER: == replay-single test 134: replay creation of a file created in a pool ========================================================== 12:50:47 (1713372647) [ 8555.445699] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 8556.190788] Lustre: Failing over lustre-MDT0000 [ 8556.260515] Lustre: server umount lustre-MDT0000 complete [ 8556.942524] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 8556.949973] LustreError: Skipped 142 previous similar messages [ 8570.462606] LDISKFS-fs (dm-0): recovery complete [ 8570.465210] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8571.702256] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8575.647466] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:15499 to 0x280000401:15593) [ 8575.647485] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:15428 to 0x2c0000401:15521) [ 8576.456179] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 8577.017708] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 8588.250690] Lustre: DEBUG MARKER: == replay-single test 135: Server failure in lock replay phase ========================================================== 12:51:28 (1713372688) [ 8592.287212] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 8593.055355] Lustre: Failing over lustre-OST0000 [ 8593.081237] Lustre: server umount lustre-OST0000 complete [ 8594.965857] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing load_module ../libcfs/libcfs/libcfs [ 8598.634177] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 8598.730610] LDISKFS-fs (dm-2): recovery complete [ 8598.732663] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 8600.513576] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8600.688299] Lustre: *** cfs_fail_loc=32d, val=20*** [ 8600.690629] Lustre: Skipped 3 previous similar messages [ 8602.041726] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount REPLAY_LOCKS osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 8602.602741] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in REPLAY_LOCKS state after 0 sec [ 8603.284336] Lustre: Failing over lustre-OST0000 [ 8603.289798] LustreError: 29908:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery [ 8603.294560] Lustre: 29148:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 8603.299626] LustreError: 29148:0:(ofd_obd.c:1315:ofd_iocontrol()) lustre-OST0000: iocontrol from 'tgt_recover_0' cmd=c00866c1 _IOWR('f', 193, 8) unrecognized: rc = -25 [ 8603.307599] Lustre: 29148:0:(ofd_obd.c:557:ofd_postrecov()) lustre-OST0000: auto trigger paused LFSCK failed: rc = -6 [ 8603.348864] Lustre: server umount lustre-OST0000 complete [ 8615.245696] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing load_module ../libcfs/libcfs/libcfs [ 8617.573771] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 8617.579805] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 8619.381655] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8619.632217] Lustre: lustre-OST0000-osc-MDT0001: Connection restored to 192.168.202.134@tcp (at 0@lo) [ 8619.632242] Lustre: lustre-OST0000: Recovery over after 0:01, of 3 clients 3 recovered and 0 were evicted. [ 8619.632244] Lustre: Skipped 11 previous similar messages [ 8619.646964] Lustre: Skipped 46 previous similar messages [ 8626.768825] Lustre: server umount lustre-OST0000 complete [ 8629.709795] Lustre: lustre-OST0001-osc-MDT0001: Connection to lustre-OST0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 8629.716820] Lustre: Skipped 44 previous similar messages [ 8634.197711] Lustre: server umount lustre-OST0001 complete [ 8637.273230] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 8637.279162] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 8637.394468] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 8637.398350] Lustre: Skipped 12 previous similar messages [ 8639.142006] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8639.399463] LustreError: 167-0: lustre-OST0000-osc-MDT0000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. [ 8642.244715] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5 [ 8642.250800] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 8643.491697] LustreError: 167-0: lustre-OST0001-osc-MDT0000: This client was evicted by lustre-OST0001; in progress operations using this service will fail. 
[ 8643.498085] LustreError: Skipped 1 previous similar message [ 8644.075901] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8667.138482] Lustre: DEBUG MARKER: == replay-single test 136: MDS to disconnect all OSPs first, then cleanup ldlm ========================================================== 12:52:47 (1713372767) [ 8667.675736] Lustre: DEBUG MARKER: SKIP: replay-single test_136 needs > 2 MDTs [ 8670.490374] Lustre: DEBUG MARKER: == replay-single test 200: Dropping one OBD_PING should not cause disconnect ========================================================== 12:52:51 (1713372771) [ 8671.032859] Lustre: DEBUG MARKER: SKIP: replay-single test_200 Need remote client [ 8672.437996] Lustre: DEBUG MARKER: == replay-single test complete, duration 8574 sec ======== 12:52:53 (1713372773) [ 8676.890759] Lustre: Failing over lustre-MDT0000 [ 8676.989340] Lustre: server umount lustre-MDT0000 complete [ 8692.344960] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 8692.398895] LustreError: 166-1: MGC192.168.202.134@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 8692.402645] LustreError: Skipped 9 previous similar messages [ 8692.486501] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 8692.490717] Lustre: Skipped 14 previous similar messages [ 8693.511835] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 8696.317012] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 8696.321590] Lustre: Skipped 11 previous similar messages [ 8697.533405] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:15428 to 0x2c0000401:15553) [ 8697.533408] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:15615 to 0x280000401:15657) [ 8698.339379] Lustre: DEBUG MARKER: oleg234-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 8698.891700] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 8707.142869] Lustre: server umount lustre-MDT0000 complete [ 8710.053046] LustreError: 10469:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713372811 with bad export cookie 8182669006188214748 [ 8710.063240] LustreError: 10469:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 5 previous similar messages [ 8710.224565] Lustre: server umount lustre-MDT0001 complete [ 8723.198177] Lustre: server umount lustre-OST0000 complete [ 8736.130112] Lustre: server umount lustre-OST0001 complete [ 8738.387269] device-mapper: core: cleaned up [ 8741.467714] Lustre: DEBUG MARKER: oleg234-server.virtnet: executing unload_modules_local [ 8742.234710] Key type lgssc unregistered [ 8742.320783] LNet: 7369:0:(lib-ptl.c:966:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 8742.325287] LNet: Removed LNI 192.168.202.134@tcp
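
The suite ends here with the final MDT/OST unmounts and module unload. For the recovery windows that dominate this log, the following is a minimal sketch of commands an operator could use on the server to watch a target's recovery or cut it short, assuming an interactive shell with lctl available; the abort command is the one the "Recovery already passed deadline" message above suggests, while the get_param path is an illustrative example for this filesystem's MDT0000.

    # show recovery state, connected/evicted client counts and remaining time for the MDT
    lctl get_param mdt.lustre-MDT0000.recovery_status
    # force eviction of the unrecovered clients and finish recovery immediately
    lctl --device lustre-MDT0000 abort_recovery
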