[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.10.0-7.9-debug (green@centos7-base) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) ) #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 0.000000] Command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffcdfff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000bffce000-0x00000000bfffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000013edfffff] usable
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 3.0.0 present.
[ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-1.fc39 04/01/2014
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] e820: last_pfn = 0x13ee00 max_arch_pfn = 0x400000000
[ 0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC
[ 0.000000] e820: last_pfn = 0xbffce max_arch_pfn = 0x400000000
[ 0.000000] found SMP MP-table at [mem 0x000f53f0-0x000f53ff] mapped at [ffffffffff2003f0]
[ 0.000000] Using GB pages for direct mapping
[ 0.000000] RAMDISK: [mem 0xbc2e2000-0xbffbffff]
[ 0.000000] Early table checksum verification disabled
[ 0.000000] ACPI: RSDP 00000000000f5200 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 00000000bffe1d87 00034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 00000000bffe1c23 00074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 00000000bffe0040 01BE3 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACS 00000000bffe0000 00040
[ 0.000000] ACPI: APIC 00000000bffe1c97 00090 (v03 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: HPET 00000000bffe1d27 00038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: WAET 00000000bffe1d5f 00028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000013edfffff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x13e5e3000-0x13e609fff]
[ 0.000000] Reserving 128MB of memory at 768MB for crashkernel (System RAM: 4077MB)
[ 0.000000] kvm-clock: cpu 0, msr 1:3e592001, primary cpu clock
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: using sched offset of 371014179 cycles
[ 0.000000] Zone ranges:
[ 0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[ 0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[ 0.000000]   Normal   [mem 0x100000000-0x13edfffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000]   node 0: [mem 0x00001000-0x0009efff]
[ 0.000000]   node 0: [mem 0x00100000-0xbffcdfff]
[ 0.000000]   node 0: [mem 0x100000000-0x13edfffff]
[ 0.000000] Initmem setup node 0 [mem 0x00001000-0x13edfffff]
[ 0.000000] ACPI: PM-Timer IO Port: 0x608
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xbffce000-0xbfffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[ 0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
[ 0.000000] percpu: Embedded 38 pages/cpu s115176 r8192 d32280 u524288
[ 0.000000] KVM setup async PF for cpu 0
[ 0.000000] kvm-stealtime: cpu 0, msr 13e2135c0
[ 0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1027487
[ 0.000000] Policy zone: Normal
[ 0.000000] Kernel command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] audit: disabled (until reboot)
[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form
[ 0.000000] Memory: 3820268k/5224448k available (8172k kernel code, 1049168k absent, 355012k reserved, 5773k data, 2532k init)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=4.
[ 0.000000] Offload RCU callbacks from all CPUs
[ 0.000000] Offload RCU callbacks from CPUs: 0-3.
[ 0.000000] NR_IRQS:327936 nr_irqs:456 0
[ 0.000000] Console: colour *CGA 80x25
[ 0.000000] console [ttyS1] enabled
[ 0.000000] allocated 25165824 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] kmemleak: Kernel memory leak detector disabled
[ 0.000000] tsc: Detected 2399.998 MHz processor
[ 0.453037] Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
[ 0.455651] pid_max: default: 32768 minimum: 301
[ 0.457177] Security Framework initialized
[ 0.458432] SELinux: Initializing.
[ 0.460962] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[ 0.465005] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[ 0.467713] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.469787] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.472258] Initializing cgroup subsys memory
[ 0.473664] Initializing cgroup subsys devices
[ 0.474973] Initializing cgroup subsys freezer
[ 0.476295] Initializing cgroup subsys net_cls
[ 0.477666] Initializing cgroup subsys blkio
[ 0.479128] Initializing cgroup subsys perf_event
[ 0.480604] Initializing cgroup subsys hugetlb
[ 0.481983] Initializing cgroup subsys pids
[ 0.483191] Initializing cgroup subsys net_prio
[ 0.484852] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[ 0.487799] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.489450] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.491223] tlb_flushall_shift: 6
[ 0.492399] FEATURE SPEC_CTRL Present
[ 0.493734] FEATURE IBPB_SUPPORT Present
[ 0.494985] Spectre V2 : Enabling Indirect Branch Prediction Barrier
[ 0.497028] Spectre V2 : Vulnerable
[ 0.498210] Speculative Store Bypass: Vulnerable
[ 0.500622] debug: unmapping init [mem 0xffffffff82019000-0xffffffff8201ffff]
[ 0.507563] ACPI: Core revision 20130517
[ 0.510360] ACPI: All ACPI Tables successfully acquired
[ 0.512003] ftrace: allocating 30294 entries in 119 pages
[ 0.557640] Enabling x2apic
[ 0.558254] Enabled x2apic
[ 0.559096] Switched APIC routing to physical x2apic.
[ 0.561151] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.562242] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (fam: 06, model: 3e, stepping: 04)
[ 0.564532] Performance Events: IvyBridge events, full-width counters, Intel PMU driver.
[ 0.566118] ... version:                2
[ 0.566805] ... bit width:              48
[ 0.567428] ... generic registers:      4
[ 0.568128] ... value mask:             0000ffffffffffff
[ 0.569085] ... max period:             00007fffffffffff
[ 0.569980] ... fixed-purpose events:   3
[ 0.570658] ... event mask:             000000070000000f
[ 0.571729] KVM setup paravirtual spinlock
[ 0.574525] smpboot: Booting Node 0, Processors #1
[ 0.575630] kvm-clock: cpu 1, msr 1:3e592041, secondary cpu clock
[ 0.578574] KVM setup async PF for cpu 1
[ 0.579587] kvm-stealtime: cpu 1, msr 13e2935c0 #2
[ 0.581602] kvm-clock: cpu 2, msr 1:3e592081, secondary cpu clock
[ 0.584830] KVM setup async PF for cpu 2
[ 0.585344] kvm-clock: cpu 3, msr 1:3e5920c1, secondary cpu clock #3 OK
[ 0.588098] kvm-stealtime: cpu 2, msr 13e3135c0
[ 0.590059] Brought up 4 CPUs
[ 0.590081] KVM setup async PF for cpu 3
[ 0.590092] kvm-stealtime: cpu 3, msr 13e3935c0
[ 0.592196] smpboot: Max logical packages: 1
[ 0.592942] smpboot: Total of 4 processors activated (19199.98 BogoMIPS)
[ 0.595819] devtmpfs: initialized
[ 0.596533] x86/mm: Memory block size: 128MB
[ 0.599807] EVM: security.selinux
[ 0.600465] EVM: security.ima
[ 0.600987] EVM: security.capability
[ 0.603153] atomic64 test passed for x86-64 platform with CX8 and with SSE
[ 0.604786] NET: Registered protocol family 16
[ 0.605776] cpuidle: using governor haltpoll
[ 0.606715] ACPI: bus type PCI registered
[ 0.607451] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.608895] PCI: Using configuration type 1 for base access
[ 0.610031] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on
[ 0.616305] ACPI: Added _OSI(Module Device)
[ 0.617535] ACPI: Added _OSI(Processor Device)
[ 0.618831] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.620119] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.621645] ACPI: Added _OSI(Linux-Dell-Video)
[ 0.627160] ACPI: Interpreter enabled
[ 0.628295] ACPI: (supports S0 S3 S4 S5)
[ 0.629532] ACPI: Using IOAPIC for interrupt routing
[ 0.630982] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.633839] ACPI: Enabled 2 GPEs in block 00 to 0F
[ 0.640978] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.642928] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[ 0.644789] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[ 0.646635] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ 0.650483] acpiphp: Slot [2] registered
[ 0.651701] acpiphp: Slot [5] registered
[ 0.653011] acpiphp: Slot [6] registered
[ 0.654245] acpiphp: Slot [7] registered
[ 0.655443] acpiphp: Slot [8] registered
[ 0.656663] acpiphp: Slot [9] registered
[ 0.657834] acpiphp: Slot [10] registered
[ 0.659009] acpiphp: Slot [3] registered
[ 0.660227] acpiphp: Slot [4] registered
[ 0.661478] acpiphp: Slot [11] registered
[ 0.662691] acpiphp: Slot [12] registered
[ 0.663993] acpiphp: Slot [13] registered
[ 0.665225] acpiphp: Slot [14] registered
[ 0.666579] acpiphp: Slot [15] registered
[ 0.667921] acpiphp: Slot [16] registered
[ 0.669142] acpiphp: Slot [17] registered
[ 0.670376] acpiphp: Slot [18] registered
[ 0.671728] acpiphp: Slot [19] registered
[ 0.672992] acpiphp: Slot [20] registered
[ 0.674441] acpiphp: Slot [21] registered
[ 0.675783] acpiphp: Slot [22] registered
[ 0.677335] acpiphp: Slot [23] registered
[ 0.678685] acpiphp: Slot [24] registered
[ 0.680000] acpiphp: Slot [25] registered
[ 0.681340] acpiphp: Slot [26] registered
[ 0.682657] acpiphp: Slot [27] registered
[ 0.683947] acpiphp: Slot [28] registered
[ 0.685249] acpiphp: Slot [29] registered
[ 0.686506] acpiphp: Slot [30] registered
[ 0.687781] acpiphp: Slot [31] registered
[ 0.689038] PCI host bridge to bus 0000:00
[ 0.690299] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[ 0.692369] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[ 0.694405] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[ 0.697137] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
[ 0.699355] pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38007fffffff window]
[ 0.701775] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 0.718658] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
[ 0.720941] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
[ 0.723210] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
[ 0.725415] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
[ 0.728465] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
[ 0.730947] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
[ 0.951906] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 0.953366] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 0.954823] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 0.957428] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 0.959439] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[ 0.961543] vgaarb: loaded
[ 0.962516] SCSI subsystem initialized
[ 0.963536] ACPI: bus type USB registered
[ 0.964567] usbcore: registered new interface driver usbfs
[ 0.966198] usbcore: registered new interface driver hub
[ 0.968539] usbcore: registered new device driver usb
[ 0.972428] PCI: Using ACPI for IRQ routing
[ 0.975549] NetLabel: Initializing
[ 0.977010] NetLabel: domain hash size = 128
[ 0.979227] NetLabel: protocols = UNLABELED CIPSOv4
[ 0.981546] NetLabel: unlabeled traffic allowed by default
[ 0.983941] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[ 0.985602] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[ 0.992170] amd_nb: Cannot enumerate AMD northbridges
[ 0.993727] Switched to clocksource kvm-clock
[ 1.011537] pnp: PnP ACPI init
[ 1.012508] ACPI: bus type PNP registered
[ 1.014798] pnp: PnP ACPI: found 6 devices
[ 1.016107] ACPI: bus type PNP unregistered
[ 1.028966] NET: Registered protocol family 2
[ 1.031130] TCP established hash table entries: 32768 (order: 6, 262144 bytes)
[ 1.034126] TCP bind hash table entries: 32768 (order: 8, 1048576 bytes)
[ 1.037223] TCP: Hash tables configured (established 32768 bind 32768)
[ 1.038841] TCP: reno registered
[ 1.039542] UDP hash table entries: 2048 (order: 5, 196608 bytes)
[ 1.040780] UDP-Lite hash table entries: 2048 (order: 5, 196608 bytes)
[ 1.042408] NET: Registered protocol family 1
[ 1.044113] RPC: Registered named UNIX socket transport module.
[ 1.045967] RPC: Registered udp transport module.
[ 1.047044] RPC: Registered tcp transport module.
[ 1.048364] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 1.050536] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 1.052210] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 1.053454] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 1.054888] Unpacking initramfs...
[ 2.399827] debug: unmapping init [mem 0xffff8800bc2e2000-0xffff8800bffbffff]
[ 2.403765] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 2.405967] software IO TLB [mem 0xb82e2000-0xbc2e2000] (64MB) mapped at [ffff8800b82e2000-ffff8800bc2e1fff]
[ 2.410147] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer
[ 2.412582] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules
[ 2.414108] RAPL PMU: hw unit of domain package 2^-0 Joules
[ 2.416927] RAPL PMU: hw unit of domain dram 2^-0 Joules
[ 2.422561] cryptomgr_test (52) used greatest stack depth: 14480 bytes left
[ 2.422894] futex hash table entries: 1024 (order: 4, 65536 bytes)
[ 2.422936] Initialise system trusted keyring
[ 2.452807] HugeTLB registered 1 GB page size, pre-allocated 0 pages
[ 2.454170] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 2.459550] zpool: loaded
[ 2.460414] zbud: loaded
[ 2.461380] VFS: Disk quotas dquot_6.6.0
[ 2.462620] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 2.464796] NFS: Registering the id_resolver key type
[ 2.465847] Key type id_resolver registered
[ 2.466740] Key type id_legacy registered
[ 2.467468] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[ 2.470149] Key type big_key registered
[ 2.473110] cryptomgr_test (58) used greatest stack depth: 13968 bytes left
[ 2.476116] cryptomgr_test (61) used greatest stack depth: 13664 bytes left
[ 2.476831] NET: Registered protocol family 38
[ 2.476845] Key type asymmetric registered
[ 2.476849] Asymmetric key parser 'x509' registered
[ 2.476977] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[ 2.477081] io scheduler noop registered
[ 2.477086] io scheduler deadline registered (default)
[ 2.477158] io scheduler cfq registered
[ 2.477164] io scheduler mq-deadline registered
[ 2.477169] io scheduler kyber registered
[ 2.478855] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 2.478867] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 2.508802] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 2.513696] ACPI: Power Button [PWRF]
[ 2.516940] GHES: HEST is not enabled!
[ 2.581631] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ 2.646008] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 11
[ 2.766992] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ 2.820601] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
[ 2.943014] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 2.974180] 00:03: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[ 3.004106] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 3.008017] Non-volatile memory driver v1.3
[ 3.009865] Linux agpgart interface v0.103
[ 3.011543] crash memory driver: version 1.1
[ 3.013769] nbd: registered device at major 43
[ 3.029095] virtio_blk virtio1: [vda] 67344 512-byte logical blocks (34.4 MB/32.8 MiB)
[ 3.041670] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB)
[ 3.056069] virtio_blk virtio3: [vdc] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[ 3.071529] virtio_blk virtio4: [vdd] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[ 3.084975] virtio_blk virtio5: [vde] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[ 3.098646] virtio_blk virtio6: [vdf] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[ 3.105583] rdac: device handler registered
[ 3.107425] hp_sw: device handler registered
[ 3.108935] emc: device handler registered
[ 3.110528] libphy: Fixed MDIO Bus: probed
[ 3.119709] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 3.121671] ehci-pci: EHCI PCI platform driver
[ 3.122782] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 3.124244] ohci-pci: OHCI PCI platform driver
[ 3.125682] uhci_hcd: USB Universal Host Controller Interface driver
[ 3.127996] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 3.131823] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 3.133461] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 3.135873] mousedev: PS/2 mouse device common for all mice
[ 3.138290] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[ 3.140999] rtc_cmos 00:05: RTC can wake from S4
[ 3.144168] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0
[ 3.147412] rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
[ 3.152013] hidraw: raw HID events driver (C) Jiri Kosina
[ 3.153588] usbcore: registered new interface driver usbhid
[ 3.155344] usbhid: USB HID core driver
[ 3.156669] drop_monitor: Initializing network drop monitor service
[ 3.158462] Netfilter messages via NETLINK v0.30.
[ 3.159831] TCP: cubic registered
[ 3.160818] Initializing XFRM netlink socket
[ 3.162594] NET: Registered protocol family 10
[ 3.164863] NET: Registered protocol family 17
[ 3.166322] Key type dns_resolver registered
[ 3.168139] mce: Using 10 MCE banks
[ 3.169808] Loading compiled-in X.509 certificates
[ 3.172261] Loaded X.509 cert 'Magrathea: Glacier signing key: e34d0e1b7fcf5b414cce75d36d8482945c781ed6'
[ 3.174985] registered taskstats version 1
[ 3.178626] modprobe (71) used greatest stack depth: 13376 bytes left
[ 3.183313] Key type trusted registered
[ 3.187632] Key type encrypted registered
[ 3.189282] IMA: No TPM chip found, activating TPM-bypass! (rc=-19)
[ 3.193485] BERT: Boot Error Record Table support is disabled. Enable it by using bert_enable as kernel parameter.
[ 3.196927] rtc_cmos 00:05: setting system clock to 2024-04-16 14:48:21 UTC (1713278901)
[ 3.199643] debug: unmapping init [mem 0xffffffff81da0000-0xffffffff82018fff]
[ 3.202212] Write protecting the kernel read-only data: 12288k
[ 3.203994] debug: unmapping init [mem 0xffff8800017fe000-0xffff8800017fffff]
[ 3.205767] debug: unmapping init [mem 0xffff880001b9b000-0xffff880001bfffff]
[ 3.215360] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.219046] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.221531] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.227146] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[ 3.233355] systemd[1]: Detected virtualization kvm.
[ 3.234788] systemd[1]: Detected architecture x86-64.
[ 3.236258] systemd[1]: Running in initial RAM disk.

Welcome to CentOS Linux 7 (Core) dracut-033-572.el7 (Initramfs)!

[ 3.241943] systemd[1]: No hostname configured.
[ 3.243645] systemd[1]: Set hostname to .
[ 3.247839] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.249709] systemd[1]: Initializing machine ID from random generator.
[ 3.304462] dracut-rootfs-g (86) used greatest stack depth: 13264 bytes left
[ 3.308573] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.310921] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.313343] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.315796] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.319327] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.321647] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.332332] systemd[1]: Reached target Timers.
[ OK ] Reached target Timers.
[ 3.336075] systemd[1]: Reached target Local File Systems.
[ OK ] Reached target Local File Systems.
[ 3.341307] systemd[1]: Reached target Swap.
[ OK ] Reached target Swap.
[ 3.344449] systemd[1]: Created slice Root Slice.
[ OK ] Created slice Root Slice.
[ 3.347506] systemd[1]: Listening on Journal Socket.
[ OK ] Listening on Journal Socket.
[ 3.351837] systemd[1]: Listening on udev Control Socket.
[ OK ] Listening on udev Control Socket.
[ 3.358258] systemd[1]: Listening on udev Kernel Socket.
[ OK ] Listening on udev Kernel Socket.
[ 3.362406] systemd[1]: Reached target Sockets.
[ OK ] Reached target Sockets.
[ 3.366694] systemd[1]: Created slice System Slice.
[ OK ] Created slice System Slice.
[ 3.373630] systemd[1]: Starting Setup Virtual Console...
Starting Setup Virtual Console...
[ 3.379540] systemd[1]: Starting dracut cmdline hook...
Starting dracut cmdline hook...
[ 3.384876] systemd[1]: Starting Journal Service...
Starting Journal Service...
[ 3.390905] systemd[1]: Starting Load Kernel Modules...
Starting Load Kernel Modules...
[ 3.395342] systemd[1]: Reached target Slices.
[ OK ] Reached target Slices.
[ 3.400682] systemd[1]: Starting Create list of required static device nodes for the current kernel...
Starting Create list of required st... nodes for the current kernel...
[ 3.407431] systemd[1]: Started Setup Virtual Console.
[ OK ] Started Setup Virtual Console.
[ 3.414942] systemd[1]: Started Load Kernel Modules.
[ OK ] Started Load Kernel Modules.
[ 3.420021] systemd[1]: Started Create list of required static device nodes for the current kernel.
[ 3.422816] tsc: Refined TSC clocksource calibration: 2399.959 MHz
[ OK ] Started Create list of required sta...ce nodes for the current kernel.
[ 3.428180] systemd[1]: Started Journal Service.
[ OK ] Started Journal Service.
Starting Create Static Device Nodes in /dev...
Starting Apply Kernel Variables...
[ OK ] Started Create Static Device Nodes in /dev.
[ 3.446760] random: fast init done
[ OK ] Started Apply Kernel Variables.
[ OK ] Started dracut cmdline hook.
Starting dracut pre-udev hook...
[ OK ] Started dracut pre-udev hook.
Starting udev Kernel Device Manager...
[ OK ] Started udev Kernel Device Manager.
Starting dracut pre-trigger hook...
[ OK ] Started dracut pre-trigger hook.
Starting udev Coldplug all Devices...
Mounting Configuration File System...
[ OK ] Mounted Configuration File System.
[ OK ] Started udev Coldplug all Devices.
[ OK ] Reached target System Initialization.
Starting dracut initqueue hook...
Starting Show Plymouth Boot Screen...
[ 3.950083] scsi host0: ata_piix
[ 3.954016] scsi host1: ata_piix
[ 3.955286] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc320 irq 14
[ 3.957442] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc328 irq 15
[ OK ] Started Show Plymouth Boot Screen.
[ OK ] Reached target Paths.
[ OK ] Started Forward Password Requests to Plymouth Directory Watch.
[ OK ] Reached target Basic System.
[ 3.990585] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
[ 3.993964] ip (314) used greatest stack depth: 13080 bytes left
[ 4.047471] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 4.048997] ip (344) used greatest stack depth: 12464 bytes left
[ 4.120765] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 6.211491] ip (382) used greatest stack depth: 12240 bytes left
[ 6.311685] dracut-initqueue[275]: RTNETLINK answers: File exists
[ 6.563194] dracut-initqueue[275]: bs=4096, sz=32212254720 bytes
[ OK ] Started dracut initqueue hook.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
Mounting /sysroot...
[ OK ] Reached target Initrd Root File System.
Starting Reload Configuration from the Real Root...
[ 7.258233] EXT4-fs (nbd0): mounted filesystem with ordered data mode. Opts: (null)
[ OK ] Mounted /sysroot.
[ OK ] Started Reload Configuration from the Real Root.
[ OK ] Reached target Initrd File Systems.
[ OK ] Reached target Initrd Default Target.
Starting dracut pre-pivot and cleanup hook...
[ OK ] Started dracut pre-pivot and cleanup hook.
Starting Cleaning Up and Shutting Down Daemons...
[ OK ] Stopped target Timers.
[ OK ] Stopped dracut pre-pivot and cleanup hook.
[ OK ] Stopped target Initrd Default Target.
[ OK ] Stopped target Remote File Systems.
[ OK ] Stopped target Basic System.
[ OK ] Stopped target Sockets.
[ OK ] Stopped target System Initialization.
[ OK ] Stopped target Slices.
Starting Plymouth switch root service...
[ OK ] Stopped target Remote File Systems (Pre).
[ OK ] Stopped target Swap.
[ OK ] Stopped dracut initqueue hook.
[ OK ] Stopped udev Coldplug all Devices.
[ OK ] Stopped dracut pre-trigger hook.
Stopping udev Kernel Device Manager...
[ OK ] Stopped target Local File Systems.
[ OK ] Stopped target Paths.
[ OK ] Stopped Apply Kernel Variables.
[ OK ] Stopped Load Kernel Modules.
[ OK ] Stopped udev Kernel Device Manager.
[ OK ] Stopped Create Static Device Nodes in /dev.
[ OK ] Stopped Create list of required sta...ce nodes for the current kernel.
[ OK ] Stopped dracut pre-udev hook.
[ OK ] Stopped dracut cmdline hook.
[ OK ] Closed udev Kernel Socket.
[ OK ] Closed udev Control Socket.
Starting Cleanup udevd DB...
[ OK ] Started Plymouth switch root service.
[ OK ] Started Cleaning Up and Shutting Down Daemons.
[ OK ] Started Cleanup udevd DB.
[ OK ] Reached target Switch Root.
Starting Switch Root...
[ 7.821568] systemd-journald[103]: Received SIGTERM from PID 1 (systemd).
[ 8.136896] SELinux: Disabled at runtime.
[ 8.231397] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 8.236423] systemd[1]: Inserted module 'ip_tables'

Welcome to CentOS Linux 7 (Core)!

[ OK ] Stopped Switch Root.
[ OK ] Stopped Journal Service.
Starting Journal Service...
Starting Read and set NIS domainname from /etc/sysconfig/network...
[ OK ] Created slice system-selinux\x2dpol...grate\x2dlocal\x2dchanges.slice.
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Listening on udev Control Socket.
[ OK ] Reached target Local Encrypted Volumes.
Starting Load Kernel Modules...
[ OK ] Listening on udev Kernel Socket.
Starting udev Coldplug all Devices...
[ OK ] Reached target rpc_pipefs.target.
[ OK ] Started Forward Password Requests to Wall Directory Watch.
[ OK ] Created slice system-serial\x2dgetty.slice.
[ OK ] Created slice User and Session Slice.
[ OK ] Set up automount Arbitrary Executab...ats File System Automount Point.
Starting Set Up Additional Binary Formats...
[ OK ] Created slice system-getty.slice.
[ OK ] Listening on /dev/initctl Compatibility Named Pipe.
Mounting Huge Pages File System...
[ OK ] Reached target Slices.
Starting Remount Root and Kernel File Systems...
Mounting POSIX Message Queue File System...
[ OK ] Stopped target Switch Root.
[ OK ] Stopped target Initrd Root File System.
[ OK ] Stopped target Initrd File Systems.
Mounting Debug File System...
Starting Create list of required st... nodes for the current kernel...
[ OK ] Mounted Huge Pages File System.
[ OK ] Mounted POSIX Message Queue File System.
[ OK ] Mounted Debug File System.
[ OK ] Started Read and set NIS domainname from /etc/sysconfig/network.
[ OK ] Started Load Kernel Modules.
[ OK ] Started Create list of required sta...ce nodes for the current kernel.
Mounting Arbitrary Executable File Formats File System...
Starting Create Static Device Nodes in /dev...
Starting Apply Kernel Variables...
[ OK ] Started Journal Service.
[ OK ] Mounted Arbitrary Executable File Formats File System.
[ OK ] Started udev Coldplug all Devices.
[ OK ] Started Apply Kernel Variables.
[ OK ] Started Set Up Additional Binary Formats.
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
Starting Configure read-only root support...
Starting Flush Journal to Persistent Storage...
[ OK ] Started Create Static Device Nodes in /dev.
Starting udev Kernel Device Manager...
[ OK ] Reached target Local File Systems (Pre).
Mounting /mnt...
[ OK ] Mounted /mnt.
[ 8.886184] systemd-journald[568]: Received request to flush runtime journal from PID 1
[ OK ] Started Flush Journal to Persistent Storage.
[ OK ] Started udev Kernel Device Manager.
[ 9.166316] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
[ 9.172792] input: PC Speaker as /devices/platform/pcspkr/input/input3
[ OK ] Found device /dev/ttyS1.
[ OK ] Found device /dev/ttyS0.
[ 9.235062] cryptd: max_cpu_qlen set to 1000
[ OK ] Found device /dev/vda.
Mounting /home/green/git/lustre-release...
[ OK ] Found device /dev/disk/by-label/SWAP.
Activating swap /dev/disk/by-label/SWAP...
[ 9.288478] AVX version of gcm_enc/dec engaged.
[ 9.290023] AES CTR mode by8 optimization enabled
[ 9.315544] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[ 9.319116] Adding 1048572k swap on /dev/vdb. Priority:-2 extents:1 across:1048572k FS
[ OK ] Activated swap /dev/disk/by-label/SWAP.
[ OK ] Mounted /home/green/git/lustre-release.
[ OK ] Reached target Swap.
[ 9.353525] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[ 9.359175] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni)
[ 9.519026] EDAC MC: Ver: 3.0.0
[ 9.532769] EDAC sbridge: Ver: 1.1.2
[ 12.443364] mount.nfs (770) used greatest stack depth: 10704 bytes left
[ OK ] Started Configure read-only root support.
Starting Load/Save Random Seed...
[ OK ] Reached target Local File Systems.
Starting Rebuild Journal Catalog...
Starting Mark the need to relabel after reboot...
Starting Tell Plymouth To Write Out Runtime Data...
Starting Preprocess NFS configuration...
Starting Create Volatile Files and Directories...
[ OK ] Started Load/Save Random Seed.
[ OK ] Started Mark the need to relabel after reboot.
[ OK ] Started Preprocess NFS configuration.
[FAILED] Failed to start Rebuild Journal Catalog.
See 'systemctl status systemd-journal-catalog-update.service' for details.
[FAILED] Failed to start Create Volatile Files and Directories.
See 'systemctl status systemd-tmpfiles-setup.service' for details.
[ OK ] Started Tell Plymouth To Write Out Runtime Data.
Starting Update UTMP about System Boot/Shutdown...
Starting Update is Completed...
[ OK ] Started Update is Completed.
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Reached target System Initialization.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers.
[ OK ] Started Flexible branding.
[ OK ] Reached target Paths.
[ OK ] Listening on RPCbind Server Activation Socket.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Reached target Sockets.
[ OK ] Reached target Basic System.
[ OK ] Started D-Bus System Message Bus.
Starting Network Manager...
Starting Login Service...
Starting GSSAPI Proxy Daemon...
Starting Dump dmesg to /var/log/dmesg...
[ OK ] Started Login Service.
[ OK ] Started Dump dmesg to /var/log/dmesg.
[ OK ] Started GSSAPI Proxy Daemon.
[ OK ] Reached target NFS client services.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
Starting Permit User Sessions...
[ OK ] Started Permit User Sessions.
[ OK ] Started Network Manager.
[ OK ] Reached target Network.
Starting /etc/rc.d/rc.local Compatibility...
Starting OpenSSH server daemon...
Starting Network Manager Wait Online...
Starting Hostname Service...
[ OK ] Started OpenSSH server daemon.
[ OK ] Started /etc/rc.d/rc.local Compatibility.
[ OK ] Started Hostname Service.
Starting Network Manager Script Dispatcher Service...
Starting Terminate Plymouth Boot Screen...
Starting Wait for Plymouth Boot Screen to Quit...
[ OK ] Started Network Manager Script Dispatcher Service.

CentOS Linux 7 (Core)
Kernel 3.10.0-7.9-debug on an x86_64

oleg149-server login:
[ 21.934653] device-mapper: uevent: version 1.0.3
[ 21.937006] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel@redhat.com
[ 26.255193] libcfs: loading out-of-tree module taints kernel.
[ 26.257227] libcfs: module verification failed: signature and/or required key missing - tainting kernel
[ 26.282323] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_hostid
[ 30.890460] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing load_modules_local
[ 31.071584] alg: No test for adler32 (adler32-zlib)
[ 31.821789] libcfs: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1
[ 31.973972] Lustre: Lustre: Build Version: 2.15.62_23_gf2ea66d
[ 32.161807] LNet: Added LNI 192.168.201.149@tcp [8/256/0/180]
[ 32.163498] LNet: Accept secure, port 988
[ 33.706791] Key type lgssc registered
[ 34.004380] Lustre: Echo OBD driver; http://www.lustre.org/
[ 36.868563] icp: module license 'CDDL' taints kernel.
[ 36.870078] Disabling lock debugging due to kernel taint
[ 39.573254] ZFS: Loaded module v0.8.6-1, ZFS pool version 5000, ZFS filesystem version 5
[ 42.369394] LDISKFS-fs (vdc): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 46.723327] LDISKFS-fs (vdd): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 48.635555] LDISKFS-fs (vde): file extents enabled, maximum tree depth=5
[ 48.641796] LDISKFS-fs (vde): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 50.586157] LDISKFS-fs (vdf): file extents enabled, maximum tree depth=5
[ 50.589511] LDISKFS-fs (vdf): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 53.760211] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing load_modules_local
[ 56.875705] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 56.893872] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt'
[ 56.902038] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 57.980840] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 57.988926] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space.
[ 58.025482] Lustre: lustre-MDT0000: new disk, initializing
[ 58.044967] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 58.051136] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 58.076465] mount.lustre (6908) used greatest stack depth: 10112 bytes left
[ 58.822406] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 62.943305] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 62.967592] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 62.988251] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[ 62.995682] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space.
[ 62.998048] Lustre: Skipped 1 previous similar message
[ 63.033554] Lustre: lustre-MDT0001: new disk, initializing
[ 63.050069] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180
[ 63.056623] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[ 63.059062] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt]
[ 63.872269] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 68.139094] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 68.144839] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 68.168762] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 68.172306] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 68.250160] Lustre: lustre-OST0000: new disk, initializing
[ 68.253366] Lustre: srv-lustre-OST0000: No data found on store. Initialize space.
[ 68.270398] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 69.615932] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 73.058244] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[ 73.060909] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost]
[ 73.068679] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401
[ 73.844810] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5
[ 73.847748] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 73.863490] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5
[ 73.866650] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 73.895543] Lustre: lustre-OST0001: new disk, initializing
[ 73.897841] Lustre: srv-lustre-OST0001: No data found on store. Initialize space.
[ 73.911900] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180
[ 73.931870] random: crng init done
[ 75.204689] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 80.069609] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 81.497615] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost
[ 81.501256] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost]
[ 81.512870] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x2c0000401
[ 86.271336] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[ 91.924378] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing check_logdir /tmp/testlogs/
[ 92.770538] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing yml_node
[ 93.758276] Lustre: DEBUG MARKER: Client: 2.15.62.23
[ 94.404830] Lustre: DEBUG MARKER: MDS: 2.15.62.23
[ 95.699559] Lustre: DEBUG MARKER: OSS: 2.15.62.23
[ 96.745573] Lustre: DEBUG MARKER: -----============= acceptance-small: recovery-small ============----- Tue Apr 16 10:49:54 EDT 2024
[ 99.477146] Lustre: DEBUG MARKER: excepting tests: 136
[ 100.108056] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing check_config_client /mnt/lustre
[ 104.773111] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 105.572256] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[ 106.137153] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 107.938552] Lustre: DEBUG MARKER: == recovery-small test 1: create, chmod, stat: drop req, drop rep ========================================================== 10:50:06 (1713279006)
[ 108.191377] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 124.206595] Lustre: lustre-MDT0000: Client 716a7e99-fdaf-4e6a-9b51-36efd0c3ceee (at 192.168.201.49@tcp) reconnecting
[ 124.676323] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 124.677638] LustreError: 10236:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880092baad80 x1796503162067136/t4294967300(0) o36->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:568/0 lens 520/448 e 0 to 0 dl 1713279033 ref 1 fl Interpret:/200/0 rc 0/0 job:'mcreate.0' uid:0 gid:0
[ 140.689179] Lustre: lustre-MDT0000: Client 716a7e99-fdaf-4e6a-9b51-36efd0c3ceee (at 192.168.201.49@tcp) reconnecting
[ 140.697211] Lustre: 6930:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88013055e300 x1796503162067136/t4294967300(0) o36->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:585/0 lens 520/2880 e 0 to 0 dl 1713279050 ref 1 fl Interpret:/202/0 rc 0/0 job:'mcreate.0' uid:0 gid:0
[ 141.190071] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 157.204432] Lustre: lustre-MDT0000: Client 716a7e99-fdaf-4e6a-9b51-36efd0c3ceee (at 192.168.201.49@tcp) reconnecting
[ 157.679921] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 157.681957] LustreError: 12734:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88007d6c2d80 x1796503162069440/t4294967302(0) o36->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:601/0 lens 488/456 e 0 to 0 dl 1713279066 ref 1 fl Interpret:/200/0 rc 0/0 job:'tchmod.0' uid:0 gid:0
[ 173.690447] Lustre: lustre-MDT0000: Client 716a7e99-fdaf-4e6a-9b51-36efd0c3ceee (at 192.168.201.49@tcp) reconnecting
[ 173.698663] Lustre: 6929:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88013055dc00 x1796503162069440/t4294967302(0) o36->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:618/0 lens 488/3152 e 0 to 0 dl 1713279083 ref 1 fl Interpret:/202/0 rc 0/0 job:'tchmod.0' uid:0 gid:0
[ 174.145079] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 190.157619] Lustre: lustre-MDT0000: Client 716a7e99-fdaf-4e6a-9b51-36efd0c3ceee (at 192.168.201.49@tcp) reconnecting
[ 190.597229] Lustre: *** cfs_fail_loc=122, val=2147483648***
[ 190.598651] LustreError: 12734:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880092bb6a00 x1796503162071232/t0(0) o34->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:634/0 lens 472/464 e 0 to 0 dl 1713279099 ref 1 fl Interpret:/200/0 rc 0/0 job:'statone.0' uid:0 gid:0
[ 206.610499] Lustre: lustre-MDT0000: Client 716a7e99-fdaf-4e6a-9b51-36efd0c3ceee (at 192.168.201.49@tcp) reconnecting
[ 209.530158] Lustre: DEBUG MARKER: == recovery-small test 4: open: drop req, drop rep ======= 10:51:47 (1713279107)
[ 209.782579] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 225.795556] Lustre: lustre-MDT0000: Client 716a7e99-fdaf-4e6a-9b51-36efd0c3ceee (at 192.168.201.49@tcp) reconnecting
[ 226.262016] Lustre: *** cfs_fail_loc=122, val=2147483648***
[ 226.263387] LustreError: 6933:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012f095880 x1796503162074240/t4294967308(0) o35->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:670/0 lens 392/456 e 0 to 0 dl 1713279135 ref 1 fl Interpret:/200/0 rc 0/0 job:'cat.0' uid:0 gid:0
[ 242.264606] Lustre: 6933:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012f06d500 x1796503162074240/t4294967308(0) o35->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:686/0 lens 392/456 e 0 to 0 dl 1713279151 ref 1 fl Interpret:/202/0 rc 0/0 job:'cat.0' uid:0 gid:0
[ 245.187033] Lustre: DEBUG MARKER: == recovery-small test 5: rename: drop req, drop rep ===== 10:52:23 (1713279143)
[ 245.442249] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 261.459218] Lustre: lustre-MDT0000: Client 716a7e99-fdaf-4e6a-9b51-36efd0c3ceee (at 192.168.201.49@tcp) reconnecting
[ 261.461991] Lustre: Skipped 1 previous similar message
[ 261.951881] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 261.954138] LustreError: 6944:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012fe2b480 x1796503162077696/t4294967312(0) o36->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:706/0 lens 552/456 e 0 to 0 dl 1713279171 ref 1 fl Interpret:/200/0 rc 0/0 job:'mv.0' uid:0 gid:0
[ 277.952838] Lustre: 6944:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880089952680 x1796503162077696/t4294967312(0) o36->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:722/0 lens 552/2888 e 0 to 0 dl 1713279187 ref 1 fl Interpret:/202/0 rc 0/0 job:'mv.0' uid:0 gid:0
[ 281.150001] Lustre: DEBUG MARKER: == recovery-small test 6: link, unlink: drop req, drop rep ========================================================== 10:52:59 (1713279179)
[ 281.432697] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 297.901384] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 297.902880] LustreError: 8076:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800899fe300 x1796503162081536/t4294967317(0) o36->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:742/0 lens 512/440 e 0 to 0 dl 1713279207 ref 1 fl Interpret:/200/0 rc 0/0 job:'link.0' uid:0 gid:0
[ 313.902549] Lustre: 10236:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012e307800 x1796503162081536/t4294967317(0) o36->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:3/0 lens 512/440 e 0 to 0 dl 1713279223 ref 1 fl Interpret:/202/0 rc 0/0 job:'link.0' uid:0 gid:0
[ 314.389191] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 330.412647] Lustre: lustre-MDT0000: Client 716a7e99-fdaf-4e6a-9b51-36efd0c3ceee (at 192.168.201.49@tcp) reconnecting
[ 330.415144] Lustre: Skipped 3 previous similar messages
[ 330.918512] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 330.919947] LustreError: 12734:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88013055c000 x1796503162084416/t4294967319(0) o36->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:20/0 lens 504/456 e 0 to 0 dl 1713279240 ref 1 fl Interpret:/200/0 rc 0/0 job:'unlink.0' uid:0 gid:0
[ 346.919767] Lustre: 6930:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880089846300 x1796503162084416/t4294967319(0) o36->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:36/0 lens 504/2888 e 0 to 0 dl 1713279256 ref 1 fl Interpret:/202/0 rc 0/0 job:'unlink.0' uid:0 gid:0
[ 350.224159] Lustre: DEBUG MARKER: == recovery-small test 8: touch: drop rep (bug 1423) ===== 10:54:08 (1713279248)
[ 366.477817] Lustre: 12734:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880089a74700 x1796503162086080/t4294967322(0) o36->716a7e99-fdaf-4e6a-9b51-36efd0c3ceee@192.168.201.49@tcp:55/0 lens 488/3152 e 0 to 0 dl 1713279275 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0
[ 369.693622] Lustre: DEBUG MARKER: == recovery-small test 9: pause bulk on OST (bug 1420) === 10:54:27 (1713279267)
[ 370.214587] LustreError: 9204:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 sleeping for 5000ms
[ 375.216824] LustreError: 9204:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 awake
[ 378.770768] Lustre: DEBUG MARKER: == recovery-small test 10a: finish request on server after client eviction (bug 1521) ========================================================== 10:54:36 (1713279276)
[ 394.853873] Lustre: 6930:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713279277/real 1713279277] req@ffff88012f147480 x1796503167400256/t0(0) o104->lustre-MDT0000@192.168.201.49@tcp:15/16 lens 328/224 e 0 to 1 dl 1713279293 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295
[ 397.227846] Lustre: 9199:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713279279/real 1713279279] req@ffff88007d4c2680 x1796503167400704/t0(0) o104->lustre-OST0001@192.168.201.49@tcp:15/16 lens 328/224 e 0 to 1 dl 1713279295 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295
[ 397.236351] Lustre: 9199:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message
[ 410.862891] Lustre: 6930:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713279293/real 1713279293] req@ffff88012f147480 x1796503167400256/t0(0) o104->lustre-MDT0000@192.168.201.49@tcp:15/16 lens 328/224 e 0 to 1 dl 1713279309 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295
[ 413.227859] Lustre: 10241:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713279295/real 1713279295] req@ffff8800899fc380 x1796503167400576/t0(0) o104->lustre-OST0000@192.168.201.49@tcp:15/16 lens 328/224 e 0 to 1 dl 1713279311 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295
[ 418.922825] Lustre: mdt00_001: service thread pid 6930 was inactive for 40.069 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[ 418.928026] Pid: 6930, comm: mdt00_001 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 418.930479] Call Trace:
[ 418.931138] [<0>] ptlrpc_set_wait+0x7cf/0x850 [ptlrpc]
[ 418.932825] [<0>] ldlm_run_ast_work+0xe3/0x400 [ptlrpc]
[ 418.934755] [<0>] ldlm_handle_conflict_lock+0x70/0x300 [ptlrpc]
[ 418.937071] [<0>] ldlm_lock_enqueue+0x5c2/0xbb0 [ptlrpc]
[ 418.938613] [<0>] ldlm_cli_enqueue_local+0x1ec/0x880 [ptlrpc]
[ 418.940440] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[ 418.942676] [<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[ 418.944121] [<0>] mdt_object_stripes_lock+0x126/0x660 [mdt]
[ 418.945677] [<0>] mdt_reint_setattr+0x73b/0x15f0 [mdt]
[ 418.947406] [<0>] mdt_reint_rec+0x87/0x240 [mdt]
[ 418.948963] [<0>] mdt_reint_internal+0x74c/0xbc0 [mdt]
[ 418.950905] [<0>] mdt_reint+0x67/0x150 [mdt]
[ 418.952341] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[ 418.954359] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc]
[ 418.956096] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc]
[ 418.957359] [<0>] kthread+0xe4/0xf0
[ 418.958421] [<0>] ret_from_fork_nospec_begin+0x7/0x21
[ 418.959931] [<0>] 0xfffffffffffffffe
[ 421.354831] Lustre: ll_ost00_002: service thread pid 9199 was inactive for 40.126 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[ 421.354846] Pid: 16196, comm: ll_ost00_005 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 421.354846] Call Trace:
[ 421.354867] Lustre: ll_ost00_003: service thread pid 10241 was inactive for 40.126 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[ 421.354937] [<0>] ptlrpc_set_wait+0x7cf/0x850 [ptlrpc] [ 421.354992] [<0>] ldlm_run_ast_work+0xe3/0x400 [ptlrpc] [ 421.355040] [<0>] ldlm_handle_conflict_lock+0x70/0x300 [ptlrpc] [ 421.355120] [<0>] ldlm_lock_enqueue+0x5c2/0xbb0 [ptlrpc] [ 421.355172] [<0>] ldlm_cli_enqueue_local+0x377/0x880 [ptlrpc] [ 421.355190] [<0>] ofd_destroy_by_fid+0x1d1/0x520 [ofd] [ 421.355196] [<0>] ofd_destroy_hdl+0x20c/0xae0 [ofd] [ 421.355257] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 421.355301] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 421.355377] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 421.355383] [<0>] kthread+0xe4/0xf0 [ 421.355387] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 421.355421] [<0>] 0xfffffffffffffffe [ 421.409809] Lustre: Skipped 1 previous similar message [ 421.412556] Pid: 9199, comm: ll_ost00_002 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 421.415861] Call Trace: [ 421.416702] [<0>] ptlrpc_set_wait+0x7cf/0x850 [ptlrpc] [ 421.418086] [<0>] ldlm_run_ast_work+0xe3/0x400 [ptlrpc] [ 421.419647] [<0>] ldlm_handle_conflict_lock+0x70/0x300 [ptlrpc] [ 421.421574] [<0>] ldlm_lock_enqueue+0x5c2/0xbb0 [ptlrpc] [ 421.422999] [<0>] ldlm_cli_enqueue_local+0x377/0x880 [ptlrpc] [ 421.424729] [<0>] ofd_destroy_by_fid+0x1d1/0x520 [ofd] [ 421.425855] [<0>] ofd_destroy_hdl+0x20c/0xae0 [ofd] [ 421.427084] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 421.428724] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 421.430384] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 421.431349] [<0>] kthread+0xe4/0xf0 [ 421.432773] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 421.434332] [<0>] 0xfffffffffffffffe [ 426.871864] Lustre: 6930:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713279309/real 1713279309] req@ffff88012f147480 x1796503167400256/t0(0) o104->lustre-MDT0000@192.168.201.49@tcp:15/16 lens 328/224 e 0 to 1 dl 1713279325 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 426.879589] Lustre: 6930:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 2 previous similar messages [ 442.882865] Lustre: 6930:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713279325/real 1713279325] req@ffff88012f147480 x1796503167400256/t0(0) o104->lustre-MDT0000@192.168.201.49@tcp:15/16 lens 328/224 e 0 to 1 dl 1713279341 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 442.892546] Lustre: 6930:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 458.894903] Lustre: 6930:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713279341/real 1713279341] req@ffff88012f147480 x1796503167400256/t0(0) o104->lustre-MDT0000@192.168.201.49@tcp:15/16 lens 328/224 e 0 to 1 dl 1713279357 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 458.904029] Lustre: 6930:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 490.906851] Lustre: 6930:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713279373/real 1713279373] req@ffff88012f147480 x1796503167400256/t0(0) o104->lustre-MDT0000@192.168.201.49@tcp:15/16 lens 328/224 e 0 to 1 dl 1713279389 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 490.915411] Lustre: 6930:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [ 490.918525] LustreError: 
6930:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.201.49@tcp) failed to reply to blocking AST (req@ffff88012f147480 x1796503167400256 status 0 rc -110), evict it ns: mdt-lustre-MDT0000_UUID lock: ffff88012db906c0/0x348e4884490aa8 lrc: 4/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.201.49@tcp remote: 0xd45c001355a4b24a expref: 9 pid: 6930 timeout: 574 lvb_type: 0 [ 490.929568] LustreError: 138-a: lustre-MDT0000: A client on nid 192.168.201.49@tcp was evicted due to a lock blocking callback time out: rc -110 [ 490.933838] LustreError: 6921:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 16s: evicting client at 192.168.201.49@tcp ns: mdt-lustre-MDT0000_UUID lock: ffff88012db906c0/0x348e4884490aa8 lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.201.49@tcp remote: 0xd45c001355a4b24a expref: 10 pid: 6930 timeout: 0 lvb_type: 0 [ 493.235932] LustreError: 16196:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.201.49@tcp) failed to reply to blocking AST (req@ffff880092bb7480 x1796503167400640 status 0 rc -110), evict it ns: filter-lustre-OST0001_UUID lock: ffff88009d5e7840/0x348e4884490a1c lrc: 4/0,0 mode: PW/PW res: [0x2c0000401:0x5:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4194303) gid 0 flags: 0x60000400030020 nid: 192.168.201.49@tcp remote: 0xd45c001355a4b22e expref: 7 pid: 15604 timeout: 576 lvb_type: 0 [ 493.235934] LustreError: 138-a: lustre-OST0000: A client on nid 192.168.201.49@tcp was evicted due to a lock blocking callback time out: rc -110 [ 493.236011] LustreError: 6921:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 16s: evicting client at 192.168.201.49@tcp ns: filter-lustre-OST0000_UUID lock: ffff88009d5e6400/0x348e48844909c8 lrc: 3/0,0 mode: PW/PW res: [0x280000401:0x4:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400030020 nid: 192.168.201.49@tcp remote: 0xd45c001355a4b20b expref: 7 pid: 15604 timeout: 0 lvb_type: 0 [ 493.272979] LustreError: 16196:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) Skipped 2 previous similar messages [ 494.106375] Lustre: DEBUG MARKER: == recovery-small test 10b: re-send BL AST =============== 10:56:32 (1713279392) [ 513.306280] Lustre: DEBUG MARKER: == recovery-small test 10c: re-send BL AST vs reconnect race (LU-5569) ========================================================== 10:56:51 (1713279411) [ 514.402045] Lustre: lustre-MDT0001: Client 716a7e99-fdaf-4e6a-9b51-36efd0c3ceee (at 192.168.201.49@tcp) reconnecting [ 514.406432] Lustre: Skipped 2 previous similar messages [ 517.409114] Lustre: DEBUG MARKER: == recovery-small test 10d: test failed blocking ast ===== 10:56:55 (1713279415) [ 518.914655] LustreError: 9197:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.201.49@tcp) returned error from blocking AST (req@ffff88012dee9180 x1796503167439168 status -71 rc -71), evict it ns: filter-lustre-OST0000_UUID lock: ffff88012db90000/0x348e4884490eb4 lrc: 4/0,0 mode: PW/PW res: [0x280000401:0x7:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000480000020 nid: 192.168.201.49@tcp remote: 0xd45c001355a4b45e expref: 5 pid: 9197 timeout: 618 lvb_type: 0 [ 518.923734] LustreError: 138-a: lustre-OST0000: A client on nid 192.168.201.49@tcp was evicted due to a lock blocking 
callback time out: rc -71 [ 518.926659] LustreError: Skipped 2 previous similar messages [ 518.927849] LustreError: 6921:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 192.168.201.49@tcp ns: filter-lustre-OST0000_UUID lock: ffff88012db90000/0x348e4884490eb4 lrc: 3/0,0 mode: PW/PW res: [0x280000401:0x7:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000480000020 nid: 192.168.201.49@tcp remote: 0xd45c001355a4b45e expref: 6 pid: 9197 timeout: 0 lvb_type: 0 [ 518.943815] LustreError: 6921:0:(ldlm_lockd.c:261:expired_lock_main()) Skipped 1 previous similar message [ 522.192077] Lustre: DEBUG MARKER: == recovery-small test 10e: re-send BL AST vs reconnect race 2 ========================================================== 10:57:00 (1713279420) [ 522.529421] Lustre: DEBUG MARKER: SKIP: recovery-small test_10e need two clients [ 524.384516] Lustre: DEBUG MARKER: == recovery-small test 11: wake up a thread waiting for completion after eviction (b=2460) ========================================================== 10:57:02 (1713279422) [ 544.915924] Lustre: DEBUG MARKER: == recovery-small test 12: recover from timed out resend in ptlrpcd (b=2494) ========================================================== 10:57:22 (1713279442) [ 545.187053] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 587.236792] Lustre: DEBUG MARKER: == recovery-small test 13: mdc_readpage restart test (bug 1138) ========================================================== 10:58:05 (1713279485) [ 606.738477] Lustre: DEBUG MARKER: == recovery-small test 14: mdc_readpage resend test (bug 1138) ========================================================== 10:58:24 (1713279504) [ 607.014088] Lustre: *** cfs_fail_loc=106, val=0*** [ 607.015409] Lustre: Skipped 1 previous similar message [ 610.238811] Lustre: DEBUG MARKER: == recovery-small test 15: failed open (-ENOMEM) ========= 10:58:28 (1713279508) [ 610.481087] Lustre: *** cfs_fail_loc=128, val=0*** [ 613.382620] Lustre: DEBUG MARKER: == recovery-small test 16: timeout bulk put, don't evict client (2732) ========================================================== 10:58:31 (1713279511) [ 613.757562] Lustre: *** cfs_fail_loc=504, val=0*** [ 613.758899] LustreError: 24783:0:(ldlm_lib.c:3601:target_bulk_io()) @@@ truncated bulk READ 0(102400) req@ffff880130373480 x1796503162131072/t0(0) o3->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:303/0 lens 488/440 e 0 to 0 dl 1713279523 ref 1 fl Interpret:/200/0 rc 0/0 job:'cmp.0' uid:0 gid:0 [ 613.764734] Lustre: lustre-OST0000: Bulk IO read error with 647db76f-4734-4041-89e8-8006c132c5b7 (at 192.168.201.49@tcp), client will retry: rc -110 [ 653.680754] Lustre: DEBUG MARKER: == recovery-small test 17a: timeout bulk get, don't evict client (2732) ========================================================== 10:59:11 (1713279551) [ 698.242007] Lustre: DEBUG MARKER: == recovery-small test 17b: timeout bulk get, don't evict client (3582) ========================================================== 10:59:56 (1713279596) [ 698.591524] Lustre: DEBUG MARKER: SKIP: recovery-small test_17b Needs multiple clients [ 700.480001] Lustre: DEBUG MARKER: == recovery-small test 18a: manual ost invalidate clears page cache immediately ========================================================== 10:59:58 (1713279598) [ 703.521646] Lustre: DEBUG MARKER: == recovery-small test 18b: eviction and reconnect clears page cache (2766)
========================================================== 11:00:01 (1713279601) [ 704.005923] Lustre: 31604:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 647db76f-4734-4041-89e8-8006c132c5b7 at administrative request [ 728.991550] Lustre: DEBUG MARKER: == recovery-small test 18c: Dropped connect reply after eviction handling (14755) ========================================================== 11:00:27 (1713279627) [ 729.438092] Lustre: 32335:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 647db76f-4734-4041-89e8-8006c132c5b7 at administrative request [ 730.695596] Lustre: *** cfs_fail_loc=225, val=0*** [ 730.696981] Lustre: Skipped 1 previous similar message [ 745.664006] Lustre: DEBUG MARKER: == recovery-small test 19a: test expired_lock_main on mds (2867) ========================================================== 11:00:43 (1713279643) [ 746.147568] Lustre: *** cfs_fail_loc=304, val=0*** [ 762.159685] Lustre: *** cfs_fail_loc=304, val=0*** [ 778.180678] Lustre: lustre-MDT0000: Client 647db76f-4734-4041-89e8-8006c132c5b7 (at 192.168.201.49@tcp) reconnecting [ 778.183398] Lustre: Skipped 6 previous similar messages [ 778.186246] Lustre: *** cfs_fail_loc=304, val=0*** [ 786.154835] ptlrpc_watchdog_fire: 1 callbacks suppressed [ 786.156859] Lustre: mdt00_003: service thread pid 8076 was inactive for 40.009 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [ 786.160600] Pid: 8076, comm: mdt00_003 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 786.162511] Call Trace: [ 786.163237] [<0>] ldlm_completion_ast+0x963/0xd00 [ptlrpc] [ 786.164371] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [ 786.165727] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [ 786.167077] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [ 786.168146] [<0>] mdt_object_stripes_lock+0x126/0x660 [mdt] [ 786.169416] [<0>] mdt_reint_setattr+0x73b/0x15f0 [mdt] [ 786.170515] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [ 786.171493] [<0>] mdt_reint_internal+0x74c/0xbc0 [mdt] [ 786.172539] [<0>] mdt_reint+0x67/0x150 [mdt] [ 786.173474] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 786.175282] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 786.176643] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 786.177810] [<0>] kthread+0xe4/0xf0 [ 786.178447] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 786.179371] [<0>] 0xfffffffffffffffe [ 794.188960] Lustre: *** cfs_fail_loc=304, val=0*** [ 810.214757] Lustre: *** cfs_fail_loc=304, val=0*** [ 826.256427] Lustre: *** cfs_fail_loc=304, val=0*** [ 842.278687] Lustre: *** cfs_fail_loc=304, val=0*** [ 846.314903] LustreError: 6921:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.201.49@tcp ns: mdt-lustre-MDT0000_UUID lock: ffff8800aa5b3840/0x348e48844917f2 lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.201.49@tcp remote: 0xd45c001355a4b759 expref: 17 pid: 6929 timeout: 845 lvb_type: 0 [ 850.574309] Lustre: DEBUG MARKER: == recovery-small test 19b: test expired_lock_main on ost (2867) ========================================================== 11:02:28 (1713279748) [ 882.720006] Lustre: *** cfs_fail_loc=304, val=0*** [ 882.721217] Lustre: Skipped 4 previous similar messages [ 946.777645] Lustre: *** cfs_fail_loc=304, val=0*** [ 946.779280] Lustre: Skipped 7 previous similar messages [ 951.274885] LustreError:
6921:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.201.49@tcp ns: filter-lustre-OST0001_UUID lock: ffff88012db91b00/0x348e4884491aed lrc: 3/0,0 mode: PW/PW res: [0x2c0000401:0xc:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 192.168.201.49@tcp remote: 0xd45c001355a4b90b expref: 6 pid: 10241 timeout: 950 lvb_type: 0 [ 954.911388] Lustre: DEBUG MARKER: == recovery-small test 19c: check reconnect and lock resend do not trigger expired_lock_main ========================================================== 11:04:12 (1713279852) [ 965.697286] Lustre: DEBUG MARKER: == recovery-small test 20a: ldlm_handle_enqueue error (should return error) ========================================================== 11:04:23 (1713279863) [ 969.324283] Lustre: DEBUG MARKER: == recovery-small test 20b: ldlm_handle_enqueue error (should return error) ========================================================== 11:04:27 (1713279867) [ 973.155554] Lustre: DEBUG MARKER: == recovery-small test 21a: drop close request while close and open are both in flight ========================================================== 11:04:31 (1713279871) [ 973.504412] LustreError: 6929:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout id 129 sleeping for 5000ms [ 974.809848] LustreError: 6929:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout interrupted [ 974.944908] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 995.152069] Lustre: DEBUG MARKER: == recovery-small test 21b: drop open request while close and open are both in flight ========================================================== 11:04:53 (1713279893) [ 1141.082799] Lustre: DEBUG MARKER: == recovery-small test 21c: drop both request while close and open are both in flight ========================================================== 11:07:18 (1713280038) [ 1164.997555] Lustre: DEBUG MARKER: == recovery-small test 21d: drop close reply while close and open are both in flight ========================================================== 11:07:43 (1713280063) [ 1165.331681] LustreError: 6930:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout id 129 sleeping for 5000ms [ 1166.634776] LustreError: 6930:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout interrupted [ 1166.842722] Lustre: *** cfs_fail_loc=122, val=2147483648*** [ 1166.845023] LustreError: 6933:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012fe2bb80 x1796503162214528/t4294967535(0) o35->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:101/0 lens 392/456 e 0 to 0 dl 1713280076 ref 1 fl Interpret:/200/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 1166.854374] LustreError: 6933:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 1182.843569] Lustre: 6933:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88007fa06c50 x1796503162214528/t4294967535(0) o35->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:117/0 lens 392/456 e 0 to 0 dl 1713280092 ref 1 fl Interpret:/202/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 1186.758450] Lustre: DEBUG MARKER: == recovery-small test 21e: drop open reply while close and open are both in flight ========================================================== 11:08:04 (1713280084) [ 1187.080126] LustreError: 12734:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880092a5b100 x1796503162219200/t4294967552(0) o36->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:246/0 
lens 488/456 e 0 to 0 dl 1713280221 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1328.092375] Lustre: lustre-MDT0000: Client 647db76f-4734-4041-89e8-8006c132c5b7 (at 192.168.201.49@tcp) reconnecting [ 1328.097097] Lustre: Skipped 20 previous similar messages [ 1328.113658] Lustre: 8076:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880089bc6d80 x1796503162219200/t4294967552(0) o36->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:387/0 lens 488/3152 e 0 to 0 dl 1713280362 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1330.617050] Lustre: DEBUG MARKER: == recovery-small test 21f: drop both reply while close and open are both in flight ========================================================== 11:10:28 (1713280228) [ 1331.059083] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 1331.062416] Lustre: Skipped 1 previous similar message [ 1331.065877] LustreError: 6929:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012a2fc450 x1796503162231616/t4294967571(0) o36->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:390/0 lens 488/456 e 0 to 0 dl 1713280365 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1348.711480] Lustre: 6931:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88008980c000 x1796503162231616/t4294967571(0) o36->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:408/0 lens 488/3152 e 0 to 0 dl 1713280383 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1348.721012] Lustre: 6931:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 1353.345400] Lustre: DEBUG MARKER: == recovery-small test 21g: drop open reply and close request while close and open are both in flight ========================================================== 11:10:51 (1713280251) [ 1353.788476] LustreError: 8076:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88008996b800 x1796503162237120/t4294967590(0) o36->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:413/0 lens 488/456 e 0 to 0 dl 1713280388 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1353.802567] LustreError: 8076:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 1355.390110] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 1355.392742] Lustre: Skipped 3 previous similar messages [ 1371.392919] Lustre: 6930:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012ff81500 x1796503162237120/t4294967590(0) o36->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:430/0 lens 488/3152 e 0 to 0 dl 1713280405 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1375.149985] Lustre: DEBUG MARKER: == recovery-small test 21h: drop open request and close reply while close and open are both in flight ========================================================== 11:11:13 (1713280273) [ 1397.547304] Lustre: DEBUG MARKER: == recovery-small test 22: drop close request and do mknod ========================================================== 11:11:35 (1713280295) [ 1417.657601] Lustre: DEBUG MARKER: == recovery-small test 23: client hang when close a file after mds crash ========================================================== 11:11:55 (1713280315) [ 1424.004702] Lustre: Failing over lustre-MDT0000 [ 1424.093431] Lustre: server umount lustre-MDT0000 complete [ 1425.227335] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1425.229433] Lustre: 
lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1425.232515] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1428.652604] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1428.652971] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1428.666703] Lustre: Skipped 2 previous similar messages [ 1433.659834] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1433.667053] LustreError: Skipped 7 previous similar messages [ 1436.864384] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1436.917176] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1437.017867] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1437.032545] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1438.077546] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1438.861874] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1442.030938] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 1442.045247] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. 
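The failover sequence above shows the recovery gate in action: the restarted MDT announces it "Will be in recovery for at least 1:00, or until 2 clients reconnect", then ends recovery early ("Recovery over after 0:03") once both clients are back, evicting any stragglers when the window expires. A minimal sketch of that completion rule follows; the names (run_recovery_gate, known_clients, window_seconds) are illustrative, not Lustre symbols.

def run_recovery_gate(known_clients, reconnect_events, window_seconds=60.0):
    # known_clients: UUIDs recorded in the target's client table before failover
    # reconnect_events: (seconds_after_restart, uuid) pairs
    recovered = set()
    for offset, uuid in sorted(reconnect_events):
        if offset > window_seconds:
            break                       # recovery window expired first
        if uuid in known_clients:
            recovered.add(uuid)
            if recovered == known_clients:
                break                   # everyone is back: end recovery early
    evicted = known_clients - recovered
    return recovered, evicted

# Both clients reconnect inside the window, so nobody is evicted, matching
# "Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted".
print(run_recovery_gate({"uuid-a", "uuid-b"}, [(1.0, "uuid-a"), (3.0, "uuid-b")]))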
[ 1442.071761] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:21 to 0x2c0000401:65) [ 1442.071794] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:23 to 0x280000401:65) [ 1442.773017] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1443.332043] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1448.724845] Lustre: DEBUG MARKER: == recovery-small test 24a: fsync error (should return error) ========================================================== 11:12:26 (1713280346) [ 1449.058687] Lustre: 14781:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 647db76f-4734-4041-89e8-8006c132c5b7 at administrative request [ 1452.980121] Lustre: DEBUG MARKER: == recovery-small test 24b: test dirty page discard due to client eviction ========================================================== 11:12:31 (1713280351) [ 1453.514596] Lustre: 15496:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 647db76f-4734-4041-89e8-8006c132c5b7 at administrative request [ 1458.124535] Lustre: DEBUG MARKER: == recovery-small test 26a: evict dead exports =========== 11:12:36 (1713280356) [ 1458.747757] Lustre: DEBUG MARKER: SKIP: recovery-small test_26a mgs and ost1 are at the same node [ 1461.027162] Lustre: DEBUG MARKER: == recovery-small test 26b: evict dead exports =========== 11:12:39 (1713280359) [ 1461.559889] Lustre: DEBUG MARKER: SKIP: recovery-small test_26b mgs and ost1 are at the same node [ 1463.983610] Lustre: DEBUG MARKER: == recovery-small test 27: fail LOV while using OSC's ==== 11:12:42 (1713280362) [ 1465.500299] Lustre: Failing over lustre-MDT0000 [ 1465.613923] Lustre: server umount lustre-MDT0000 complete [ 1467.067799] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1467.068507] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1467.068509] LustreError: Skipped 1 previous similar message [ 1467.077748] Lustre: Skipped 3 previous similar messages [ 1472.075459] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1472.079371] LustreError: Skipped 7 previous similar messages [ 1477.700540] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode.
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1477.731373] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1477.825784] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1477.861337] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1478.613456] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1478.924468] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1482.828989] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 1482.830804] Lustre: Skipped 3 previous similar messages [ 1482.837471] Lustre: lustre-MDT0000: Recovery over after 0:04, of 2 clients 2 recovered and 0 were evicted. [ 1482.841569] Lustre: 6930:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012ff80a80 x1796503162361600/t8589935312(0) o36->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:417/0 lens 504/2888 e 0 to 0 dl 1713280392 ref 1 fl Interpret:/202/0 rc 0/0 job:'writemany.0' uid:0 gid:0 [ 1482.851362] Lustre: 6930:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 1482.854860] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:184 to 0x280000401:225) [ 1482.858193] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:184 to 0x2c0000401:225) [ 1571.151049] Lustre: Failing over lustre-MDT0000 [ 1571.288757] Lustre: server umount lustre-MDT0000 complete [ 1572.988505] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1572.989263] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1572.989266] LustreError: Skipped 5 previous similar messages [ 1573.001824] Lustre: Skipped 3 previous similar messages [ 1584.182846] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1584.241082] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1584.349933] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1584.373558] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1585.527832] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1589.100798] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1589.359411] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 1589.363773] Lustre: Skipped 3 previous similar messages [ 1589.373421] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. 
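The "restoring transno" lines above are the replay path for resent requests: when a client resends a request whose xid matches the reply record the server saved before the failover, the server restores the recorded transaction number and status instead of executing the operation a second time. A toy model of that idea follows; it is a sketch under stated assumptions, not the real last_rcvd on-disk format or the mdt_req_from_lrd() logic.

# last_rcvd: client_uuid -> (last_xid, transno, status); names illustrative
last_rcvd = {}
next_transno = 100

def handle_request(client, xid, execute):
    global next_transno
    record = last_rcvd.get(client)
    if record and record[0] == xid:
        return record[1], record[2]   # resend: restore the saved reply
    status = execute()                # first arrival: run the operation
    next_transno += 1
    last_rcvd[client] = (xid, next_transno, status)
    return next_transno, status

print(handle_request("647db76f", 1, lambda: 0))  # executes, assigns transno 101
print(handle_request("647db76f", 1, lambda: 0))  # resend: (101, 0) restored, no re-execution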
[ 1589.381880] Lustre: 6931:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88006f787100 x1796503168565568/t12884941851(0) o101->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:523/0 lens 672/3488 e 0 to 0 dl 1713280498 ref 1 fl Interpret:/202/0 rc 0/0 job:'writemany.0' uid:0 gid:0 [ 1589.405775] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:6857 to 0x2c0000401:6881) [ 1589.405806] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:6856 to 0x280000401:6881) [ 1593.779224] Lustre: DEBUG MARKER: == recovery-small test 28: handle error adding new clients (bug 6086) ========================================================== 11:14:51 (1713280491) [ 1609.926006] Lustre: 6931:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713280492/real 1713280492] req@ffff88012e306d80 x1796503169242304/t0(0) o104->lustre-MDT0000@192.168.201.49@tcp:15/16 lens 328/224 e 0 to 1 dl 1713280508 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 1609.940815] Lustre: 6931:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 5 previous similar messages [ 1611.705031] Lustre: *** cfs_fail_loc=12f, val=0*** [ 1611.707512] LustreError: 17320:0:(tgt_lastrcvd.c:1071:tgt_client_new()) lustre-OST0000: no room for 3 clients - fix LR_MAX_CLIENTS [ 1614.812764] Lustre: Failing over lustre-MDT0000 [ 1614.899487] Lustre: server umount lustre-MDT0000 complete [ 1619.152344] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.201.49@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1619.160188] LustreError: Skipped 14 previous similar messages [ 1619.404265] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1619.411259] Lustre: Skipped 3 previous similar messages [ 1627.789310] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1627.850287] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1627.979787] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1628.002513] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1629.057519] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1629.164815] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1632.990960] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 1632.995271] Lustre: Skipped 3 previous similar messages [ 1633.006074] Lustre: lustre-MDT0000: Recovery over after 0:04, of 2 clients 2 recovered and 0 were evicted. 
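Test 28 above exercises the client table: "no room for 3 clients - fix LR_MAX_CLIENTS" is the connect path failing once every slot in the target's client table is occupied (here the table is shrunk artificially via cfs_fail_loc=12f rather than being genuinely full). A sketch of fixed-slot allocation under those assumptions; ClientTable and max_clients are hypothetical names, not Lustre structures.

class ClientTable:
    def __init__(self, max_clients):
        self.slots = [None] * max_clients   # one slot per known client

    def add(self, uuid):
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = uuid
                return i                    # slot index for this client
        raise RuntimeError(f"no room for {uuid}: table is full")

    def remove(self, uuid):
        self.slots[self.slots.index(uuid)] = None   # slot becomes reusable

table = ClientTable(max_clients=2)
table.add("client-a")
table.add("client-b")
try:
    table.add("client-c")                   # third connect is refused
except RuntimeError as e:
    print(e)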
[ 1633.033985] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:6857 to 0x2c0000401:6913) [ 1633.034021] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:6883 to 0x280000401:6977) [ 1633.913293] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1634.505041] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1640.197005] Lustre: DEBUG MARKER: == recovery-small test 29a: error adding new clients doesn't cause LBUG (bug 22273) ========================================================== 11:15:38 (1713280538) [ 1641.197853] Lustre: Failing over lustre-MDT0000 [ 1641.312840] Lustre: server umount lustre-MDT0000 complete [ 1643.004574] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1643.012287] Lustre: Skipped 3 previous similar messages [ 1644.495372] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1644.551113] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1644.633210] Lustre: *** cfs_fail_loc=711, val=0*** [ 1644.660275] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1644.673195] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1644.673321] Lustre: lustre-MDT0000: Aborting client recovery [ 1644.673325] LustreError: 27551:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1644.681957] Lustre: 27581:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1649.663164] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 1649.666971] Lustre: Skipped 3 previous similar messages [ 1649.667090] Lustre: 27581:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client lustre-MDT0001-mdtlov_UUID@0@lo [ 1649.667102] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 1649.667549] Lustre: 27581:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1649.669515] Lustre: lustre-MDT0000-osd: cancel update llog [0x200000400:0x1:0x0] [ 1649.675226] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000401:0x1:0x0] [ 1649.693591] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation ldlm_enqueue to node 0@lo failed: rc = -107 [ 1649.697484] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
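Test 29a above forces an aborted recovery (cfs_fail_loc=711): rather than waiting for replays, the recovery overseer disconnects every export still marked as in recovery, which is where the "disconnect stale client" and "disconnecting 1 stale clients" lines come from; their in-flight operations then fail with eviction errors. A minimal sketch with a hypothetical Export record, not the class_disconnect_stale_exports() implementation.

from dataclasses import dataclass

@dataclass
class Export:
    uuid: str
    in_recovery: bool

def abort_recovery(exports):
    stale = [e for e in exports if e.in_recovery]
    for e in stale:
        e.in_recovery = False          # drop it; the client must reconnect fresh
        print(f"disconnect stale client {e.uuid}")
    print(f"disconnecting {len(stale)} stale clients")

abort_recovery([Export("mdtlov_UUID", True), Export("client-b", False)])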
[ 1649.699995] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:6857 to 0x2c0000401:6945) [ 1649.700605] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:6883 to 0x280000401:7009) [ 1650.738359] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1661.898529] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 [ 1661.978377] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 1665.575123] Lustre: DEBUG MARKER: == recovery-small test 29b: error adding new clients doesn't cause LBUG (bug 22273) ========================================================== 11:16:03 (1713280563) [ 1666.529437] Lustre: Failing over lustre-OST0000 [ 1666.555095] Lustre: server umount lustre-OST0000 complete [ 1669.181845] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 1669.185429] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 1669.240597] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1669.246773] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1669.252963] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 1669.253088] Lustre: lustre-OST0000: Aborting recovery [ 1669.253091] LustreError: 29862:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery [ 1669.258928] Lustre: Skipped 2 previous similar messages [ 1669.260514] Lustre: 29875:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1669.264065] Lustre: 29875:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 1 previous similar message [ 1669.268425] Lustre: 29875:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-OST0000: disconnect stale client 647db76f-4734-4041-89e8-8006c132c5b7@ [ 1669.273252] Lustre: lustre-OST0000: disconnecting 3 stale clients [ 1669.277175] LustreError: 29875:0:(ofd_obd.c:1315:ofd_iocontrol()) lustre-OST0000: iocontrol from 'tgt_recover_0' cmd=c00866c1 _IOWR('f', 193, 8) unrecognized: rc = -25 [ 1670.453880] Lustre: *** cfs_fail_loc=711, val=0*** [ 1670.483862] LustreError: 167-0: lustre-OST0000-osc-MDT0000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. [ 1670.489394] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 1670.493037] Lustre: Skipped 4 previous similar messages [ 1670.785620] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1685.860067] Lustre: DEBUG MARKER: == recovery-small test 50: failover MDS under load ======= 11:16:23 (1713280583) [ 1696.449856] Lustre: Failing over lustre-MDT0000 [ 1696.463849] Lustre: lustre-MDT0000: Not available for connect from 192.168.201.49@tcp (stopping) [ 1696.525639] Lustre: server umount lustre-MDT0000 complete [ 1699.278152] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.201.49@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1699.288180] LustreError: Skipped 14 previous similar messages [ 1709.023995] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1709.068932] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1709.170529] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1709.188281] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1709.190009] Lustre: Skipped 2 previous similar messages [ 1709.292283] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1710.137409] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1714.172406] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 1714.179189] Lustre: lustre-MDT0000: Recovery over after 0:05, of 2 clients 2 recovered and 0 were evicted. [ 1714.198090] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:7688 to 0x2c0000401:7713) [ 1714.198093] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:7752 to 0x280000401:7777) [ 1714.772211] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1715.161272] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1776.921782] Lustre: Failing over lustre-MDT0000 [ 1777.067819] Lustre: server umount lustre-MDT0000 complete [ 1779.291923] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1779.293227] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1779.293230] LustreError: Skipped 9 previous similar messages [ 1779.307855] Lustre: Skipped 7 previous similar messages [ 1789.850377] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1789.901666] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1789.998124] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1790.014578] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1791.024940] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1794.428708] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1795.005085] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 1795.007867] Lustre: Skipped 3 previous similar messages [ 1795.013687] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. 
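The repeated "137-5: ... not available for connect (no target)" refusals between umount and remount reflect reconnect polling: peers keep retrying the connect at intervals and are refused until the target is mounted again, after which the next attempt restores the connection (and the server rate-limits the refusals, hence "Skipped N previous similar messages"). A rough model with illustrative names and intervals; real retry timing is driven by Lustre's timeout settings.

def reconnect(target_up_at, interval=5.0, give_up_after=120.0):
    # Retry the connect every `interval` seconds; every attempt made
    # before the target is remounted is refused ("no target").
    t = 0.0
    attempts = 0
    while t <= give_up_after:
        attempts += 1
        if t >= target_up_at:
            return f"connected after {attempts} attempts at t={t:.0f}s"
        t += interval                  # refused: wait and try again
    return "gave up: check that the target is mounted on the other server"

print(reconnect(target_up_at=12.0))    # refused at t=0, 5, 10; succeeds at t=15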
[ 1795.029714] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:12231 to 0x2c0000401:12257) [ 1795.032957] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:12295 to 0x280000401:12321) [ 1795.573023] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1795.965554] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1857.733235] Lustre: Failing over lustre-MDT0000 [ 1857.860986] Lustre: server umount lustre-MDT0000 complete [ 1860.123391] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1860.123762] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1860.123764] Lustre: Skipped 1 previous similar message [ 1870.246772] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1870.300160] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1870.393337] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1870.406131] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1871.406365] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1874.558030] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1875.406848] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 1875.412808] Lustre: Skipped 3 previous similar messages [ 1875.424428] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 1875.452328] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:17042 to 0x2c0000401:17057) [ 1875.452371] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:17106 to 0x280000401:17121) [ 1876.070976] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1876.466321] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1901.810342] Lustre: DEBUG MARKER: == recovery-small test 51: failover MDS during recovery == 11:19:59 (1713280799) [ 1903.682977] Lustre: Failing over lustre-MDT0000 [ 1903.794409] Lustre: server umount lustre-MDT0000 complete [ 1909.613102] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.201.49@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1909.622743] LustreError: Skipped 34 previous similar messages [ 1916.189112] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1917.108983] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1918.043924] Lustre: DEBUG MARKER: test_51: failover in 1 sec [ 1919.628837] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1919.685479] Lustre: Failing over lustre-MDT0000 [ 1919.693803] LustreError: 4559:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1919.696383] Lustre: 3976:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1919.700560] Lustre: 3976:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1919.703906] Lustre: lustre-MDT0000-osd: cancel update llog [0x200002b10:0x1:0x0] [ 1919.713252] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000405:0x1:0x0] [ 1919.718079] LustreError: 3976:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff880089998e00 x1796503171999616/t0(0) o700->lustre-MDT0001-osp-MDT0000@0@lo:30/10 lens 264/248 e 0 to 0 dl 0 ref 2 fl Rpc:QU/200/ffffffff rc 0/-1 job:'tgt_recover_0.0' uid:0 gid:0 [ 1919.726601] LustreError: 3976:0:(fid_request.c:233:seq_client_alloc_seq()) cli-cli-lustre-MDT0001-osp-MDT0000: Cannot allocate new meta-sequence: rc = -5 [ 1919.730909] LustreError: 3976:0:(fid_request.c:335:seq_client_alloc_fid()) cli-cli-lustre-MDT0001-osp-MDT0000: Can't allocate new sequence: rc = -5 [ 1919.736378] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 0 recovered and 2 were evicted. [ 1919.755921] Lustre: 3976:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 1919.826392] Lustre: server umount lustre-MDT0000 complete [ 1932.426894] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1933.525601] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1934.791101] Lustre: DEBUG MARKER: test_51: failover in 5 sec [ 1940.454497] Lustre: Failing over lustre-MDT0000 [ 1940.464565] LustreError: 5701:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1940.471037] Lustre: 5127:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1940.475294] Lustre: 5127:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1940.479327] Lustre: lustre-MDT0000-osd: cancel update llog [0x200004a50:0x1:0x0] [ 1940.488914] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000405:0x1:0x0] [ 1940.499789] LustreError: 5127:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff88009a1f7800 x1796503172019584/t0(0) o700->lustre-MDT0001-osp-MDT0000@0@lo:30/10 lens 264/248 e 0 to 0 dl 0 ref 2 fl Rpc:QU/200/ffffffff rc 0/-1 job:'tgt_recover_0.0' uid:0 gid:0 [ 1940.514902] LustreError: 5127:0:(fid_request.c:233:seq_client_alloc_seq()) cli-cli-lustre-MDT0001-osp-MDT0000: Cannot allocate new meta-sequence: rc = -5 [ 1940.525320] LustreError: 5127:0:(fid_request.c:335:seq_client_alloc_fid()) cli-cli-lustre-MDT0001-osp-MDT0000: Can't allocate new sequence: rc = -5 [ 1940.546003] Lustre: 5127:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 1940.620773] Lustre: server umount lustre-MDT0000 complete [ 1953.520308] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1953.572827] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1953.577571] LustreError: Skipped 2 previous similar messages [ 1954.609526] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1954.668752] Lustre: 3490:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713280835/real 1713280835] req@ffff88006f644e00 x1796503172018880/t0(0) o400->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 224/224 e 0 to 1 dl 1713280851 ref 1 fl Rpc:XQr/2c0/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' uid:0 gid:0 [ 1955.704150] Lustre: DEBUG MARKER: test_51: failover in 10 sec [ 1966.331529] Lustre: Failing over lustre-MDT0000 [ 1966.340923] LustreError: 6851:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1966.345874] Lustre: 6270:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1966.350543] Lustre: 6270:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1966.355698] Lustre: lustre-MDT0000-osd: cancel update llog [0x200005220:0x1:0x0] [ 1966.365938] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000405:0x1:0x0] [ 1966.372684] LustreError: 6270:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff880089844700 x1796503172030016/t0(0) o700->lustre-MDT0001-osp-MDT0000@0@lo:30/10 lens 264/248 e 0 to 0 dl 0 ref 2 fl Rpc:QU/200/ffffffff rc 0/-1 job:'tgt_recover_0.0' uid:0 gid:0 [ 1966.383285] LustreError: 6270:0:(fid_request.c:233:seq_client_alloc_seq()) cli-cli-lustre-MDT0001-osp-MDT0000: Cannot allocate new meta-sequence: rc = -5 [ 1966.389446] LustreError: 6270:0:(fid_request.c:335:seq_client_alloc_fid()) cli-cli-lustre-MDT0001-osp-MDT0000: Can't allocate new sequence: rc = -5 [ 1966.413299] Lustre: 6270:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 1966.492693] Lustre: server umount lustre-MDT0000 complete [ 1978.644375] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1979.560574] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 1980.494189] Lustre: DEBUG MARKER: test_51: failover in 20 sec [ 1981.759834] Lustre: 3490:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713280852/real 1713280852] req@ffff880130370000 x1796503172026560/t0(0) o400->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 224/224 e 1 to 1 dl 1713280880 ref 1 fl Rpc:XQr/2c0/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' uid:0 gid:0 [ 1981.785137] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:18714 to 0x2c0000401:18753) [ 1981.785244] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:18781 to 0x280000401:18817) [ 2001.015290] Lustre: Failing over lustre-MDT0000 [ 2001.154428] Lustre: server umount lustre-MDT0000 complete [ 2001.803416] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 2001.806246] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2001.811470] Lustre: Skipped 12 previous similar messages [ 2013.066323] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2013.182323] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 2013.186064] Lustre: Skipped 4 previous similar messages [ 2013.208896] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 2013.212454] Lustre: Skipped 4 previous similar messages [ 2014.119457] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 2014.780290] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 2014.784751] Lustre: Skipped 3 previous similar messages [ 2014.969432] Lustre: DEBUG MARKER: test_51: failover in 25 sec [ 2018.188486] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 2018.190427] Lustre: Skipped 13 previous similar messages [ 2018.196292] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. [ 2018.198478] Lustre: Skipped 3 previous similar messages [ 2018.200607] Lustre: 8076:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009b6c4a80 x1796503180954368/t42949681968(0) o36->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:236/0 lens 512/2888 e 0 to 0 dl 1713280966 ref 1 fl Interpret:/202/0 rc 0/0 job:'writemany.0' uid:0 gid:0 [ 2018.207308] Lustre: 8076:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 2018.210277] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:20248 to 0x2c0000401:20289) [ 2018.210354] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:20312 to 0x280000401:20353) [ 2040.486393] Lustre: Failing over lustre-MDT0000 [ 2040.597091] Lustre: server umount lustre-MDT0000 complete [ 2053.148412] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2053.243551] Lustre: lustre-MDT0000: Not available for connect from 0@lo (not set up) [ 2053.246371] Lustre: Skipped 3 previous similar messages [ 2054.511816] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 2055.869158] Lustre: DEBUG MARKER: test_51: failover in 30 sec [ 2058.364372] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:22013 to 0x2c0000401:22049) [ 2058.364375] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:22077 to 0x280000401:22113) [ 2086.366138] Lustre: Failing over lustre-MDT0000 [ 2086.412114] LustreError: 3491:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8800a2484000 x1796503173256384/t0(0) o6->lustre-OST0001-osc-MDT0000@0@lo:28/4 lens 544/432 e 0 to 0 dl 0 ref 1 fl Rpc:QU/200/ffffffff rc 0/-1 job:'osp-syn-1-0.0' uid:0 gid:0 [ 2086.473013] Lustre: server umount lustre-MDT0000 complete [ 2098.727954] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2098.782007] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2098.786465] LustreError: Skipped 3 previous similar messages [ 2099.871616] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 2103.917907] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:24217 to 0x2c0000401:24257) [ 2103.917963] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:24280 to 0x280000401:24321) [ 2123.704692] Lustre: DEBUG MARKER: == recovery-small test 52: failover OST under load ======= 11:23:41 (1713281021) [ 2134.519287] Lustre: Failing over lustre-OST0000 [ 2134.530830] Lustre: lustre-OST0000: Not available for connect from 192.168.201.49@tcp (stopping) [ 2134.545384] Lustre: server umount lustre-OST0000 complete [ 2134.672979] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_create to node 0@lo failed: rc = -107 [ 2146.811421] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2146.817139] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 2148.024245] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 2148.025128] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 2148.025129] Lustre: Skipped 2 previous similar messages [ 2148.449354] Lustre: lustre-OST0000: Recovery over after 0:01, of 3 clients 3 recovered and 0 were evicted. 
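The IMP_CLOSED line above ("ptlrpc_import_delay_req ... IMP_CLOSED") is the client-side gate for queued RPCs: a request on a disconnected import is held and resent once recovery completes, but a request on an import that is being torn down fails immediately, since no recovery will ever come. The states and return values below are an illustrative reduction, not Lustre's actual import state machine.

from enum import Enum

class ImportState(Enum):
    FULL = "connected"      # send immediately
    DISCONN = "recovering"  # failover in progress
    CLOSED = "closed"       # import being torn down

def delay_req(state):
    # Decide what to do with an RPC queued on an import in `state`.
    if state is ImportState.FULL:
        return "send"
    if state is ImportState.DISCONN:
        return "delay"       # held and resent after recovery completes
    return "fail"            # IMP_CLOSED: fail the request at once

for s in ImportState:
    print(s.name, "->", delay_req(s))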
[ 2148.456762] Lustre: Skipped 2 previous similar messages [ 2150.471347] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2150.910571] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2465.575543] Lustre: Failing over lustre-OST0000 [ 2465.587559] Lustre: lustre-OST0000: Not available for connect from 192.168.201.49@tcp (stopping) [ 2465.601834] Lustre: server umount lustre-OST0000 complete [ 2466.453497] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_create to node 0@lo failed: rc = -107 [ 2466.456280] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2466.459317] Lustre: Skipped 13 previous similar messages [ 2466.461007] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2466.465829] LustreError: Skipped 81 previous similar messages [ 2477.567876] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2477.570846] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 2477.627846] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2477.631932] Lustre: Skipped 3 previous similar messages [ 2477.637019] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 2477.638916] Lustre: Skipped 3 previous similar messages [ 2478.746434] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 2478.941625] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 2479.198216] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 2479.198297] Lustre: lustre-OST0000: Recovery over after 0:01, of 3 clients 3 recovered and 0 were evicted. [ 2479.202656] Lustre: Skipped 13 previous similar messages [ 2481.035398] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2481.481712] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2796.335898] Lustre: Failing over lustre-OST0000 [ 2796.357091] Lustre: lustre-OST0000: Not available for connect from 192.168.201.49@tcp (stopping) [ 2797.124632] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_create to node 0@lo failed: rc = -107 [ 2798.365620] Lustre: server umount lustre-OST0000 complete [ 2810.830440] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2810.833235] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 2810.893465] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 2812.361408] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 2814.690412] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2815.095192] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3066.356795] Lustre: lustre-OST0001-osc-MDT0001: update sequence from 0x2c0000400 to 0x2c0000402 [ 3091.535930] Lustre: DEBUG MARKER: == recovery-small test 53a: touch: drop rep ============== 11:39:49 (1713281989) [ 3092.084252] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3092.085838] Lustre: Skipped 3 previous similar messages [ 3092.087565] LustreError: 10236:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880096101180 x1796503246386496/t0(0) o101->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:516/0 lens 576/688 e 0 to 0 dl 1713282001 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3092.097897] LustreError: 10236:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 3108.099630] Lustre: lustre-MDT0000: Client 647db76f-4734-4041-89e8-8006c132c5b7 (at 192.168.201.49@tcp) reconnecting [ 3108.105532] Lustre: Skipped 4 previous similar messages [ 3112.973586] Lustre: DEBUG MARKER: == recovery-small test 53b: touch: drop rep ============== 11:40:10 (1713282010) [ 3113.498313] LustreError: 6929:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012cb33800 x1796503246388992/t0(0) o101->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:537/0 lens 576/688 e 0 to 0 dl 1713282022 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3134.484780] Lustre: DEBUG MARKER: == recovery-small test 53c: touch: drop rep ============== 11:40:32 (1713282032) [ 3134.985808] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3134.988826] Lustre: Skipped 1 previous similar message [ 3134.991377] LustreError: 21627:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880091bdce00 x1796503246390784/t55834582415(0) o101->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:559/0 lens 664/664 e 0 to 0 dl 1713282044 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3150.986154] Lustre: 10236:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a6a3c700 x1796503246390784/t55834582415(0) o101->647db76f-4734-4041-89e8-8006c132c5b7@192.168.201.49@tcp:575/0 lens 664/3488 e 0 to 0 dl 1713282060 ref 1 fl Interpret:H/202/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3155.446365] Lustre: DEBUG MARKER: == recovery-small test 54: back in time ================== 11:40:53 (1713282053) [ 3166.428356] Lustre: Failing over lustre-MDT0000 [ 3166.460793] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3166.461477] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 3166.461480] Lustre: Skipped 3 previous similar messages [ 3166.475757] Lustre: Skipped 4 previous similar messages [ 3166.517881] Lustre: server umount lustre-MDT0000 complete [ 3170.611017] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.201.49@tcp (no target). 
If you are running an HA pair check that the target is mounted on the other server. [ 3170.618474] LustreError: Skipped 15 previous similar messages [ 3170.667692] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3179.447631] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3179.511763] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3179.633273] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3179.649539] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3179.652131] Lustre: Skipped 1 previous similar message [ 3180.622594] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 3180.627294] Lustre: Skipped 1 previous similar message [ 3180.787275] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3184.636947] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 3184.638813] Lustre: Skipped 3 previous similar messages [ 3184.645935] Lustre: lustre-MDT0000: Recovery over after 0:04, of 3 clients 3 recovered and 0 were evicted. [ 3184.650635] Lustre: Skipped 1 previous similar message [ 3184.674158] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:25578 to 0x280000401:25601) [ 3184.674171] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:25513 to 0x2c0000401:25537) [ 3185.494969] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3186.061491] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3191.849757] Lustre: DEBUG MARKER: == recovery-small test 55: ost_brw_read/write drops timed-out read/write request ========================================================== 11:41:29 (1713282089) [ 3195.888880] Lustre: *** cfs_fail_loc=21d, val=0*** [ 3195.890059] Lustre: Skipped 3 previous similar messages [ 3195.891191] LustreError: 24783:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.201.49@tcp because locking object 0x280000400:63170 took 0 seconds (limit was 11). [ 3195.895532] Lustre: lustre-OST0000: Bulk IO write error with 647db76f-4734-4041-89e8-8006c132c5b7 (at 192.168.201.49@tcp), client will retry: rc = -110 [ 3211.671430] Lustre: lustre-OST0000: Client 647db76f-4734-4041-89e8-8006c132c5b7 (at 192.168.201.49@tcp) reconnecting [ 3211.676171] Lustre: Skipped 2 previous similar messages [ 3211.681463] LustreError: 24783:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.201.49@tcp because locking object 0x280000400:63169 took 0 seconds (limit was 11). 
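Test 55 above shows the OST refusing to service stale bulk writes: if handling a write has already outlived the client's resend deadline, the server drops its copy and lets the client's retry win, reporting rc -110 back so the client resends. Here the check is tripped by fail_loc=21d rather than a real delay, which is why the log says the lock "took 0 seconds (limit was 11)". A sketch of that staleness check with illustrative numbers and names.

def service_bulk_write(lock_wait_seconds, limit_seconds=11):
    if lock_wait_seconds > limit_seconds:
        # The client has certainly resent by now; completing this stale
        # copy could race with the resend, so drop it and let the retry win.
        return "dropped: client will retry (rc = -110)"
    return "written"

print(service_bulk_write(0))     # normal path: write proceeds
print(service_bulk_write(30))    # stale request is dropped, client resends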
[ 3211.681528] Lustre: lustre-OST0000: Bulk IO write error with 647db76f-4734-4041-89e8-8006c132c5b7 (at 192.168.201.49@tcp), client will retry: rc = -110 [ 3211.681530] Lustre: Skipped 8 previous similar messages [ 3211.699654] LustreError: 24783:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 16 previous similar messages [ 3227.684250] LustreError: 24783:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.201.49@tcp because locking object 0x280000400:63169 took 0 seconds (limit was 11). [ 3227.684322] Lustre: lustre-OST0000: Bulk IO write error with 647db76f-4734-4041-89e8-8006c132c5b7 (at 192.168.201.49@tcp), client will retry: rc = -110 [ 3227.684324] Lustre: Skipped 8 previous similar messages [ 3227.690974] LustreError: 24783:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 8 previous similar messages [ 3243.706909] LustreError: 1313:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.201.49@tcp because locking object 0x280000400:63170 took 0 seconds (limit was 11). [ 3243.706976] Lustre: lustre-OST0000: Bulk IO write error with 647db76f-4734-4041-89e8-8006c132c5b7 (at 192.168.201.49@tcp), client will retry: rc = -110 [ 3243.706978] Lustre: Skipped 8 previous similar messages [ 3243.724650] LustreError: 1313:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 8 previous similar messages [ 3259.738186] LustreError: 9204:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.201.49@tcp because locking object 0x280000400:63169 took 0 seconds (limit was 11). [ 3259.738241] Lustre: lustre-OST0000: Bulk IO write error with 647db76f-4734-4041-89e8-8006c132c5b7 (at 192.168.201.49@tcp), client will retry: rc = -110 [ 3259.738244] Lustre: Skipped 9 previous similar messages [ 3259.755790] LustreError: 9204:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 10 previous similar messages [ 3283.264415] Lustre: DEBUG MARKER: == recovery-small test 56: do not fail on getattr resend ========================================================== 11:43:01 (1713282181) [ 3283.696681] LustreError: 6929:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout id 136 sleeping for 40000ms [ 3323.700852] LustreError: 6929:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout id 136 awake [ 3328.198949] Lustre: DEBUG MARKER: == recovery-small test 57: read procfs entries causes kernel crash ========================================================== 11:43:46 (1713282226) [ 3330.070826] Lustre: Failing over lustre-MDT0000 [ 3330.142878] Lustre: server umount lustre-MDT0000 complete [ 3332.969947] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3333.024373] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3333.127513] Lustre: lustre-MDT0000: Aborting client recovery [ 3333.129069] LustreError: 24000:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 3333.131910] Lustre: 24030:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 3333.134493] Lustre: 24030:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 3333.137002] Lustre: 24030:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client lustre-MDT0001-mdtlov_UUID@ [ 3333.140855] Lustre: 24030:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 2 previous similar messages [ 3333.143498] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 3333.147049] Lustre: lustre-MDT0000-osd: cancel update llog [0x2000059f0:0x1:0x0] [ 3333.155809] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000405:0x1:0x0] [ 3333.182530] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:25513 to 0x2c0000401:25569) [ 3333.182585] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:25603 to 0x280000401:25633) [ 3334.072790] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3338.131217] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 3338.137354] LustreError: Skipped 1 previous similar message [ 3347.609438] Lustre: DEBUG MARKER: == recovery-small test 58: Eviction in the middle of open RPC reply processing ========================================================== 11:44:05 (1713282245) [ 3364.780910] Lustre: 21627:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713282247/real 1713282247] req@ffff88012deec380 x1796503187648576/t0(0) o104->lustre-MDT0000@192.168.201.49@tcp:15/16 lens 328/224 e 0 to 1 dl 1713282263 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 3369.428384] Lustre: DEBUG MARKER: == recovery-small test 59: Read cancel race on client eviction ========================================================== 11:44:27 (1713282267) [ 3380.011053] LustreError: 21589:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.201.49@tcp) returned error from blocking AST (req@ffff8800a2734000 x1796503187655168 status -107 rc -107), evict it ns: filter-lustre-OST0001_UUID lock: ffff88012ffba640/0x348e488516689f lrc: 4/0,0 mode: PW/PW res: [0x2c0000401:0x63e2:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 192.168.201.49@tcp remote: 0xd45c001355c45b40 expref: 5 pid: 21556 timeout: 3479 lvb_type: 0 [ 3380.031776] LustreError: 138-a: lustre-OST0001: A client on nid 192.168.201.49@tcp was evicted due to a lock blocking callback time out: rc -107 [ 3380.037815] LustreError: 6921:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 192.168.201.49@tcp ns: filter-lustre-OST0001_UUID lock: ffff88012ffba640/0x348e488516689f lrc: 3/0,0 mode: PW/PW res: [0x2c0000401:0x63e2:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 
192.168.201.49@tcp remote: 0xd45c001355c45b40 expref: 6 pid: 21556 timeout: 0 lvb_type: 0 [ 3380.061282] LustreError: 6921:0:(ldlm_lockd.c:261:expired_lock_main()) Skipped 1 previous similar message [ 3384.423261] Lustre: DEBUG MARKER: == recovery-small test 60: Add Changelog entries during MDS failover ========================================================== 11:44:42 (1713282282) [ 3384.486135] LustreError: 12734:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.201.49@tcp) returned error from blocking AST (req@ffff8800a6662680 x1796503187656320 status -107 rc -107), evict it ns: mdt-lustre-MDT0000_UUID lock: ffff8800a20f3600/0x348e48851668bb lrc: 4/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.201.49@tcp remote: 0xd45c001355c45b4e expref: 6 pid: 21627 timeout: 3483 lvb_type: 0 [ 3384.507536] LustreError: 138-a: lustre-MDT0000: A client on nid 192.168.201.49@tcp was evicted due to a lock blocking callback time out: rc -107 [ 3384.512026] LustreError: 6921:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 192.168.201.49@tcp ns: mdt-lustre-MDT0000_UUID lock: ffff8800a20f3600/0x348e48851668bb lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.201.49@tcp remote: 0xd45c001355c45b4e expref: 7 pid: 21627 timeout: 0 lvb_type: 0 [ 3385.541841] Lustre: lustre-MDD0000: changelog on [ 3386.584227] Lustre: lustre-MDD0001: changelog on [ 3400.697359] Lustre: lustre-MDT0001: haven't heard from client f4743e1b-248d-4cc0-bf17-33f1161581e7 (at 192.168.201.49@tcp) in 32 seconds. I think it's dead, and I am evicting it. exp ffff88012bf61000, cur 1713282299 expire 1713282269 last 1713282267 [ 3415.665287] Lustre: Failing over lustre-MDT0000 [ 3415.684825] Lustre: lustre-MDT0000: Not available for connect from 192.168.201.49@tcp (stopping) [ 3415.688905] Lustre: Skipped 2 previous similar messages [ 3415.774155] Lustre: server umount lustre-MDT0000 complete [ 3428.344764] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3428.402675] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3428.526025] Lustre: lustre-MDD0000: changelog on [ 3429.331702] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3433.488907] LustreError: 3490:0:(import.c:1314:ptlrpc_connect_interpret()) lustre-MDT0000_UUID: went back in time (transno 60129542151 was previously committed, server now claims 55834582421)! 
[ 3433.499408] LustreError: 3490:0:(import.c:1316:ptlrpc_connect_interpret()) For further information, see http://doc.lustre.org/lustre_manual.xhtml#went_back_in_time [ 3433.540528] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26855 to 0x2c0000401:26881) [ 3433.540554] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:26919 to 0x280000401:26945) [ 3464.593459] Lustre: lustre-MDD0000: changelog off [ 3465.586258] Lustre: lustre-MDD0001: changelog off [ 3471.702775] Lustre: DEBUG MARKER: == recovery-small test 61: Verify to not reuse orphan objects - bug 17025 ========================================================== 11:46:09 (1713282369) [ 3474.666125] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3475.805440] Lustre: Failing over lustre-MDT0000 [ 3475.906849] Lustre: server umount lustre-MDT0000 complete [ 3480.128399] LDISKFS-fs (dm-0): recovery complete [ 3480.131159] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3480.279238] Lustre: lustre-MDT0000: Aborting client recovery [ 3480.280667] LustreError: 31356:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 3480.283934] Lustre: 31386:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 3480.287149] Lustre: 31386:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 3480.289644] Lustre: 31386:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client cdeb3180-968b-44d2-b2f1-3708a4e3764f@ [ 3480.293740] Lustre: lustre-MDT0000: disconnecting 2 stale clients [ 3480.297111] Lustre: lustre-MDT0000-osd: cancel update llog [0x2000088d0:0x1:0x0] [ 3480.302294] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000406:0x1:0x0] [ 3480.322982] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26855 to 0x2c0000401:26913) [ 3480.323068] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:26919 to 0x280000401:26977) [ 3481.399976] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3485.281254] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
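The 'Aborting client recovery' sequences in tests 57 and 61 above come from restarting the target with recovery explicitly disabled, so stale exports are evicted instead of being replayed. A hedged sketch of that restart (device and mount-point names are illustrative):

    umount /mnt/lustre-mds1
    # -o abort_recov skips the recovery window, yielding the
    # 'Aborting client recovery' / 'disconnecting N stale clients' lines
    mount -t lustre -o abort_recov /dev/mapper/mds1_flakey /mnt/lustre-mds1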
[ 3496.885473] Lustre: DEBUG MARKER: == recovery-small test 65: lock enqueue for destroyed export ========================================================== 11:46:34 (1713282394) [ 3497.393210] LustreError: 12051:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout id 31e sleeping for 6000ms [ 3497.398243] Lustre: *** cfs_fail_loc=31e, val=0*** [ 3497.398245] Lustre: Skipped 3 previous similar messages [ 3499.399925] LustreError: 18824:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout id 31e sleeping for 6000ms [ 3501.732063] Lustre: 32748:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting cdeb3180-968b-44d2-b2f1-3708a4e3764f at administrative request [ 3501.743416] LustreError: 9214:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) cfs_fail_timeout id 31e sleeping for 4000ms [ 3503.404837] LustreError: 12051:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout id 31e awake [ 3503.409685] LustreError: 12051:0:(ldlm_lockd.c:1499:ldlm_handle_enqueue()) ### lock on destroyed export ffff88008a74b800 ns: filter-lustre-OST0000_UUID lock: ffff8801346b9680/0x348e48851ccec2 lrc: 3/0,0 mode: --/PW res: [0x280000401:0x6963:0x0].0x0 rrc: 4 type: EXT [0->4095] (req 0->4095) gid 0 flags: 0x70000000020020 nid: 192.168.201.49@tcp remote: 0xd45c001355c52671 expref: 3 pid: 12051 timeout: 0 lvb_type: 0 [ 3504.105780] LustreError: 18824:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout interrupted [ 3512.999636] Lustre: lustre-OST0000: Client c1774ea0-9f44-42ca-ba34-ee5a94a7733c (at 192.168.201.49@tcp) reconnecting [ 3513.005323] Lustre: Skipped 6 previous similar messages [ 3517.696978] Lustre: DEBUG MARKER: == recovery-small test 66: lock enqueue re-send vs client eviction ========================================================== 11:46:55 (1713282415) [ 3518.291300] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3518.294112] LustreError: 6931:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88008fbd5c00 x1796503248555136/t0(0) o101->cdeb3180-968b-44d2-b2f1-3708a4e3764f@192.168.201.49@tcp:232/0 lens 576/688 e 0 to 0 dl 1713282472 ref 1 fl Interpret:/200/0 rc 0/0 job:'stat.0' uid:0 gid:0 [ 3520.232701] LustreError: 6931:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout id 136 sleeping for 40000ms [ 3522.567980] Lustre: 1241:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting cdeb3180-968b-44d2-b2f1-3708a4e3764f at administrative request [ 3523.037821] LustreError: 6931:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout interrupted [ 3523.042304] LustreError: 6931:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) Skipped 1 previous similar message [ 3527.474805] Lustre: DEBUG MARKER: == recovery-small test 67: connect vs import invalidate race ========================================================== 11:47:05 (1713282425) [ 3529.864253] Lustre: 2021:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting cdeb3180-968b-44d2-b2f1-3708a4e3764f at administrative request [ 3546.036127] Lustre: DEBUG MARKER: == recovery-small test 100: IR: Make sure normal recovery still works w/o IR ========================================================== 11:47:23 (1713282443) [ 3547.494972] Lustre: Failing over lustre-OST0000 [ 3547.528569] Lustre: server umount lustre-OST0000 complete [ 3548.908161] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 3560.289143] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3560.296257] 
LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3562.240403] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3566.292794] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3566.876101] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3573.213770] Lustre: DEBUG MARKER: == recovery-small test 101a: IR: Make sure IR works w/o normal recovery ========================================================== 11:47:51 (1713282471) [ 3574.584185] Lustre: Failing over lustre-OST0000 [ 3574.608672] Lustre: server umount lustre-OST0000 complete [ 3587.405809] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3587.411840] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3587.515401] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 3587.537568] mount.lustre (5793) used greatest stack depth: 10032 bytes left [ 3589.341637] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3592.133153] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3592.713807] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3599.254480] Lustre: DEBUG MARKER: == recovery-small test 101b: IR: Make sure IR works w/o normal recovery and proceed EAGAIN ========================================================== 11:48:17 (1713282497) [ 3600.966224] Lustre: Failing over lustre-OST0000 [ 3600.982803] Lustre: server umount lustre-OST0000 complete [ 3613.875248] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3613.881243] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3613.977523] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 3613.987541] LustreError: 8126:0:(ofd_dev.c:651:ofd_prepare()) cfs_fail_timeout id 247 sleeping for 25000ms [ 3638.991852] LustreError: 8126:0:(ofd_dev.c:651:ofd_prepare()) cfs_fail_timeout id 247 awake [ 3640.753682] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3643.592486] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3644.160504] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3649.967975] Lustre: DEBUG MARKER: == recovery-small test 102: IR: New client gets updated nidtbl after MGS restart ========================================================== 11:49:07 (1713282547) [ 3651.372934] Lustre: Failing over lustre-OST0000 [ 3651.398305] Lustre: server umount lustre-OST0000 complete [ 3664.285583] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3664.291561] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3664.399241] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 3666.209644] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3669.048973] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3669.625075] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3672.721500] Lustre: Failing over lustre-MDT0000 [ 3672.810451] Lustre: server umount lustre-MDT0000 complete [ 3675.569995] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3675.628781] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3675.633458] LustreError: Skipped 1 previous similar message [ 3676.929596] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3678.205441] Lustre: Failing over lustre-OST0000 [ 3678.224905] Lustre: server umount lustre-OST0000 complete [ 3680.779925] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26915 to 0x2c0000401:26945) [ 3691.105456] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3691.108764] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3692.439492] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:26981 to 0x280000401:27009) [ 3693.013359] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3695.908247] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3702.285069] Lustre: DEBUG MARKER: == recovery-small test 103: IR: MDS can start w/o MGS and get updated nidtbl later ========================================================== 11:50:00 (1713282600) [ 3703.227138] Lustre: DEBUG MARKER: SKIP: recovery-small test_103 needs separate mgs and mds [ 3706.152970] Lustre: DEBUG MARKER: == recovery-small test 104: IR: ost can disable IR voluntarily ========================================================== 11:50:04 (1713282604) [ 3707.536906] Lustre: Failing over lustre-OST0000 [ 3707.563889] Lustre: server umount lustre-OST0000 complete [ 3710.750856] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3710.758643] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3712.686280] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3720.154013] Lustre: DEBUG MARKER: == recovery-small test 105: IR: NON IR clients support === 11:50:18 (1713282618) [ 3720.718711] Lustre: DEBUG MARKER: SKIP: recovery-small test_105 Needs multiple clients [ 3723.659873] Lustre: DEBUG MARKER: == recovery-small test 106: lightweight connection support ========================================================== 11:50:21 (1713282621) [ 3727.548599] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3728.315805] Lustre: Failing over lustre-MDT0000 [ 3728.404626] Lustre: server umount lustre-MDT0000 complete [ 3730.843852] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3730.848884] LustreError: Skipped 1 previous similar message [ 3742.636657] LDISKFS-fs (dm-0): recovery complete [ 3742.639297] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3743.862165] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3747.790345] LustreError: 18917:0:(ldlm_lockd.c:968:ldlm_server_blocking_ast()) ### BUG 6063: lock collide during recovery ns: mdt-lustre-MDT0000_UUID lock: ffff8801346b8240/0x348e48851cdbc6 lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x40200000000020 nid: 192.168.201.49@tcp remote: 0xd45c001355c52a68 expref: 7 pid: 21627 timeout: 0 lvb_type: 0 [ 3747.843639] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27011 to 0x280000401:27041) [ 3747.843775] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26915 to 0x2c0000401:26977) [ 3753.241686] Lustre: DEBUG MARKER: == recovery-small test 107: drop reint reply, then restart MDT ========================================================== 11:50:51 (1713282651) [ 3753.602478] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 3753.603639] LustreError: 21627:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a63fc380 x1796503248590848/t81604378628(0) o36->9467b408-d531-4462-aec5-64fbe90af9d6@192.168.201.49@tcp:467/0 lens 552/448 e 0 to 0 dl 1713282707 ref 1 fl Interpret:/200/0 rc 0/0 job:'mkdir.0' uid:0 gid:0 [ 3754.545207] Lustre: Failing over lustre-MDT0000 [ 3754.634305] Lustre: server umount lustre-MDT0000 complete [ 3767.472941] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3768.695380] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 3772.635008] Lustre: 10236:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880096101f80 x1796503248590848/t81604378628(0) o36->9467b408-d531-4462-aec5-64fbe90af9d6@192.168.201.49@tcp:486/0 lens 552/2880 e 0 to 0 dl 1713282726 ref 1 fl Interpret:/202/0 rc 0/0 job:'mkdir.0' uid:0 gid:0 [ 3772.655821] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27011 to 0x280000401:27073) [ 3772.655845] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26915 to 0x2c0000401:27009) [ 3773.492223] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3774.069545] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3779.974112] Lustre: DEBUG MARKER: == recovery-small test 108: client eviction doesn't crash == 11:51:17 (1713282677) [ 3780.369782] Lustre: 22073:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 9467b408-d531-4462-aec5-64fbe90af9d6 at administrative request [ 3790.431805] Lustre: DEBUG MARKER: == recovery-small test 110a: create remote directory: drop client req ========================================================== 11:51:28 (1713282688) [ 3791.527748] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 3791.530978] Lustre: Skipped 46 previous similar messages [ 3852.548170] Lustre: lustre-MDT0000: Client 9467b408-d531-4462-aec5-64fbe90af9d6 (at 192.168.201.49@tcp) reconnecting [ 3852.552349] Lustre: Skipped 2 previous similar messages [ 3857.686619] Lustre: DEBUG MARKER: == recovery-small test 110b: create remote directory: drop Master rep ========================================================== 11:52:35 (1713282755) [ 3858.113040] LustreError: 8076:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a07a5c00 x1796503248605056/t4295369251(0) o36->9467b408-d531-4462-aec5-64fbe90af9d6@192.168.201.49@tcp:571/0 lens 560/536 e 0 to 0 dl 1713282811 ref 1 fl Interpret:/200/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 3918.092187] Lustre: 10236:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a2734380 x1796503248605056/t4295369251(0) o36->9467b408-d531-4462-aec5-64fbe90af9d6@192.168.201.49@tcp:631/0 lens 560/2880 e 0 to 0 dl 1713282871 ref 1 fl Interpret:/202/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 3921.149623] Lustre: DEBUG MARKER: == recovery-small test 110c: create remote directory: drop update rep on slave MDT ========================================================== 11:53:39 (1713282819) [ 3937.405899] Lustre: 8074:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713282819/real 1713282819] req@ffff88009132c700 x1796503188166144/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 264/4320 e 0 to 1 dl 1713282835 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 3937.411349] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3937.414145] Lustre: Skipped 38 previous similar messages [ 3937.415987] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 3937.418285] Lustre: lustre-MDT0000-osp-MDT0001: 
Connection restored to 192.168.201.149@tcp (at 0@lo) [ 3937.420473] Lustre: Skipped 38 previous similar messages [ 3940.436187] Lustre: DEBUG MARKER: == recovery-small test 110d: remove remote directory: drop client req ========================================================== 11:53:58 (1713282838) [ 3940.700699] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 4003.737619] Lustre: DEBUG MARKER: == recovery-small test 110e: remove remote directory: drop master rep ========================================================== 11:55:01 (1713282901) [ 4004.041618] LustreError: 10236:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880129f18850 x1796503248620736/t4295369270(0) o36->9467b408-d531-4462-aec5-64fbe90af9d6@192.168.201.49@tcp:717/0 lens 496/456 e 0 to 0 dl 1713282957 ref 1 fl Interpret:/200/0 rc 0/0 job:'rm.0' uid:0 gid:0 [ 4004.046730] LustreError: 10236:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 4064.044502] Lustre: 21627:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800995a3100 x1796503248620736/t4295369270(0) o36->9467b408-d531-4462-aec5-64fbe90af9d6@192.168.201.49@tcp:22/0 lens 496/2888 e 0 to 0 dl 1713283017 ref 1 fl Interpret:/202/0 rc 0/0 job:'rm.0' uid:0 gid:0 [ 4068.826123] Lustre: DEBUG MARKER: == recovery-small test 110f: remove remote directory: drop slave rep ========================================================== 11:56:06 (1713282966) [ 4069.347356] Lustre: *** cfs_fail_loc=1701, val=2147483648*** [ 4069.350241] Lustre: Skipped 3 previous similar messages [ 4085.345919] Lustre: 8074:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713282967/real 1713282967] req@ffff880089bd8a80 x1796503188211840/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 1792/4320 e 0 to 1 dl 1713282983 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 4085.361445] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4090.297611] Lustre: DEBUG MARKER: == recovery-small test 110g: drop reply during migration ========================================================== 11:56:28 (1713282988) [ 4154.950142] Lustre: DEBUG MARKER: == recovery-small test 110h: drop update reply during cross-MDT file rename ========================================================== 11:57:32 (1713283052) [ 4171.508930] Lustre: 8074:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713283053/real 1713283053] req@ffff88006f79bb80 x1796503188241792/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 1816/4320 e 0 to 1 dl 1713283069 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 4171.524135] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4176.322384] Lustre: DEBUG MARKER: == recovery-small test 110i: drop update reply during cross-MDT dir rename ========================================================== 11:57:54 (1713283074) [ 4192.857996] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4197.732409] Lustre: DEBUG MARKER: == recovery-small test 110j: drop update reply during cross-MDT ln ========================================================== 11:58:15 (1713283095) [ 4214.236961] Lustre: 8074:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713283096/real 1713283096] req@ffff880099421180 
x1796503188259008/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 488/4320 e 0 to 1 dl 1713283112 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 4214.251830] Lustre: 8074:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 4214.257666] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4219.425156] Lustre: DEBUG MARKER: == recovery-small test 110k: FID_QUERY failed during recovery ========================================================== 11:58:37 (1713283117) [ 4220.264902] Lustre: Failing over lustre-MDT0001 [ 4220.376119] Lustre: server umount lustre-MDT0001 complete [ 4223.323857] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_statfs to node 0@lo failed: rc = -107 [ 4223.332359] LustreError: 137-5: lustre-MDT0001: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4223.343632] LustreError: Skipped 97 previous similar messages [ 4223.959329] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 4224.112699] Lustre: lustre-MDT0001: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 4224.121563] Lustre: *** cfs_fail_loc=1103, val=0*** [ 4224.126665] Lustre: lustre-MDT0001: in recovery but waiting for the first client to connect [ 4224.126942] Lustre: lustre-MDT0001: Aborting client recovery [ 4224.126948] LustreError: 31732:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0001: Aborting recovery [ 4224.139111] Lustre: Skipped 16 previous similar messages [ 4224.142137] Lustre: 31754:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4224.147851] Lustre: 31754:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 4226.142275] LustreError: 31753:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0000-osp-MDT0001: get update log duration 2, retries 0, failed: rc = -108 [ 4226.148702] Lustre: 31754:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0001: disconnect stale client lustre-MDT0000-mdtlov_UUID@ [ 4226.155227] Lustre: 31754:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 4226.159581] Lustre: lustre-MDT0001: disconnecting 1 stale clients [ 4226.163105] Lustre: 31754:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4226.169604] Lustre: lustre-MDT0001-osd: cancel update llog [0x240000400:0x1:0x0] [ 4226.177067] Lustre: lustre-MDT0000-osp-MDT0001: cancel update llog [0x200000401:0x1:0x0] [ 4226.214041] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:64388 to 0x280000400:64609) [ 4226.214756] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000402:2630 to 0x2c0000402:2689) [ 4226.227945] mount.lustre (31732) used greatest stack depth: 9888 bytes left [ 4227.391276] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 4229.121025] LustreError: 167-0: lustre-MDT0001-osp-MDT0000: This client was evicted by lustre-MDT0001; in progress operations using this service will fail. 
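The 'evicting <uuid> at administrative request' lines (tests 65-67 above, test 120 below) are manual evictions through obd_export_evict_by_uuid(), reachable as a per-target parameter on the server. A sketch with a UUID copied from the log; the exact parameter path is an assumption here and can vary by version and target type (mdt.* on an MDT, obdfilter.* on an OST):

    # administratively evict one client export from MDT0000
    lctl set_param mdt.lustre-MDT0000.evict_client=cdeb3180-968b-44d2-b2f1-3708a4e3764f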
[ 4229.122038] Lustre: Failing over lustre-MDT0001 [ 4229.133612] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 4229.206309] Lustre: server umount lustre-MDT0001 complete [ 4232.428523] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 4232.592886] Lustre: lustre-MDT0001: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 4233.747604] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 4234.621824] Lustre: lustre-MDT0001: Will be in recovery for at least 1:00, or until 1 client reconnects [ 4234.628263] Lustre: Skipped 10 previous similar messages [ 4234.631589] Lustre: lustre-MDT0001: Denying connection for new client da055218-4f1d-4cf9-a3fc-1f07906fb715 (at 192.168.201.49@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 4237.603952] Lustre: lustre-MDT0001: Recovery over after 0:03, of 1 clients 1 recovered and 0 were evicted. [ 4237.608526] Lustre: Skipped 10 previous similar messages [ 4237.633530] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:64388 to 0x280000400:64641) [ 4237.633533] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000402:2630 to 0x2c0000402:2721) [ 4250.562034] Lustre: DEBUG MARKER: == recovery-small test 110m: update resent vs original RPC race ========================================================== 11:59:08 (1713283148) [ 4251.370289] LustreError: 8086:0:(out_handler.c:1172:out_handle()) cfs_race id 525 sleeping [ 4255.292927] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4255.299498] LustreError: 27595:0:(service.c:1855:ptlrpc_server_request_add()) cfs_fail_race id 525 waking [ 4255.304066] LustreError: 8086:0:(out_handler.c:1172:out_handle()) cfs_fail_race id 525 awake: rc=1071 [ 4259.306528] LustreError: 27595:0:(out_handler.c:1172:out_handle()) cfs_fail_race id 525 waking [ 4263.921625] Lustre: DEBUG MARKER: == recovery-small test 111: mdd setup fail should not cause umount oops ========================================================== 11:59:21 (1713283161) [ 4264.927455] Lustre: Failing over lustre-MDT0000 [ 4264.935581] LustreError: 14946:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713283163 with bad export cookie 14793140911014344 [ 4264.937021] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4265.015280] Lustre: server umount lustre-MDT0000 complete [ 4268.056602] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4268.115191] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4268.120372] LustreError: Skipped 2 previous similar messages [ 4268.213744] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4268.216515] Lustre: Skipped 9 previous similar messages [ 4268.226447] Lustre: *** cfs_fail_loc=151, val=0*** [ 4268.227867] LustreError: 3514:0:(mdd_device.c:687:mdd_changelog_init()) lustre-MDD0000: changelog setup during init failed: rc = -5 [ 4268.231194] LustreError: 3514:0:(mdd_device.c:1402:mdd_prepare()) lustre-MDD0000: failed to initialize changelog: rc = -5 [ 4268.234477] LustreError: 3514:0:(tgt_mount.c:2223:server_fill_super()) Unable to start targets: -5 [ 4268.238145] Lustre: Failing over lustre-MDT0000 [ 4268.239977] LustreError: 3544:0:(llog_osd.c:983:llog_osd_next_block()) lustre-MDT0001-osp-MDT0000: can't read llog block from log [0x240000408:0x1:0x0] offset 32768: rc = -5 [ 4268.244458] LustreError: 3544:0:(llog.c:805:llog_process_thread()) lustre-MDT0001-osp-MDT0000 retry remote llog process [ 4268.249336] LustreError: 3544:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 0, retries 0, failed: rc = -11 [ 4268.331237] Lustre: server umount lustre-MDT0000 complete [ 4268.332838] LustreError: 3514:0:(super25.c:189:lustre_fill_super()) llite: Unable to mount : rc = -5 [ 4271.058021] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4271.116206] LustreError: 4066:0:(ldlm_resource.c:1128:ldlm_resource_complain()) MGC192.168.201.149@tcp: namespace resource [0x65727473756c:0x0:0x0].0x0 (ffff8800a20e5900) refcount nonzero (1) after lock cleanup; forcing cleanup. 
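Each 'Failing over <target>' / 'server umount <target> complete' pair throughout this log is a scripted restart of the backing target, not a crash: the target is unmounted, the LDISKFS mount line marks it coming back, and recovery (or an injected failure, as in test 111 above) follows. Roughly, per iteration, with illustrative device and mount-point names:

    umount /mnt/lustre-mds1                                   # 'Failing over ...' + 'server umount ... complete'
    mount -t lustre /dev/mapper/mds1_flakey /mnt/lustre-mds1  # 'LDISKFS-fs (dm-0): mounted filesystem ...'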
[ 4271.123255] LustreError: 6928:0:(mgc_request.c:627:do_requeue()) failed processing log: -5 [ 4272.292260] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 4276.257538] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27012 to 0x2c0000401:27041) [ 4276.257546] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27077 to 0x280000401:27105) [ 4277.577951] Lustre: DEBUG MARKER: == recovery-small test 112a: bulk resend while original request is in progress ========================================================== 11:59:35 (1713283175) [ 4278.130290] LustreError: 18679:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 sleeping for 20000ms [ 4298.134833] LustreError: 18679:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 awake [ 4303.232823] Lustre: DEBUG MARKER: == recovery-small test 115a: read: late REQ MDunlink and no bulk ========================================================== 12:00:01 (1713283201) [ 4311.995702] Lustre: DEBUG MARKER: == recovery-small test 115b: write: late REQ MDunlink and no bulk ========================================================== 12:00:09 (1713283209) [ 4316.106217] Lustre: *** cfs_fail_loc=215, val=2*** [ 4320.614422] Lustre: DEBUG MARKER: == recovery-small test 115c: read: late Reply MDunlink and no bulk ========================================================== 12:00:18 (1713283218) [ 4326.722809] Lustre: DEBUG MARKER: == recovery-small test 115d: write: late Reply MDunlink and no bulk ========================================================== 12:00:24 (1713283224) [ 4332.819416] Lustre: DEBUG MARKER: == recovery-small test 115e: read: late Bulk MDunlink and no reply ========================================================== 12:00:30 (1713283230) [ 4334.780969] LustreError: 17320:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a206df80 x1796503248669056/t0(0) o400->da055218-4f1d-4cf9-a3fc-1f07906fb715@192.168.201.49@tcp:249/0 lens 224/224 e 0 to 0 dl 1713283244 ref 1 fl Interpret:H/200/0 rc 0/0 job:'kworker.0' uid:0 gid:0 [ 4334.797628] LustreError: 17320:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 5 previous similar messages [ 4338.818877] Lustre: DEBUG MARKER: == recovery-small test 115f: read: late REQ MDunlink and no reply ========================================================== 12:00:36 (1713283236) [ 4347.424270] Lustre: DEBUG MARKER: == recovery-small test 115g: read: late REQ MDunlink and Reply MDunlink ========================================================== 12:00:45 (1713283245) [ 4357.395970] Lustre: 3494:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713283239/real 1713283239] req@ffff88012fe2b100 x1796503188314240/t0(0) o400->lustre-OST0001-osc-MDT0001@0@lo:28/4 lens 224/224 e 0 to 1 dl 1713283255 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 4410.542266] Lustre: DEBUG MARKER: == recovery-small test 120: flock race: completion vs. 
evict ========================================================== 12:01:48 (1713283308) [ 4412.958113] Lustre: 11113:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting da055218-4f1d-4cf9-a3fc-1f07906fb715 at administrative request [ 4418.996360] Lustre: 11182:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting da055218-4f1d-4cf9-a3fc-1f07906fb715 at administrative request [ 4427.031163] Lustre: 11252:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting da055218-4f1d-4cf9-a3fc-1f07906fb715 at administrative request [ 4431.045643] Lustre: 11319:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting da055218-4f1d-4cf9-a3fc-1f07906fb715 at administrative request [ 4439.657687] Lustre: 11392:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting da055218-4f1d-4cf9-a3fc-1f07906fb715 at administrative request [ 4451.782295] Lustre: 11530:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting da055218-4f1d-4cf9-a3fc-1f07906fb715 at administrative request [ 4451.788817] Lustre: 11530:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 1 previous similar message [ 4475.287905] Lustre: DEBUG MARKER: == recovery-small test 113: ldlm enqueue dropped reply should not cause deadlocks ========================================================== 12:02:53 (1713283373) [ 4505.833563] Lustre: lustre-MDT0000: Client da055218-4f1d-4cf9-a3fc-1f07906fb715 (at 192.168.201.49@tcp) reconnecting [ 4505.839871] Lustre: Skipped 5 previous similar messages [ 4514.383782] Lustre: DEBUG MARKER: == recovery-small test 130a: enqueue resend on a nonexistent file ========================================================== 12:03:32 (1713283412) [ 4514.940424] LustreError: 8076:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 sleeping for 10000ms [ 4524.945855] LustreError: 8076:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4549.447310] Lustre: DEBUG MARKER: == recovery-small test 130b: enqueue resend on a stale inode ========================================================== 12:04:07 (1713283447) [ 4550.146696] LustreError: 12734:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 sleeping for 10000ms [ 4560.151880] LustreError: 12734:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4580.151418] Lustre: *** cfs_fail_loc=217, val=0*** [ 4583.749357] Lustre: DEBUG MARKER: == recovery-small test 130c: layout intent resend on a stale inode ========================================================== 12:04:41 (1713283481) [ 4586.151683] LustreError: 10236:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 sleeping for 10000ms [ 4596.157845] LustreError: 10236:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4610.357969] Lustre: DEBUG MARKER: == recovery-small test 132: long punch =================== 12:05:08 (1713283508) [ 4610.729446] LustreError: 18679:0:(ofd_dev.c:2089:ofd_punch_hdl()) cfs_fail_timeout id 236 sleeping for 120000ms [ 4682.730876] Lustre: ll_ost_io00_004: service thread pid 18679 was inactive for 72.001 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [ 4682.736207] Pid: 18679, comm: ll_ost_io00_004 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 4682.738454] Call Trace: [ 4682.738960] [<0>] __cfs_fail_timeout_set+0xe9/0x210 [libcfs] [ 4682.741218] [<0>] ofd_punch_hdl+0xa8c/0xb40 [ofd] [ 4682.743323] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 4682.745458] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 4682.747840] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 4682.749684] [<0>] kthread+0xe4/0xf0 [ 4682.751119] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 4682.752744] [<0>] 0xfffffffffffffffe [ 4730.733788] LustreError: 18679:0:(ofd_dev.c:2089:ofd_punch_hdl()) cfs_fail_timeout id 236 awake [ 4734.366382] Lustre: DEBUG MARKER: == recovery-small test 131: IO vs evict results in IO under stale lock ========================================================== 12:07:12 (1713283632) [ 4735.989422] Lustre: 16613:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting da055218-4f1d-4cf9-a3fc-1f07906fb715 at administrative request [ 4735.993737] Lustre: 16613:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 2 previous similar messages [ 4735.996742] LustreError: 6919:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) cfs_fail_timeout id 31e sleeping for 4000ms [ 4738.899829] LustreError: 6919:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) cfs_fail_timeout interrupted [ 4741.621681] Lustre: DEBUG MARKER: == recovery-small test 133: don't fail on flock resend === 12:07:19 (1713283639) [ 4742.967146] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 4742.968707] Lustre: Skipped 9 previous similar messages [ 4784.637166] Lustre: DEBUG MARKER: == recovery-small test 134: race between failover and search for reply data free slot ========================================================== 12:08:02 (1713283682) [ 4784.993176] Lustre: DEBUG MARKER: SKIP: recovery-small test_134 Need 2+ clients, have 1 [ 4786.846432] Lustre: DEBUG MARKER: == recovery-small test 135: DOM: open/create resend to return size ========================================================== 12:08:04 (1713283684) [ 4817.208408] Lustre: 8076:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a2191f80 x1796503248748224/t12884901906(0) o101->da055218-4f1d-4cf9-a3fc-1f07906fb715@192.168.201.49@tcp:745/0 lens 648/3488 e 0 to 0 dl 1713283740 ref 1 fl Interpret:/202/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 4817.213889] Lustre: 8076:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 4819.657105] Lustre: DEBUG MARKER: SKIP: recovery-small test_136 skipping excluded test 136 [ 4820.937488] Lustre: DEBUG MARKER: == recovery-small test 137: late resend must be skipped if already applied ========================================================== 12:08:39 (1713283719) [ 4822.372360] LustreError: 6930:0:(mdt_reint.c:855:mdt_reint_setattr()) cfs_race id 525 sleeping [ 4827.376845] LustreError: 6930:0:(mdt_reint.c:855:mdt_reint_setattr()) cfs_fail_race id 525 awake: rc=0 [ 4827.393008] LustreError: 6930:0:(mdt_reint.c:855:mdt_reint_setattr()) cfs_fail_race id 525 waking [ 4853.433598] Lustre: DEBUG MARKER: == recovery-small test 138: Umount MDT during recovery === 12:09:11 (1713283751) [ 4854.559312] Lustre: Failing over lustre-MDT0000 [ 4854.569827] LustreError: 20272:0:(lod_dev.c:1129:lod_process_config()) cfs_fail_timeout id 724 sleeping for 10000ms [ 4857.148325] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations 
using this service will wait for recovery to complete [ 4857.149504] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4857.159368] Lustre: Skipped 16 previous similar messages [ 4864.574999] LustreError: 20272:0:(lod_dev.c:1129:lod_process_config()) cfs_fail_timeout id 724 awake [ 4864.674224] Lustre: server umount lustre-MDT0000 complete [ 4867.163667] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4867.166747] LustreError: Skipped 8 previous similar messages [ 4877.656252] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4877.718466] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4877.725849] LustreError: Skipped 1 previous similar message [ 4877.823703] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4877.826115] Lustre: Skipped 1 previous similar message [ 4877.841698] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 4877.843944] Lustre: Skipped 4 previous similar messages [ 4878.940895] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 4882.831388] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to 192.168.201.149@tcp (at 0@lo) [ 4882.837142] Lustre: Skipped 15 previous similar messages [ 4882.934875] LustreError: 20849:0:(lod_dev.c:475:lod_sub_recovery_thread()) cfs_fail_timeout id 724 awake [ 4918.540897] LustreError: 20849:0:(lod_dev.c:475:lod_sub_recovery_thread()) cfs_fail_timeout id 724 awake [ 4918.546133] LustreError: 20849:0:(lod_dev.c:475:lod_sub_recovery_thread()) Skipped 6 previous similar messages [ 4935.376377] Lustre: Failing over lustre-MDT0000 [ 4937.844832] Lustre: 20850:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4937.849994] Lustre: 20850:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 1 previous similar message [ 4937.915793] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4937.919040] Lustre: Skipped 10 previous similar messages [ 4938.950862] LustreError: 20849:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 61, retries 11, failed: rc = -5 [ 4938.958959] Lustre: 20850:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4938.983235] Lustre: 20850:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 4945.572977] Lustre: server umount lustre-MDT0000 complete [ 4948.850236] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4949.980955] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 4950.623827] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 4950.626327] Lustre: Skipped 1 previous similar message [ 4950.627459] Lustre: lustre-MDT0000: Denying connection for new client e2bd754c-0a2f-40b0-b3a8-1ab4363f5759 (at 192.168.201.49@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 4954.006135] Lustre: lustre-MDT0000: Recovery over after 0:03, of 1 clients 1 recovered and 0 were evicted. 
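Recovery progress like 'Will be in recovery for at least 1:00' and 'Recovery over after 0:03' above can be polled on the server through the recovery_status parameter, the same name the _wait_recovery_complete marker below reads:

    lctl get_param mdt.lustre-MDT0000.recovery_status
    # reports the status (RECOVERING/COMPLETE), counts of connected,
    # completed and evicted clients, and the time left in the recovery window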
[ 4954.010593] Lustre: Skipped 1 previous similar message [ 4954.033508] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27120 to 0x280000401:27137) [ 4954.033518] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27054 to 0x2c0000401:27073) [ 4959.310708] Lustre: DEBUG MARKER: == recovery-small test 139: corrupted catid won't cause crash ========================================================== 12:10:57 (1713283857) [ 4959.982470] Lustre: Failing over lustre-MDT0000 [ 4960.057986] Lustre: server umount lustre-MDT0000 complete [ 4962.707427] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4962.788668] Lustre: *** cfs_fail_loc=2106, val=104*** [ 4962.789919] LustreError: 23934:0:(osp_sync.c:1415:osp_sync_llog_init()) lustre-OST0000-osc-MDT0000: the catid [0x0:0x68:0x0] for init llog 0 is bad [ 4963.536408] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 4967.495943] Lustre: DEBUG MARKER: == recovery-small test 140a: local mount is flagged properly ========================================================== 12:11:05 (1713283865) [ 4967.837778] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27054 to 0x2c0000401:27105) [ 4967.838261] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27120 to 0x280000401:27169) [ 4968.517438] Lustre: lustre-MDT0000: local client 78412ac1-1ceb-452c-8bcc-8012f53f7f6d w/o recovery [ 4968.529539] Lustre: Mounted lustre-client [ 4969.084864] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 4970.185777] Lustre: Unmounted lustre-client [ 4971.133465] Lustre: Mounted lustre-client [ 4971.625677] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 4972.652905] Lustre: Unmounted lustre-client [ 4976.972724] Lustre: DEBUG MARKER: == recovery-small test 140b: local mount is excluded from recovery ========================================================== 12:11:14 (1713283874) [ 4978.059620] Lustre: lustre-MDT0000: local client 97e9ee0f-8c96-4c08-b994-20539edf0bae w/o recovery [ 4978.061539] Lustre: Skipped 2 previous similar messages [ 4978.066428] Lustre: Mounted lustre-client [ 4978.738949] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 4981.240183] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4982.267863] Lustre: Unmounted lustre-client [ 4983.282521] Lustre: Failing over lustre-MDT0000 [ 4983.361557] Lustre: server umount lustre-MDT0000 complete [ 4987.867700] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 4987.872238] LustreError: Skipped 2 previous similar messages [ 4997.488521] LDISKFS-fs (dm-0): recovery complete [ 4997.490780] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4998.685992] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 5002.642037] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27120 to 0x280000401:27201) [ 5002.642102] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27054 to 0x2c0000401:27137) [ 5003.412093] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5003.996220] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5010.467519] Lustre: DEBUG MARKER: == recovery-small test 141: do not lose locks on MGS restart ========================================================== 12:11:48 (1713283908) [ 5011.388887] Lustre: DEBUG MARKER: SKIP: recovery-small test_141 cannot run in local mode or from build tree [ 5014.269785] Lustre: DEBUG MARKER: == recovery-small test 142: orphan name stub can be cleaned up in startup ========================================================== 12:11:52 (1713283912) [ 5014.634335] Lustre: *** cfs_fail_loc=165, val=0*** [ 5015.302203] Lustre: Failing over lustre-MDT0000 [ 5015.385029] Lustre: server umount lustre-MDT0000 complete [ 5018.312665] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5019.035172] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all [ 5023.436510] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27120 to 0x280000401:27233) [ 5023.436525] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27139 to 0x2c0000401:27169) [ 5023.437269] LustreError: 433:0:(osd_handler.c:297:osd_idc_find_or_init()) can't lookup: rc = -2 [ 5024.401166] Lustre: DEBUG MARKER: == recovery-small test 143: orphan cleanup thread shouldn't be blocked even delete failed ========================================================== 12:12:02 (1713283922) [ 5025.082098] Lustre: Failing over lustre-MDT0000 [ 5025.165024] Lustre: server umount lustre-MDT0000 complete [ 5027.525653] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) [ 5030.488225] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
[ 5031.402494] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5032.630278] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 5035.628082] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27139 to 0x2c0000401:27201)
[ 5035.628110] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27120 to 0x280000401:27265)
[ 5042.358151] Lustre: DEBUG MARKER: == recovery-small test 144a: MDT failover should stop precreation threads ========================================================== 12:12:20 (1713283940)
[ 5044.453640] Lustre: Failing over lustre-OST0000
[ 5044.492360] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 5044.494268] Lustre: Skipped 4 previous similar messages
[ 5044.521888] Lustre: server umount lustre-OST0000 complete
[ 5057.436293] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5057.442480] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5059.384549] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5062.116744] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 5062.538144] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 5124.562933] Lustre: Failing over lustre-MDT0000
[ 5124.811414] Lustre: server umount lustre-MDT0000 complete
[ 5136.992670] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5138.289254] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5142.168368] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:52362 to 0x280000401:52385)
[ 5142.168386] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:52106 to 0x2c0000401:52129)
[ 5142.730896] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 5143.166382] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 5145.222086] Lustre: Failing over lustre-MDT0000
[ 5145.317249] Lustre: server umount lustre-MDT0000 complete
[ 5158.289663] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
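The wait_import_state_mount markers show the harness polling the client's import state until it reports FULL (or IDLE), then logging how long that took ("in FULL state after 0 sec"). A sketch of such a poll loop, assuming the state can be read from a file (the path is illustrative; the real harness reads it via lctl get_param):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Poll an import-state file once a second until it reports FULL or
     * IDLE, or give up after timeout_sec seconds. */
    static int wait_import_state(const char *path, int timeout_sec)
    {
        for (int i = 0; i < timeout_sec; i++) {
            char buf[64] = "";
            FILE *f = fopen(path, "r");

            if (f) {
                if (fgets(buf, sizeof(buf), f) &&
                    (strstr(buf, "FULL") || strstr(buf, "IDLE"))) {
                    fclose(f);
                    printf("in FULL state after %d sec\n", i);
                    return 0;
                }
                fclose(f);
            }
            sleep(1);
        }
        return -1; /* never reached the expected state */
    }

    int main(void)
    {
        return wait_import_state("/tmp/import_state", 5) ? 1 : 0;
    }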
[ 5159.592548] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5163.484854] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:52106 to 0x2c0000401:52161)
[ 5163.485062] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:52362 to 0x280000401:52417)
[ 5164.306598] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 5164.893373] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 5181.722651] Lustre: DEBUG MARKER: == recovery-small test 144b: orphan cleanup shouldn't be blocked for no objects+failover situation ========================================================== 12:14:39 (1713284079)
[ 5183.894920] Lustre: Failing over lustre-OST0000
[ 5184.000468] Lustre: lustre-OST0000: Not available for connect from 192.168.201.49@tcp (stopping)
[ 5184.343084] Lustre: server umount lustre-OST0000 complete
[ 5197.064960] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5197.073664] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5198.733340] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5201.977895] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 5202.790283] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 5203.609630] LustreError: 8076:0:(lod_qos.c:1401:lod_ost_alloc_specific()) can't lstripe objid [0x20000d6f1:0x51:0x0]: have 494 want 1000
[ 5204.181333] LustreError: 6930:0:(lod_qos.c:1401:lod_ost_alloc_specific()) can't lstripe objid [0x20000d6f1:0x55:0x0]: have 494 want 1000
[ 5204.183744] LustreError: 6930:0:(lod_qos.c:1401:lod_ost_alloc_specific()) Skipped 3 previous similar messages
[ 5205.205077] LustreError: 6930:0:(lod_qos.c:1401:lod_ost_alloc_specific()) can't lstripe objid [0x20000d6f1:0x61:0x0]: have 494 want 1000
[ 5205.207511] LustreError: 6930:0:(lod_qos.c:1401:lod_ost_alloc_specific()) Skipped 11 previous similar messages
[ 5274.199407] Lustre: DEBUG MARKER: == recovery-small test 144c: reconnection during orphan cleanup shouldn't lose LAST_ID synchronization ========================================================== 12:16:12 (1713284172)
[ 5275.967107] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x280000401 to 0x280000bd0
[ 5305.400046] Lustre: Failing over lustre-MDT0000
[ 5305.920024] Lustre: server umount lustre-MDT0000 complete
[ 5309.074591] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
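The "can't lstripe objid ...: have 494 want 1000" errors in test 144b show specific-stripe allocation failing when fewer objects are available than the requested stripe count. A toy version of that admission check (illustrative only, not lod_qos.c):

    #include <stdio.h>

    /* Refuse a layout that asks for more stripes than can be allocated
     * right now, mirroring "have 494 want 1000": the caller gets an
     * error instead of a silently degraded layout. */
    static int alloc_specific(int stripes_wanted, int objects_available)
    {
        if (objects_available < stripes_wanted) {
            fprintf(stderr, "can't lstripe: have %d want %d\n",
                    objects_available, stripes_wanted);
            return -1;
        }
        /* ... allocate one object per stripe here ... */
        return 0;
    }

    int main(void)
    {
        return alloc_specific(1000, 494) ? 1 : 0;
    }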
[ 5310.214773] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5311.533932] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 5314.252583] LustreError: 3368:0:(ofd_dev.c:1523:ofd_create_hdl()) cfs_fail_timeout id 254 sleeping for 5000ms
[ 5314.257097] LustreError: 3368:0:(ofd_dev.c:1523:ofd_create_hdl()) Skipped 14 previous similar messages
[ 5317.897164] Lustre: lustre-OST0000: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting
[ 5317.901314] Lustre: Skipped 5 previous similar messages
[ 5318.251821] LustreError: 3377:0:(ofd_dev.c:1523:ofd_create_hdl()) cfs_fail_timeout interrupted
[ 5318.255823] LustreError: 3377:0:(ofd_dev.c:1528:ofd_create_hdl()) lustre-OST0000: dropping old orphan cleanup request
[ 5318.260836] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:65169 to 0x2c0000401:65536)
[ 5318.260959] LustreError: 11227:0:(osp_precreate.c:992:osp_precreate_cleanup_orphans()) lustre-OST0000-osc-MDT0000: cannot cleanup orphans: rc = -116
[ 5318.373620] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x2c0000401 to 0x2c0000403
[ 5319.261384] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000bd0:8876 to 0x280000bd0:8961)
[ 5334.082286] Lustre: DEBUG MARKER: == recovery-small test 145: connect mdtlovs and process update logs after recovery expire ========================================================== 12:17:12 (1713284232)
[ 5334.404468] Lustre: DEBUG MARKER: SKIP: recovery-small test_145 needs >= 3 MDTs
[ 5337.067375] Lustre: DEBUG MARKER: == recovery-small test 146: test eviction is counted properly ========================================================== 12:17:14 (1713284234)
[ 5337.804882] Lustre: 13305:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting e2bd754c-0a2f-40b0-b3a8-1ab4363f5759 at administrative request
[ 5342.597119] Lustre: DEBUG MARKER: == recovery-small test 147: Check client reconnect ======= 12:17:20 (1713284240)
[ 5343.343539] Lustre: *** cfs_fail_loc=225, val=0***
[ 5499.419786] Lustre: lustre-OST0000: haven't heard from client e2bd754c-0a2f-40b0-b3a8-1ab4363f5759 (at 192.168.201.49@tcp) in 156 seconds. I think it's dead, and I am evicting it. exp ffff88009d78a800, cur 1713284397 expire 1713284367 last 1713284241
[ 5499.429417] Lustre: Skipped 1 previous similar message
[ 5511.131049] Lustre: DEBUG MARKER: == recovery-small test 148: data corruption through resend ========================================================== 12:20:09 (1713284409)
[ 5523.405617] Lustre: lustre-MDT0001: haven't heard from client lustre-MDT0001-lwp-OST0001_UUID (at 0@lo) in 33 seconds. I think it's dead, and I am evicting it. exp ffff880099532000, cur 1713284421 expire 1713284391 last 1713284388
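The eviction messages above print their own arithmetic: a client is declared dead once the current time passes the export's expiry deadline, computed from when the server last heard from it ("cur 1713284397 expire 1713284367 last 1713284241", i.e. 156 seconds of silence). A compact sketch of that staleness test; the timeout value here is inferred from those numbers and is illustrative:

    #include <stdio.h>
    #include <stdint.h>

    /* Evict a client we have not heard from within the allowed window.
     * cur/last values are taken from the log line above. */
    static int client_is_dead(uint64_t now, uint64_t last_heard,
                              uint64_t timeout)
    {
        return now - last_heard > timeout;
    }

    int main(void)
    {
        uint64_t cur = 1713284397, last = 1713284241;

        if (client_is_dead(cur, last, 126))
            printf("haven't heard from client in %llu seconds. "
                   "I think it's dead, and I am evicting it.\n",
                   (unsigned long long)(cur - last));
        return 0;
    }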
[ 5538.668719] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 5538.668878] LustreError: 166-1: MGC192.168.201.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 5538.668879] LustreError: Skipped 8 previous similar messages
[ 5538.670836] Lustre: lustre-MDT0001-lwp-OST0000: Connection restored to 192.168.201.149@tcp (at 0@lo)
[ 5538.670837] Lustre: Skipped 39 previous similar messages
[ 5538.670923] Lustre: Evicted from MGS (at 192.168.201.149@tcp) after server handle changed from 0x348e48852a5465 to 0x348e48852a6616
[ 5538.682768] Lustre: Skipped 41 previous similar messages
[ 5540.349783] LustreError: 18679:0:(tgt_handler.c:2880:tgt_brw_write()) cfs_fail_timeout id 227 awake
[ 5540.353575] LustreError: 18679:0:(tgt_handler.c:2880:tgt_brw_write()) Skipped 5 previous similar messages
[ 5547.124798] Lustre: DEBUG MARKER: == recovery-small test 149: skip orphan removal at umount ========================================================== 12:20:45 (1713284445)
[ 5548.304925] Lustre: lustre-MDT0001: Not available for connect from 192.168.201.49@tcp (stopping)
[ 5554.329204] Lustre: server umount lustre-MDT0001 complete
[ 5558.317255] LustreError: 137-5: lustre-MDT0001: not available for connect from 192.168.201.49@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 5558.325244] LustreError: Skipped 78 previous similar messages
[ 5558.384884] Lustre: server umount lustre-MDT0000 complete
[ 5560.699226] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5560.936586] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 5560.941536] Lustre: Skipped 10 previous similar messages
[ 5560.964172] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000bd0:8965 to 0x280000bd0:8993)
[ 5560.967140] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:33)
[ 5562.067298] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5564.638252] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5564.806551] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000402:2630 to 0x2c0000402:2753)
[ 5564.806559] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:64388 to 0x280000400:64673)
[ 5565.763660] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5578.823504] Lustre: DEBUG MARKER: == recovery-small test 150: statfs when MDT0 offline with lazystatfs option ========================================================== 12:21:16 (1713284476)
[ 5579.271948] Lustre: Failing over lustre-MDT0000
[ 5579.326288] Lustre: server umount lustre-MDT0000 complete
[ 5583.469018] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
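The "Evicted from MGS ... after server handle changed from 0x348e48852a5465 to 0x348e48852a6616" line shows how a client notices that the server restarted behind its back: the connection handle (cookie) reported by the server no longer matches the one saved at connect time. A minimal sketch of that comparison (illustrative structures, not the ptlrpc import code):

    #include <stdio.h>
    #include <stdint.h>

    /* The saved handle identifies the server incarnation we connected
     * to; a mismatch means the server restarted and our state is gone,
     * so the client treats itself as evicted and reconnects cleanly. */
    static int server_restarted(uint64_t saved_handle, uint64_t reported_handle)
    {
        return saved_handle != reported_handle;
    }

    int main(void)
    {
        uint64_t old_handle = 0x348e48852a5465;  /* from the log */
        uint64_t cur_handle = 0x348e48852a6616;

        if (server_restarted(old_handle, cur_handle))
            printf("Evicted from MGS after server handle changed "
                   "from 0x%llx to 0x%llx\n",
                   (unsigned long long)old_handle,
                   (unsigned long long)cur_handle);
        return 0;
    }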
[ 5583.645458] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 5583.649022] Lustre: Skipped 12 previous similar messages
[ 5584.726984] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5586.190342] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 5588.364514] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 5588.366592] Lustre: Skipped 9 previous similar messages
[ 5588.646193] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted.
[ 5588.647939] Lustre: Skipped 9 previous similar messages
[ 5588.659705] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000bd0:8965 to 0x280000bd0:9025)
[ 5588.662565] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:65)
[ 5595.345637] Lustre: DEBUG MARKER: == recovery-small test 152: QoS object allocation could be awakened in case of OST failover ========================================================== 12:21:33 (1713284493)
[ 5596.510583] Lustre: DEBUG MARKER: SKIP: recovery-small test_152 MDS Linux kernel does not support killable semaphore
[ 5598.742538] Lustre: DEBUG MARKER: == recovery-small test 153: evict vs reconnect race ====== 12:21:36 (1713284496)
[ 5619.684859] Lustre: 3491:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713284501/real 1713284501] req@ffff88012f1bc700 x1796503193676672/t0(0) o400->lustre-MDT0000-lwp-MDT0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713284517 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 5619.699490] Lustre: lustre-MDT0000: Received new LWP connection from 0@lo, keep former export from same NID
[ 5619.704034] Lustre: *** cfs_fail_loc=174, val=0***
[ 5619.706167] Lustre: Skipped 1 previous similar message
[ 5622.593986] Lustre: Failing over lustre-MDT0000
[ 5622.689201] Lustre: server umount lustre-MDT0000 complete
[ 5623.707722] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 5623.712531] LustreError: Skipped 8 previous similar messages
[ 5626.137531] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
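The ptlrpc_expire_one_request() line carries the request's own timing fields: "sent 1713284501" and deadline "dl 1713284517". The expiry scan simply compares the deadline against the current time. A sketch of that check, using the values printed in the log (simplified struct, not the real ptlrpc_request):

    #include <stdio.h>
    #include <stdint.h>

    /* Each in-flight request records when it was sent and when a reply
     * is due; it has timed out once "now" passes the deadline. */
    struct request { uint64_t sent; uint64_t deadline; };

    static int request_expired(const struct request *req, uint64_t now)
    {
        return now > req->deadline;
    }

    int main(void)
    {
        struct request req = { .sent = 1713284501, .deadline = 1713284517 };

        if (request_expired(&req, 1713284518))
            printf("@@@ Request sent has timed out for slow reply: "
                   "[sent %llu] dl %llu\n",
                   (unsigned long long)req.sent,
                   (unsigned long long)req.deadline);
        return 0;
    }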
[ 5627.153393] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5628.134461] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 5631.337026] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000bd0:8965 to 0x280000bd0:9057)
[ 5631.337041] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:97)
[ 5636.309835] Lustre: 3494:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713284518/real 1713284518] req@ffff880089bd9180 x1796503193680896/t0(0) o400->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 224/224 e 0 to 1 dl 1713284534 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 5636.323635] Lustre: 3494:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 10 previous similar messages
[ 5637.350243] Lustre: DEBUG MARKER: == recovery-small test 154a: corruption update llog can be skipped ========================================================== 12:22:15 (1713284535)
[ 5637.794764] Lustre: Failing over lustre-MDT0001
[ 5637.851601] Lustre: server umount lustre-MDT0001 complete
[ 5640.132514] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)
[ 5643.158605] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5644.375367] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5645.682052] Lustre: Failing over lustre-MDT0000
[ 5645.741228] Lustre: server umount lustre-MDT0000 complete
[ 5648.420436] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5649.647079] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5651.097938] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 20
[ 5653.590405] LustreError: 26249:0:(llog_osd.c:268:llog_osd_read_header()) lustre-MDT0001-osp-MDT0000: bad log [0x240000408:0x1:0x0] header magic: 0x283fdff1 (expected 0x10645539)
[ 5653.599101] Lustre: 26249:0:(lod_sub_object.c:981:lod_sub_prep_llog()) lustre-MDT0000-mdtlov: renew invalid update log [0x240000408:0x1:0x0]: rc = -22
[ 5653.610115] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000400:64388 to 0x280000400:64705)
[ 5653.610170] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000402:2630 to 0x2c0000402:2785)
[ 5653.652568] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000bd0:8965 to 0x280000bd0:9089)
[ 5653.652612] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:129)
[ 5660.936029] Lustre: DEBUG MARKER: == recovery-small test 154b: restore update llog after failed recovery ========================================================== 12:22:38 (1713284558)
[ 5661.604421] Lustre: Failing over lustre-MDT0000
[ 5661.674760] Lustre: server umount lustre-MDT0000 complete
[ 5665.055469] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
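Test 154a demonstrates llog corruption being detected and skipped: llog_osd_read_header() rejects a header whose magic is 0x283fdff1 instead of the expected 0x10645539, and the caller invalidates the log with rc = -22 (-EINVAL) rather than parsing garbage. A minimal userspace sketch of that magic check, with the expected value taken directly from the log line:

    #include <stdio.h>
    #include <stdint.h>

    #define LLOG_HDR_MAGIC 0x10645539u  /* expected magic, per the log */

    /* Reading a log header starts with a magic check; a corrupted
     * header is reported and the log invalidated, not parsed. */
    static int llog_check_header(uint32_t magic)
    {
        if (magic != LLOG_HDR_MAGIC) {
            fprintf(stderr, "bad log header magic: 0x%x (expected 0x%x)\n",
                    magic, LLOG_HDR_MAGIC);
            return -22; /* -EINVAL, matching "rc = -22" above */
        }
        return 0;
    }

    int main(void)
    {
        return llog_check_header(0x283fdff1) ? 1 : 0;
    }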
[ 5665.212579] LustreError: 28339:0:(lod_dev.c:475:lod_sub_recovery_thread()) cfs_fail_timeout id 724 sleeping for 5000ms
[ 5665.215788] LustreError: 28339:0:(lod_dev.c:475:lod_sub_recovery_thread()) Skipped 1 previous similar message
[ 5665.218940] Lustre: lustre-MDT0000: Aborting client recovery
[ 5665.220179] LustreError: 28310:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery
[ 5665.222356] Lustre: 28340:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 5665.226343] Lustre: 28340:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 1 previous similar message
[ 5670.217809] LustreError: 28339:0:(lod_dev.c:475:lod_sub_recovery_thread()) cfs_fail_timeout id 724 awake
[ 5670.222466] LustreError: 28339:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 5, retries 0, failed: rc = -5
[ 5670.229241] Lustre: 28340:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client e2bd754c-0a2f-40b0-b3a8-1ab4363f5759@
[ 5670.237739] Lustre: lustre-MDT0000: disconnecting 2 stale clients
[ 5670.241854] Lustre: 28340:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 5670.248628] Lustre: lustre-MDT0000-osd: cancel update llog [0x200009870:0x1:0x0]
[ 5670.289196] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:161)
[ 5670.289210] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000bd0:8965 to 0x280000bd0:9121)
[ 5671.385481] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5672.820754] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 30
[ 5680.272164] Lustre: DEBUG MARKER: == recovery-small test 155: failover after client remount ========================================================== 12:22:58 (1713284578)
[ 5683.755270] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 5684.481753] Lustre: Failing over lustre-MDT0000
[ 5684.568009] Lustre: server umount lustre-MDT0000 complete
[ 5698.877915] LDISKFS-fs (dm-0): recovery complete
[ 5698.879343] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
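When recovery is aborted in test 154b, exports that never reconnected are walked and dropped ("disconnect stale client ...", "disconnecting 2 stale clients"). A sketch of that sweep over illustrative export records (not the kernel's export list handling):

    #include <stdio.h>

    /* An export is stale if its client never reconnected before
     * recovery was aborted; stale exports are disconnected and counted. */
    struct export { const char *uuid; int reconnected; };

    static int disconnect_stale_exports(struct export *exports, int n)
    {
        int evicted = 0;

        for (int i = 0; i < n; i++) {
            if (!exports[i].reconnected) {
                printf("disconnect stale client %s\n", exports[i].uuid);
                evicted++;
            }
        }
        printf("disconnecting %d stale clients\n", evicted);
        return evicted;
    }

    int main(void)
    {
        struct export exports[] = {
            { "e2bd754c-0a2f-40b0-b3a8-1ab4363f5759", 0 },  /* uuid from the log */
            { "78412ac1-1ceb-452c-8bcc-8012f53f7f6d", 0 },
        };

        disconnect_stale_exports(exports, 2);
        return 0;
    }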
[ 5700.091351] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5701.342030] Lustre: lustre-MDT0000: Denying connection for new client e014db74-e380-4ec0-933a-d7c9563ab144 (at 192.168.201.49@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59
[ 5702.889538] Lustre: lustre-MDT0000: Denying connection for new client e014db74-e380-4ec0-933a-d7c9563ab144 (at 192.168.201.49@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:57
[ 5704.042003] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:163 to 0x2c0000403:193)
[ 5704.042087] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000bd0:8965 to 0x280000bd0:9153)
[ 5711.063869] Lustre: DEBUG MARKER: == recovery-small test 156: tot_granted miscount after client eviction ========================================================== 12:23:28 (1713284608)
[ 5711.730942] Lustre: Setting parameter general.timeout in log params
[ 5714.888394] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000
[ 5715.820676] Lustre: Failing over lustre-OST0000
[ 5716.049768] Lustre: server umount lustre-OST0000 complete
[ 5730.265184] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5730.421500] LDISKFS-fs (dm-2): recovery complete
[ 5730.422626] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5732.208492] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing set_default_debug -1 all
[ 5770.523842] Lustre: lustre-OST0000: recovery is timed out, evict stale exports
[ 5770.527443] Lustre: 1432:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-OST0000: disconnect stale client e014db74-e380-4ec0-933a-d7c9563ab144@192.168.201.49@tcp
[ 5770.534514] Lustre: 1432:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message
[ 5770.539477] Lustre: lustre-OST0000: disconnecting 1 stale clients
[ 5770.543232] Lustre: 1432:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-OST0000: extended recovery timer reached hard limit: 45, extend: 1
[ 5770.552078] Lustre: 1432:0:(ldlm_lib.c:2874:target_recovery_thread()) too long recovery - read logs
[ 5770.556906] LustreError: dumping log to /tmp/lustre-log.1713284668.1432
[ 5777.035321] Lustre: DEBUG MARKER: oleg149-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 5777.380181] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 5781.494535] Lustre: Modifying parameter general.timeout in log params
[ 5784.309277] Lustre: DEBUG MARKER: == recovery-small test 157: eviction during mmapped i/o === 12:24:42 (1713284682)
[ 5785.738111] Lustre: 3012:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting e014db74-e380-4ec0-933a-d7c9563ab144 at administrative request
[ 5785.745611] Lustre: 3012:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 1 previous similar message
[ 5790.494659] Lustre: DEBUG MARKER: == recovery-small test complete, duration 5693 sec ======= 12:24:48 (1713284688)
[ 5873.756500] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 5873.760055] Lustre: Skipped 10 previous similar messages
[ 5879.790473] Lustre: server umount lustre-MDT0000 complete
[ 5882.731060] LustreError: 18816:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713284781 with bad export cookie 14793140911901419
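Test 156 shows extend_recovery_timer() clamping: each reconnecting client may extend the recovery window, but the extension stops at a hard limit ("extended recovery timer reached hard limit: 45, extend: 1"). A small sketch of that clamp, with the limit value taken from the log and the other numbers illustrative:

    #include <stdio.h>

    /* Extend the recovery deadline on behalf of a reconnecting client,
     * but never past the configured hard limit. */
    static int extend_recovery_timer(int current_deadline, int wanted,
                                     int hard_limit)
    {
        int new_deadline = current_deadline + wanted;

        if (new_deadline > hard_limit) {
            printf("extended recovery timer reached hard limit: %d, extend: 1\n",
                   hard_limit);
            new_deadline = hard_limit;
        }
        return new_deadline;
    }

    int main(void)
    {
        int deadline = 40;

        deadline = extend_recovery_timer(deadline, 10, 45); /* clamped to 45 */
        printf("recovery deadline now %d\n", deadline);
        return 0;
    }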
[ 5882.737141] LustreError: 18816:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 3 previous similar messages
[ 5882.882983] Lustre: server umount lustre-MDT0001 complete
[ 5895.771438] Lustre: server umount lustre-OST0000 complete
[ 5908.764316] Lustre: server umount lustre-OST0001 complete
[ 5911.106597] device-mapper: core: cleaned up
[ 5914.062170] Lustre: DEBUG MARKER: oleg149-server.virtnet: executing unload_modules_local
[ 5914.850128] Key type lgssc unregistered
[ 5914.940412] LNet: 6317:0:(lib-ptl.c:966:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 5914.945026] LNet: Removed LNI 192.168.201.149@tcp