[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.10.0-7.9-debug (green@centos7-base) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) ) #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 0.000000] Command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffcdfff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000bffce000-0x00000000bfffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000013edfffff] usable
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 3.0.0 present.
[ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-1.fc39 04/01/2014
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] e820: last_pfn = 0x13ee00 max_arch_pfn = 0x400000000
[ 0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC
[ 0.000000] e820: last_pfn = 0xbffce max_arch_pfn = 0x400000000
[ 0.000000] found SMP MP-table at [mem 0x000f53f0-0x000f53ff] mapped at [ffffffffff2003f0]
[ 0.000000] Using GB pages for direct mapping
[ 0.000000] RAMDISK: [mem 0xbc2e2000-0xbffbffff]
[ 0.000000] Early table checksum verification disabled
[ 0.000000] ACPI: RSDP 00000000000f5200 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 00000000bffe1d87 00034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 00000000bffe1c23 00074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 00000000bffe0040 01BE3 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACS 00000000bffe0000 00040
[ 0.000000] ACPI: APIC 00000000bffe1c97 00090 (v03 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: HPET 00000000bffe1d27 00038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: WAET 00000000bffe1d5f 00028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000013edfffff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x13e5e3000-0x13e609fff]
[ 0.000000] Reserving 128MB of memory at 768MB for crashkernel (System RAM: 4077MB)
[ 0.000000] kvm-clock: cpu 0, msr 1:3e592001, primary cpu clock
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: using sched offset of 295135766 cycles
[ 0.000000] Zone ranges:
[ 0.000000] DMA [mem 0x00001000-0x00ffffff]
[ 0.000000] DMA32 [mem 0x01000000-0xffffffff]
[ 0.000000] Normal [mem 0x100000000-0x13edfffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x00001000-0x0009efff]
[ 0.000000] node 0: [mem 0x00100000-0xbffcdfff]
[ 0.000000] node 0: [mem 0x100000000-0x13edfffff]
[ 0.000000] Initmem setup node 0 [mem 0x00001000-0x13edfffff]
[ 0.000000] ACPI: PM-Timer IO Port: 0x608
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xbffce000-0xbfffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[ 0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
[ 0.000000] percpu: Embedded 38 pages/cpu s115176 r8192 d32280 u524288
[ 0.000000] KVM setup async PF for cpu 0
[ 0.000000] kvm-stealtime: cpu 0, msr 13e2135c0
[ 0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1027487
[ 0.000000] Policy zone: Normal
[ 0.000000] Kernel command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] audit: disabled (until reboot)
[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form
[ 0.000000] Memory: 3820268k/5224448k available (8172k kernel code, 1049168k absent, 355012k reserved, 5773k data, 2532k init)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=4.
[ 0.000000] Offload RCU callbacks from all CPUs
[ 0.000000] Offload RCU callbacks from CPUs: 0-3.
[ 0.000000] NR_IRQS:327936 nr_irqs:456 0
[ 0.000000] Console: colour *CGA 80x25
[ 0.000000] console [ttyS1] enabled
[ 0.000000] allocated 25165824 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] kmemleak: Kernel memory leak detector disabled
[ 0.000000] tsc: Detected 2399.998 MHz processor
[ 0.457464] Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
[ 0.460008] pid_max: default: 32768 minimum: 301
[ 0.461545] Security Framework initialized
[ 0.462817] SELinux: Initializing.
[ 0.465658] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[ 0.470339] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[ 0.473172] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.474682] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.477416] Initializing cgroup subsys memory
[ 0.478496] Initializing cgroup subsys devices
[ 0.479902] Initializing cgroup subsys freezer
[ 0.480995] Initializing cgroup subsys net_cls
[ 0.481880] Initializing cgroup subsys blkio
[ 0.482710] Initializing cgroup subsys perf_event
[ 0.483652] Initializing cgroup subsys hugetlb
[ 0.484727] Initializing cgroup subsys pids
[ 0.485587] Initializing cgroup subsys net_prio
[ 0.486722] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[ 0.489926] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.491274] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.492568] tlb_flushall_shift: 6
[ 0.493599] FEATURE SPEC_CTRL Present
[ 0.494808] FEATURE IBPB_SUPPORT Present
[ 0.496389] Spectre V2 : Enabling Indirect Branch Prediction Barrier
[ 0.497690] Spectre V2 : Vulnerable
[ 0.498462] Speculative Store Bypass: Vulnerable
[ 0.500582] debug: unmapping init [mem 0xffffffff82019000-0xffffffff8201ffff]
[ 0.508060] ACPI: Core revision 20130517
[ 0.510543] ACPI: All ACPI Tables successfully acquired
[ 0.511934] ftrace: allocating 30294 entries in 119 pages
[ 0.557988] Enabling x2apic
[ 0.558927] Enabled x2apic
[ 0.559933] Switched APIC routing to physical x2apic.
[ 0.562829] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.564706] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (fam: 06, model: 3e, stepping: 04)
[ 0.567690] Performance Events: IvyBridge events, full-width counters, Intel PMU driver.
[ 0.569990] ... version: 2
[ 0.571278] ... bit width: 48
[ 0.572211] ... generic registers: 4
[ 0.573068] ... value mask: 0000ffffffffffff
[ 0.574597] ... max period: 00007fffffffffff
[ 0.575761] ... fixed-purpose events: 3
[ 0.576582] ... event mask: 000000070000000f
[ 0.578148] KVM setup paravirtual spinlock
[ 0.581244] smpboot: Booting Node 0, Processors #1
[ 0.583066] kvm-clock: cpu 1, msr 1:3e592041, secondary cpu clock
[ 0.586617] KVM setup async PF for cpu 1
[ 0.588051] kvm-stealtime: cpu 1, msr 13e2935c0 #2
[ 0.591264] kvm-clock: cpu 2, msr 1:3e592081, secondary cpu clock
[ 0.594123] KVM setup async PF for cpu 2
[ 0.594615] kvm-clock: cpu 3, msr 1:3e5920c1, secondary cpu clock #3 OK
[ 0.597576] kvm-stealtime: cpu 2, msr 13e3135c0
[ 0.599122] Brought up 4 CPUs
[ 0.599157] KVM setup async PF for cpu 3
[ 0.599166] kvm-stealtime: cpu 3, msr 13e3935c0
[ 0.601214] smpboot: Max logical packages: 1
[ 0.602000] smpboot: Total of 4 processors activated (19199.98 BogoMIPS)
[ 0.605144] devtmpfs: initialized
[ 0.606510] x86/mm: Memory block size: 128MB
[ 0.612138] EVM: security.selinux
[ 0.613456] EVM: security.ima
[ 0.614551] EVM: security.capability
[ 0.618993] atomic64 test passed for x86-64 platform with CX8 and with SSE
[ 0.621645] NET: Registered protocol family 16
[ 0.624108] cpuidle: using governor haltpoll
[ 0.626309] ACPI: bus type PCI registered
[ 0.628118] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.631510] PCI: Using configuration type 1 for base access
[ 0.633895] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on
[ 0.644587] ACPI: Added _OSI(Module Device)
[ 0.646315] ACPI: Added _OSI(Processor Device)
[ 0.647422] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.648466] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.650075] ACPI: Added _OSI(Linux-Dell-Video)
[ 0.655265] ACPI: Interpreter enabled
[ 0.656231] ACPI: (supports S0 S3 S4 S5)
[ 0.657139] ACPI: Using IOAPIC for interrupt routing
[ 0.659038] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.662941] ACPI: Enabled 2 GPEs in block 00 to 0F
[ 0.670667] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.673345] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[ 0.675052] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[ 0.676428] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ 0.679499] acpiphp: Slot [2] registered
[ 0.680564] acpiphp: Slot [5] registered
[ 0.681411] acpiphp: Slot [6] registered
[ 0.682255] acpiphp: Slot [7] registered
[ 0.683060] acpiphp: Slot [8] registered
[ 0.684595] acpiphp: Slot [9] registered
[ 0.686124] acpiphp: Slot [10] registered
[ 0.687679] acpiphp: Slot [3] registered
[ 0.689103] acpiphp: Slot [4] registered
[ 0.690663] acpiphp: Slot [11] registered
[ 0.692032] acpiphp: Slot [12] registered
[ 0.692936] acpiphp: Slot [13] registered
[ 0.693814] acpiphp: Slot [14] registered
[ 0.694646] acpiphp: Slot [15] registered
[ 0.695502] acpiphp: Slot [16] registered
[ 0.696494] acpiphp: Slot [17] registered
[ 0.698130] acpiphp: Slot [18] registered
[ 0.699715] acpiphp: Slot [19] registered
[ 0.700694] acpiphp: Slot [20] registered
[ 0.701725] acpiphp: Slot [21] registered
[ 0.702695] acpiphp: Slot [22] registered
[ 0.703665] acpiphp: Slot [23] registered
[ 0.704883] acpiphp: Slot [24] registered
[ 0.705898] acpiphp: Slot [25] registered
[ 0.707769] acpiphp: Slot [26] registered
[ 0.708859] acpiphp: Slot [27] registered
[ 0.709919] acpiphp: Slot [28] registered
[ 0.710948] acpiphp: Slot [29] registered
[ 0.712132] acpiphp: Slot [30] registered
[ 0.713354] acpiphp: Slot [31] registered
[ 0.714278] PCI host bridge to bus 0000:00
[ 0.715398] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[ 0.718193] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[ 0.721094] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[ 0.724209] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
[ 0.727315] pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38007fffffff window]
[ 0.730651] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 0.746876] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
[ 0.749760] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
[ 0.752347] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
[ 0.755241] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
[ 0.759912] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
[ 0.762889] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
[ 0.952473] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 0.955394] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 0.957775] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 0.960175] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 0.962424] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[ 0.966038] vgaarb: loaded
[ 0.967251] SCSI subsystem initialized
[ 0.968580] ACPI: bus type USB registered
[ 0.969918] usbcore: registered new interface driver usbfs
[ 0.971750] usbcore: registered new interface driver hub
[ 0.973519] usbcore: registered new device driver usb
[ 0.975845] PCI: Using ACPI for IRQ routing
[ 0.978050] NetLabel: Initializing
[ 0.978961] NetLabel: domain hash size = 128
[ 0.980050] NetLabel: protocols = UNLABELED CIPSOv4
[ 0.981316] NetLabel: unlabeled traffic allowed by default
[ 0.983386] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[ 0.984985] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[ 0.989035] amd_nb: Cannot enumerate AMD northbridges
[ 0.990904] Switched to clocksource kvm-clock
[ 1.006942] pnp: PnP ACPI init
[ 1.008074] ACPI: bus type PNP registered
[ 1.010721] pnp: PnP ACPI: found 6 devices
[ 1.012119] ACPI: bus type PNP unregistered
[ 1.024566] NET: Registered protocol family 2
[ 1.026093] TCP established hash table entries: 32768 (order: 6, 262144 bytes)
[ 1.029176] TCP bind hash table entries: 32768 (order: 8, 1048576 bytes)
[ 1.032622] TCP: Hash tables configured (established 32768 bind 32768)
[ 1.034633] TCP: reno registered
[ 1.035849] UDP hash table entries: 2048 (order: 5, 196608 bytes)
[ 1.037842] UDP-Lite hash table entries: 2048 (order: 5, 196608 bytes)
[ 1.040397] NET: Registered protocol family 1
[ 1.042495] RPC: Registered named UNIX socket transport module.
[ 1.043858] RPC: Registered udp transport module.
[ 1.045226] RPC: Registered tcp transport module.
[ 1.046714] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 1.048819] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 1.050085] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 1.051497] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 1.053038] Unpacking initramfs...
[ 2.365261] debug: unmapping init [mem 0xffff8800bc2e2000-0xffff8800bffbffff]
[ 2.369062] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 2.371158] software IO TLB [mem 0xb82e2000-0xbc2e2000] (64MB) mapped at [ffff8800b82e2000-ffff8800bc2e1fff]
[ 2.374215] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer
[ 2.376712] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules
[ 2.378366] RAPL PMU: hw unit of domain package 2^-0 Joules
[ 2.380061] RAPL PMU: hw unit of domain dram 2^-0 Joules
[ 2.383362] cryptomgr_test (52) used greatest stack depth: 14480 bytes left
[ 2.383810] futex hash table entries: 1024 (order: 4, 65536 bytes)
[ 2.383855] Initialise system trusted keyring
[ 2.417349] HugeTLB registered 1 GB page size, pre-allocated 0 pages
[ 2.419240] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 2.425333] zpool: loaded
[ 2.426159] zbud: loaded
[ 2.427392] VFS: Disk quotas dquot_6.6.0
[ 2.428644] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 2.431253] NFS: Registering the id_resolver key type
[ 2.432790] Key type id_resolver registered
[ 2.433965] Key type id_legacy registered
[ 2.435181] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[ 2.437601] Key type big_key registered
[ 2.439802] cryptomgr_test (58) used greatest stack depth: 14048 bytes left
[ 2.441104] cryptomgr_test (60) used greatest stack depth: 13664 bytes left
[ 2.443581] NET: Registered protocol family 38
[ 2.444911] Key type asymmetric registered
[ 2.446167] Asymmetric key parser 'x509' registered
[ 2.447786] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[ 2.450192] io scheduler noop registered
[ 2.451433] io scheduler deadline registered (default)
[ 2.453126] io scheduler cfq registered
[ 2.454343] io scheduler mq-deadline registered
[ 2.455813] io scheduler kyber registered
[ 2.459432] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 2.461108] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 2.463295] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 2.465564] ACPI: Power Button [PWRF]
[ 2.467163] GHES: HEST is not enabled!
[ 2.530575] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ 2.597646] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 11
[ 2.703105] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ 2.756410] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
[ 2.874226] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 2.903126] 00:03: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[ 2.933916] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 2.936779] Non-volatile memory driver v1.3
[ 2.937695] Linux agpgart interface v0.103
[ 2.938762] crash memory driver: version 1.1
[ 2.940073] nbd: registered device at major 43
[ 2.951082] virtio_blk virtio1: [vda] 67344 512-byte logical blocks (34.4 MB/32.8 MiB)
[ 2.960558] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB)
[ 2.973889] virtio_blk virtio3: [vdc] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[ 2.985923] virtio_blk virtio4: [vdd] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[ 2.996807] virtio_blk virtio5: [vde] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[ 3.007202] virtio_blk virtio6: [vdf] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[ 3.015362] rdac: device handler registered
[ 3.016792] hp_sw: device handler registered
[ 3.018025] emc: device handler registered
[ 3.019319] libphy: Fixed MDIO Bus: probed
[ 3.022596] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 3.024507] ehci-pci: EHCI PCI platform driver
[ 3.025835] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 3.027613] ohci-pci: OHCI PCI platform driver
[ 3.028996] uhci_hcd: USB Universal Host Controller Interface driver
[ 3.031109] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 3.034558] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 3.035927] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 3.037665] mousedev: PS/2 mouse device common for all mice
[ 3.039874] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[ 3.042793] rtc_cmos 00:05: RTC can wake from S4
[ 3.045958] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0
[ 3.048681] rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
[ 3.051028] hidraw: raw HID events driver (C) Jiri Kosina
[ 3.052210] usbcore: registered new interface driver usbhid
[ 3.053113] usbhid: USB HID core driver
[ 3.053939] drop_monitor: Initializing network drop monitor service
[ 3.055060] Netfilter messages via NETLINK v0.30.
[ 3.056196] TCP: cubic registered
[ 3.057137] Initializing XFRM netlink socket
[ 3.058387] NET: Registered protocol family 10
[ 3.060093] NET: Registered protocol family 17
[ 3.061548] Key type dns_resolver registered
[ 3.063076] mce: Using 10 MCE banks
[ 3.064527] Loading compiled-in X.509 certificates
[ 3.066741] Loaded X.509 cert 'Magrathea: Glacier signing key: e34d0e1b7fcf5b414cce75d36d8482945c781ed6'
[ 3.068565] registered taskstats version 1
[ 3.071488] modprobe (72) used greatest stack depth: 13456 bytes left
[ 3.075324] Key type trusted registered
[ 3.079647] Key type encrypted registered
[ 3.080762] IMA: No TPM chip found, activating TPM-bypass! (rc=-19)
[ 3.083909] BERT: Boot Error Record Table support is disabled. Enable it by using bert_enable as kernel parameter.
[ 3.087441] rtc_cmos 00:05: setting system clock to 2024-04-18 07:21:59 UTC (1713424919)
[ 3.089446] debug: unmapping init [mem 0xffffffff81da0000-0xffffffff82018fff]
[ 3.091090] Write protecting the kernel read-only data: 12288k
[ 3.092567] debug: unmapping init [mem 0xffff8800017fe000-0xffff8800017fffff]
[ 3.094842] debug: unmapping init [mem 0xffff880001b9b000-0xffff880001bfffff]
[ 3.102296] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.104972] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.107044] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.110347] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[ 3.115442] systemd[1]: Detected virtualization kvm.
[ 3.117019] systemd[1]: Detected architecture x86-64.
[ 3.118636] systemd[1]: Running in initial RAM disk.

Welcome to CentOS Linux 7 (Core) dracut-033-572.el7 (Initramfs)!

[ 3.122393] systemd[1]: No hostname configured.
[ 3.123783] systemd[1]: Set hostname to .
[ 3.125387] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.127271] systemd[1]: Initializing machine ID from random generator.
[ 3.137315] ln (88) used greatest stack depth: 13008 bytes left
[ 3.169729] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.172023] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.174400] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.176856] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.180414] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.182809] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.193757] systemd[1]: Reached target Timers.
[ OK ] Reached target Timers.
[ 3.197638] systemd[1]: Created slice Root Slice.
[ OK ] Created slice Root Slice.
[ 3.200834] systemd[1]: Listening on udev Control Socket.
[ OK ] Listening on udev Control Socket.
[ 3.205161] systemd[1]: Created slice System Slice.
[ OK ] Created slice System Slice.
[ 3.208550] systemd[1]: Listening on Journal Socket.
[ OK ] Listening on Journal Socket.
[ 3.213183] systemd[1]: Starting Journal Service...
Starting Journal Service...
[ 3.218014] systemd[1]: Starting Load Kernel Modules...
Starting Load Kernel Modules...
[ 3.222782] systemd[1]: Starting Create list of required static device nodes for the current kernel...
Starting Create list of required st... nodes for the current kernel...
[ 3.229582] systemd[1]: Starting Setup Virtual Console...
Starting Setup Virtual Console...
[ 3.233496] systemd[1]: Reached target Slices.
[ OK ] Reached target Slices.
[ 3.237188] systemd[1]: Listening on udev Kernel Socket.
[ OK ] Listening on udev Kernel Socket.
[ 3.240291] systemd[1]: Reached target Sockets.
[ OK ] Reached target Sockets.
[ 3.243157] systemd[1]: Reached target Local File Systems.
[ OK ] Reached target Local File Systems.
[ 3.246665] systemd[1]: Reached target Swap.
[ OK ] Reached target Swap.
[ 3.252238] systemd[1]: Starting dracut cmdline hook...
Starting dracut cmdline hook...
[ 3.257345] systemd[1]: Started Journal Service.
[ OK ] Started Journal Service.
[ OK ] Started Load Kernel Modules.
[ OK ] Started Create list of required sta...ce nodes for the current kernel.
[ OK ] Started Setup Virtual Console.
Starting Create Static Device Nodes in /dev...
Starting Apply Kernel Variables...
[ OK ] Started Create Static Device Nodes in /dev.
[ OK ] Started Apply Kernel Variables.
[ 3.384013] tsc: Refined TSC clocksource calibration: 2399.956 MHz
[ 3.449199] random: fast init done
[ OK ] Started dracut cmdline hook.
Starting dracut pre-udev hook...
[ OK ] Started dracut pre-udev hook.
Starting udev Kernel Device Manager...
[ OK ] Started udev Kernel Device Manager.
Starting dracut pre-trigger hook...
[ OK ] Started dracut pre-trigger hook.
Starting udev Coldplug all Devices...
Mounting Configuration File System...
[ OK ] Mounted Configuration File System.
[ 3.738177] scsi host0: ata_piix
[ 3.741992] scsi host1: ata_piix
[ 3.744747] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc320 irq 14
[ 3.746810] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc328 irq 15
[ OK ] Started udev Coldplug all Devices.
Starting Show Plymouth Boot Screen...
Starting dracut initqueue hook...
[ OK ] Reached target System Initialization.
[ OK ] Started Show Plymouth Boot Screen.
[ OK ] Started Forward Password Requests to Plymouth Directory Watch.
[ OK ] Reached target Paths.
[ OK ] Reached target Basic System.
[ 3.852882] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 3.854334] ip (344) used greatest stack depth: 12464 bytes left
[ 3.883779] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
[ 4.023981] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 6.011085] dracut-initqueue[295]: RTNETLINK answers: File exists
[ 6.181817] dracut-initqueue[295]: bs=4096, sz=32212254720 bytes
[ OK ] Started dracut initqueue hook.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
Mounting /sysroot...
[ OK ] Reached target Initrd Root File System.
Starting Reload Configuration from the Real Root...
[ 6.807236] EXT4-fs (nbd0): mounted filesystem with ordered data mode. Opts: (null)
[ OK ] Mounted /sysroot.
[ OK ] Started Reload Configuration from the Real Root.
[ OK ] Reached target Initrd File Systems.
[ OK ] Reached target Initrd Default Target.
Starting dracut pre-pivot and cleanup hook...
[ OK ] Started dracut pre-pivot and cleanup hook.
Starting Cleaning Up and Shutting Down Daemons...
[ OK ] Stopped target Timers.
Starting Plymouth switch root service...
[ OK ] Stopped dracut pre-pivot and cleanup hook.
[ OK ] Stopped target Initrd Default Target.
[ OK ] Stopped target Remote File Systems.
[ OK ] Stopped target Remote File Systems (Pre).
[ OK ] Stopped target Basic System.
[ OK ] Stopped target Paths.
[ OK ] Stopped target Slices.
[ OK ] Stopped target System Initialization.
[ OK ] Stopped target Local File Systems.
[ OK ] Stopped target Swap.
[ OK ] Stopped Apply Kernel Variables.
[ OK ] Stopped Load Kernel Modules.
[ OK ] Stopped dracut initqueue hook.
[ OK ] Stopped udev Coldplug all Devices.
[ OK ] Stopped target Sockets.
[ OK ] Stopped dracut pre-trigger hook.
Stopping udev Kernel Device Manager...
[ OK ] Started Cleaning Up and Shutting Down Daemons.
[ OK ] Stopped udev Kernel Device Manager.
[ OK ] Stopped dracut pre-udev hook.
[ OK ] Stopped dracut cmdline hook.
[ OK ] Stopped Create Static Device Nodes in /dev.
[ OK ] Stopped Create list of required sta...ce nodes for the current kernel.
[ OK ] Closed udev Kernel Socket.
[ OK ] Closed udev Control Socket.
Starting Cleanup udevd DB...
[ OK ] Started Plymouth switch root service.
[ OK ] Started Cleanup udevd DB.
[ OK ] Reached target Switch Root.
Starting Switch Root...
[ 7.192832] systemd-journald[100]: Received SIGTERM from PID 1 (n/a).
[ 7.371769] SELinux: Disabled at runtime.
[ 7.434491] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 7.442442] systemd[1]: Inserted module 'ip_tables'

Welcome to CentOS Linux 7 (Core)!

[ OK ] Stopped Switch Root.
[ OK ] Stopped Journal Service.
Starting Journal Service...
[ OK ] Listening on /dev/initctl Compatibility Named Pipe.
[ OK ] Reached target Local Encrypted Volumes.
Mounting POSIX Message Queue File System...
[ OK ] Listening on udev Control Socket.
[ OK ] Set up automount Arbitrary Executab...ats File System Automount Point.
[ OK ] Created slice system-serial\x2dgetty.slice.
[ OK ] Created slice system-selinux\x2dpol...grate\x2dlocal\x2dchanges.slice.
Starting Create list of required st... nodes for the current kernel...
[ OK ] Listening on udev Kernel Socket.
Starting udev Coldplug all Devices...
[ OK ] Started Forward Password Requests to Wall Directory Watch.
Starting Read and set NIS domainname from /etc/sysconfig/network...
Mounting Debug File System...
[ OK ] Reached target rpc_pipefs.target.
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Created slice system-getty.slice.
Starting Load Kernel Modules...
Starting Set Up Additional Binary Formats...
[ OK ] Created slice User and Session Slice.
[ OK ] Reached target Slices.
[ OK ] Stopped target Switch Root.
[ OK ] Stopped target Initrd Root File System.
[ OK ] Stopped target Initrd File Systems.
Mounting Huge Pages File System...
Starting Remount Root and Kernel File Systems...
[ OK ] Mounted POSIX Message Queue File System.
[ OK ] Mounted Debug File System.
[ OK ] Started Create list of required sta...ce nodes for the current kernel.
[ OK ] Started Read and set NIS domainname from /etc/sysconfig/network.
[ OK ] Started Load Kernel Modules.
[ OK ] Mounted Huge Pages File System.
[ OK ] Started Journal Service.
Mounting Arbitrary Executable File Formats File System...
Starting Apply Kernel Variables...
Starting Create Static Device Nodes in /dev...
[ OK ] Started udev Coldplug all Devices.
[ OK ] Mounted Arbitrary Executable File Formats File System.
[ OK ] Started Apply Kernel Variables.
[ OK ] Started Set Up Additional Binary Formats.
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
Starting Configure read-only root support...
Starting Flush Journal to Persistent Storage...
[ OK ] Started Create Static Device Nodes in /dev.
Starting udev Kernel Device Manager...
[ OK ] Reached target Local File Systems (Pre).
Mounting /mnt...
[ OK ] Mounted /mnt.
[ 7.921695] systemd-journald[566]: Received request to flush runtime journal from PID 1
[ OK ] Started Flush Journal to Persistent Storage.
[ OK ] Started udev Kernel Device Manager.
[ 8.060458] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
[ 8.096567] input: PC Speaker as /devices/platform/pcspkr/input/input3
[ OK ] Found device /dev/ttyS0.
[ OK ] Found device /dev/ttyS1.
[ 8.129304] cryptd: max_cpu_qlen set to 1000
[ OK ] Found device /dev/disk/by-label/SWAP.
Activating swap /dev/disk/by-label/SWAP...
[ OK ] Found device /dev/vda.
[ 8.161513] AVX version of gcm_enc/dec engaged.
[ 8.166112] AES CTR mode by8 optimization enabled
Mounting /home/green/git/lustre-release...
[ 8.198767] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[ OK ] Mounted /home/green/git/lustre-release.
[ 8.206328] Adding 1048572k swap on /dev/vdb. Priority:-2 extents:1 across:1048572k FS
[ 8.212757] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[ OK ] Activated swap /dev/disk/by-label/SWAP.
[ 8.217522] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni)
[ OK ] Reached target Swap.
[ 8.343294] EDAC MC: Ver: 3.0.0
[ 8.350319] EDAC sbridge: Ver: 1.1.2
[ 10.471665] mount.nfs (768) used greatest stack depth: 10704 bytes left
[ OK ] Started Configure read-only root support.
[ OK ] Reached target Local File Systems.
Starting Mark the need to relabel after reboot...
Starting Tell Plymouth To Write Out Runtime Data...
Starting Rebuild Journal Catalog...
Starting Preprocess NFS configuration...
Starting Load/Save Random Seed...
Starting Create Volatile Files and Directories...
[ OK ] Started Mark the need to relabel after reboot.
[ OK ] Started Preprocess NFS configuration.
[FAILED] Failed to start Rebuild Journal Catalog.
See 'systemctl status systemd-journal-catalog-update.service' for details.
Starting Update is Completed...
[ OK ] Started Load/Save Random Seed.
[FAILED] Failed to start Create Volatile Files and Directories.
See 'systemctl status systemd-tmpfiles-setup.service' for details.
Starting Update UTMP about System Boot/Shutdown...
[ OK ] Started Tell Plymouth To Write Out Runtime Data.
[ OK ] Started Update is Completed.
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Reached target System Initialization.
[ OK ] Listening on RPCbind Server Activation Socket.
[ OK ] Started Flexible branding.
[ OK ] Reached target Paths.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Reached target Sockets.
[ OK ] Reached target Basic System.
[ OK ] Started D-Bus System Message Bus.
Starting Network Manager...
Starting GSSAPI Proxy Daemon...
Starting Login Service...
Starting Dump dmesg to /var/log/dmesg...
[ OK ] Started Dump dmesg to /var/log/dmesg.
[ OK ] Started Login Service.
[ OK ] Started GSSAPI Proxy Daemon.
[ OK ] Reached target NFS client services.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
Starting Permit User Sessions...
[ OK ] Started Permit User Sessions.
[ OK ] Started Network Manager.
[ OK ] Reached target Network.
Starting OpenSSH server daemon...
Starting /etc/rc.d/rc.local Compatibility...
Starting Network Manager Wait Online...
Starting Hostname Service...
[ OK ] Started /etc/rc.d/rc.local Compatibility.
[ OK ] Started Hostname Service.
Starting Network Manager Script Dispatcher Service...
Starting Terminate Plymouth Boot Screen...
Starting Wait for Plymouth Boot Screen to Quit...
[ OK ] Started OpenSSH server daemon.
[ OK ] Started Network Manager Script Dispatcher Service.

CentOS Linux 7 (Core)
Kernel 3.10.0-7.9-debug on an x86_64

oleg155-server login:
[ 19.111489] device-mapper: uevent: version 1.0.3
[ 19.113067] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel@redhat.com
[ 23.257242] libcfs: loading out-of-tree module taints kernel.
[ 23.258742] libcfs: module verification failed: signature and/or required key missing - tainting kernel
[ 23.282366] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_hostid
[ 28.111357] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing load_modules_local
[ 28.321176] alg: No test for adler32 (adler32-zlib)
[ 29.072031] libcfs: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1
[ 29.192585] Lustre: Lustre: Build Version: 2.15.62_23_gb559b30
[ 29.352538] LNet: Added LNI 192.168.201.155@tcp [8/256/0/180]
[ 29.353874] LNet: Accept secure, port 988
[ 30.897156] Key type lgssc registered
[ 31.300570] Lustre: Echo OBD driver; http://www.lustre.org/
[ 33.995080] icp: module license 'CDDL' taints kernel.
[ 33.996373] Disabling lock debugging due to kernel taint
[ 36.509496] ZFS: Loaded module v0.8.6-1, ZFS pool version 5000, ZFS filesystem version 5
[ 39.742351] LDISKFS-fs (vdc): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 43.980087] LDISKFS-fs (vdd): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 45.953764] LDISKFS-fs (vde): file extents enabled, maximum tree depth=5
[ 45.957719] LDISKFS-fs (vde): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 47.970742] LDISKFS-fs (vdf): file extents enabled, maximum tree depth=5
[ 47.974454] LDISKFS-fs (vdf): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 51.023049] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing load_modules_local
[ 54.092471] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 54.113389] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt'
[ 54.120955] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 55.228281] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 55.239174] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space.
[ 55.278758] Lustre: lustre-MDT0000: new disk, initializing
[ 55.298514] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 55.306288] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 55.331571] mount.lustre (6903) used greatest stack depth: 10000 bytes left
[ 56.047374] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 60.259332] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 60.280441] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 60.302651] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[ 60.313583] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space.
[ 60.315146] Lustre: Skipped 1 previous similar message
[ 60.350734] Lustre: lustre-MDT0001: new disk, initializing
[ 60.369535] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180
[ 60.376405] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[ 60.384909] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt]
[ 61.236452] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 66.677779] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 66.683784] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 66.706991] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 66.712283] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 66.809749] Lustre: lustre-OST0000: new disk, initializing
[ 66.812866] Lustre: srv-lustre-OST0000: No data found on store. Initialize space.
[ 66.828204] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 68.593036] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 72.657753] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[ 72.662544] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost]
[ 72.675979] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401
[ 73.056955] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5
[ 73.059824] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 73.076315] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5
[ 73.078984] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 73.100563] Lustre: lustre-OST0001: new disk, initializing
[ 73.103017] Lustre: srv-lustre-OST0001: No data found on store. Initialize space.
[ 73.116302] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180
[ 74.314279] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 74.789428] random: crng init done
[ 79.059457] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 79.332938] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost
[ 79.336881] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost]
[ 79.345396] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x2c0000401
[ 86.376160] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[ 92.001476] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing check_logdir /tmp/testlogs/
[ 92.837802] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing yml_node
[ 93.785412] Lustre: DEBUG MARKER: Client: 2.15.62.23
[ 94.438954] Lustre: DEBUG MARKER: MDS: 2.15.62.23
[ 95.720177] Lustre: DEBUG MARKER: OSS: 2.15.62.23
[ 96.777647] Lustre: DEBUG MARKER: -----============= acceptance-small: recovery-small ============----- Thu Apr 18 03:23:32 EDT 2024
[ 99.535939] Lustre: DEBUG MARKER: excepting tests: 136
[ 100.166916] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing check_config_client /mnt/lustre
[ 104.828104] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 105.650565] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[ 106.219779] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 107.961165] Lustre: DEBUG MARKER: == recovery-small test 1: create, chmod, stat: drop req, drop rep ========================================================== 03:23:43 (1713425023)
[ 108.205785] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 124.215392] Lustre: lustre-MDT0000: Client 7426d693-a074-418f-aac7-6018ed9794f8 (at 192.168.201.55@tcp) reconnecting
[ 124.687516] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 124.689464] LustreError: 6925:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012ce0dc00 x1796656269891648/t4294967300(0) o36->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:117/0 lens 520/448 e 0 to 0 dl 1713425052 ref 1 fl Interpret:/200/0 rc 0/0 job:'mcreate.0' uid:0 gid:0
[ 140.697065] Lustre: lustre-MDT0000: Client 7426d693-a074-418f-aac7-6018ed9794f8 (at 192.168.201.55@tcp) reconnecting
[ 140.704659] Lustre: 6926:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009d1e9180 x1796656269891648/t4294967300(0) o36->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:133/0 lens 520/2880 e 0 to 0 dl 1713425068 ref 1 fl Interpret:/202/0 rc 0/0 job:'mcreate.0' uid:0 gid:0
[ 141.182422] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 157.196933] Lustre: lustre-MDT0000: Client 7426d693-a074-418f-aac7-6018ed9794f8 (at 192.168.201.55@tcp) reconnecting
[ 157.678515] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 157.681265] LustreError: 6924:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012dff0700 x1796656269893952/t4294967302(0) o36->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:150/0 lens 488/456 e 0 to 0 dl 1713425085 ref 1 fl Interpret:/200/0 rc 0/0 job:'tchmod.0' uid:0 gid:0
[ 173.691089] Lustre: lustre-MDT0000: Client 7426d693-a074-418f-aac7-6018ed9794f8 (at 192.168.201.55@tcp) reconnecting
[ 173.697390] Lustre: 6925:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012faa1180 x1796656269893952/t4294967302(0) o36->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:166/0 lens 488/3152 e 0 to 0 dl 1713425101 ref 1 fl Interpret:/202/0 rc 0/0 job:'tchmod.0' uid:0 gid:0
[ 174.187606] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 190.197737] Lustre: lustre-MDT0000: Client 7426d693-a074-418f-aac7-6018ed9794f8 (at 192.168.201.55@tcp) reconnecting
[ 190.663715] Lustre: *** cfs_fail_loc=122, val=2147483648***
[ 190.664933] LustreError: 9217:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009d2b3800 x1796656269895744/t0(0) o34->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:183/0 lens 472/464 e 0 to 0 dl 1713425118 ref 1 fl Interpret:/200/0 rc 0/0 job:'statone.0' uid:0 gid:0
[ 206.680558] Lustre: lustre-MDT0000: Client 7426d693-a074-418f-aac7-6018ed9794f8 (at 192.168.201.55@tcp) reconnecting
[ 209.803800] Lustre: DEBUG MARKER: == recovery-small test 4: open: drop req, drop rep ======= 03:25:25 (1713425125)
[ 210.072450] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 226.086696] Lustre: lustre-MDT0000: Client 7426d693-a074-418f-aac7-6018ed9794f8 (at 192.168.201.55@tcp) reconnecting
[ 226.561560] Lustre: *** cfs_fail_loc=122, val=2147483648***
[ 226.562755] LustreError: 6928:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009de52300 x1796656269898752/t4294967308(0) o35->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:218/0 lens 392/456 e 0 to 0 dl 1713425153 ref 1 fl Interpret:/200/0 rc 0/0 job:'cat.0' uid:0 gid:0
[ 242.563983] Lustre: 6928:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009d8c7800 x1796656269898752/t4294967308(0) o35->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:234/0 lens 392/456 e 0 to 0 dl 1713425169 ref 1 fl Interpret:/202/0 rc 0/0 job:'cat.0' uid:0 gid:0
[ 245.677017] Lustre: DEBUG MARKER: == recovery-small test 5: rename: drop req, drop rep ===== 03:26:01 (1713425161)
[ 245.946322] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 261.962118] Lustre: lustre-MDT0000: Client 7426d693-a074-418f-aac7-6018ed9794f8 (at 192.168.201.55@tcp) reconnecting
[ 261.965555] Lustre: Skipped 1 previous similar message
[ 262.435384] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 262.438074] LustreError: 6939:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009e340a80 x1796656269902208/t4294967312(0) o36->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:254/0 lens 552/456 e 0 to 0 dl 1713425189 ref 1 fl Interpret:/200/0 rc 0/0 job:'mv.0' uid:0 gid:0
[ 278.436680] Lustre: 6939:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012bbcad80 x1796656269902208/t4294967312(0) o36->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:270/0 lens 552/2888 e 0 to 0 dl 1713425205 ref 1 fl Interpret:/202/0 rc 0/0 job:'mv.0' uid:0 gid:0
[ 281.528222] Lustre: DEBUG MARKER: == recovery-small test 6: link, unlink: drop req, drop rep ========================================================== 03:26:37 (1713425197)
[ 281.787883] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 298.252225] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 298.253323] LustreError: 11066:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009d2b2300 x1796656269906048/t4294967317(0) o36->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:290/0 lens 512/440 e 0 to 0 dl 1713425225 ref 1 fl Interpret:/200/0 rc 0/0 job:'link.0' uid:0 gid:0
[ 314.253548] Lustre: 6926:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012bb20380 x1796656269906048/t4294967317(0) o36->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:306/0 lens 512/440 e 0 to 0 dl 1713425241 ref 1 fl Interpret:/202/0 rc 0/0 job:'link.0' uid:0 gid:0
[ 314.706813] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 315.171674] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 315.173744] LustreError: 6926:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880093673100 x1796656269907840/t4294967319(0) o36->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:307/0 lens 504/456 e 0 to 0 dl 1713425242 ref 1 fl Interpret:/200/0 rc 0/0 job:'unlink.0' uid:0 gid:0
[ 331.098033] Lustre: 3485:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713425231/real 1713425231] req@ffff880093673b80 x1796656275208960/t0(0) o400->lustre-MDT0000-lwp-MDT0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713425247 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 331.182803] Lustre: lustre-MDT0000: Client 7426d693-a074-418f-aac7-6018ed9794f8 (at 192.168.201.55@tcp) reconnecting
[ 331.186870] Lustre: Skipped 3 previous similar messages
[ 331.197513] Lustre: 6925:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009ccbd880 x1796656269907840/t4294967319(0) o36->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:323/0 lens 504/2888 e 0 to 0 dl 1713425258 ref 1 fl Interpret:/202/0 rc 0/0 job:'unlink.0' uid:0 gid:0
[ 334.305387] Lustre: DEBUG MARKER: == recovery-small test 8: touch: drop rep (bug 1423) ===== 03:27:29 (1713425249)
[ 350.546122] Lustre: 6925:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009e074700 x1796656269909504/t4294967322(0) o36->7426d693-a074-418f-aac7-6018ed9794f8@192.168.201.55@tcp:342/0 lens 488/3152 e 0 to 0 dl 1713425277 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0
[ 353.608767] Lustre: DEBUG MARKER: == recovery-small test 9: pause bulk on OST (bug 1420) === 03:27:49 (1713425269)
[ 354.090593] LustreError: 9202:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 sleeping for 5000ms
[ 359.092992] LustreError: 9202:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 awake
[ 362.382971] Lustre: DEBUG MARKER: == recovery-small test 10a: finish request on server after client eviction (bug 1521) ========================================================== 03:27:57 (1713425277)
[ 378.449111] Lustre: 8781:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713425278/real 1713425278] req@ffff88009e0f3480 x1796656275220608/t0(0) o104->lustre-MDT0000@192.168.201.55@tcp:15/16 lens 328/224 e 0 to 1 dl 1713425294 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295
[ 380.789051] Lustre: 9197:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713425281/real 1713425281] req@ffff88009e0af480 x1796656275221824/t0(0) o104->lustre-OST0000@192.168.201.55@tcp:15/16 lens 328/224 e 0 to 1 dl 1713425297 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295
[ 380.795533] Lustre: 9197:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
[ 394.462061] Lustre: 8781:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713425294/real 1713425294] req@ffff88009e0f3480 x1796656275220608/t0(0) o104->lustre-MDT0000@192.168.201.55@tcp:15/16 lens 328/224 e 0 to 1 dl 1713425310 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295
[ 402.546032] Lustre: mdt00_003: service thread pid 8781 was inactive for 40.096 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[ 402.554925] Pid: 8781, comm: mdt00_003 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 402.559104] Call Trace:
[ 402.560947] [<0>] ptlrpc_set_wait+0x7cf/0x850 [ptlrpc]
[ 402.563661] [<0>] ldlm_run_ast_work+0xe3/0x400 [ptlrpc]
[ 402.566056] [<0>] ldlm_handle_conflict_lock+0x70/0x300 [ptlrpc]
[ 402.569659] [<0>] ldlm_lock_enqueue+0x5c2/0xbb0 [ptlrpc]
[ 402.572185] [<0>] ldlm_cli_enqueue_local+0x1ec/0x880 [ptlrpc]
[ 402.574908] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[ 402.577593] [<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[ 402.580367] [<0>] mdt_object_stripes_lock+0x126/0x660 [mdt]
[ 402.583266] [<0>] mdt_reint_setattr+0x73b/0x15f0 [mdt]
[ 402.585729] [<0>] mdt_reint_rec+0x87/0x240 [mdt]
[ 402.588182] [<0>] mdt_reint_internal+0x74c/0xbc0 [mdt]
[ 402.590772] [<0>] mdt_reint+0x67/0x150 [mdt]
[ 402.593371] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[ 402.596307] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc]
[ 402.599835] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc]
[ 402.601875] [<0>] kthread+0xe4/0xf0
[ 402.603549] [<0>] ret_from_fork_nospec_begin+0x7/0x21
[ 402.606268] [<0>] 0xfffffffffffffffe
[ 404.850016] Lustre: ll_ost00_001: service thread pid 9196 was inactive for 40.060 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[ 404.850024] Lustre: ll_ost00_002: service thread pid 9197 was inactive for 40.060 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[ 404.850035] Pid: 9197, comm: ll_ost00_002 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 404.850036] Call Trace:
[ 404.850179] [<0>] ptlrpc_set_wait+0x7cf/0x850 [ptlrpc]
[ 404.850260] [<0>] ldlm_run_ast_work+0xe3/0x400 [ptlrpc]
[ 404.850331] [<0>] ldlm_handle_conflict_lock+0x70/0x300 [ptlrpc]
[ 404.850436] [<0>] ldlm_lock_enqueue+0x5c2/0xbb0 [ptlrpc]
[ 404.850533] [<0>] ldlm_cli_enqueue_local+0x377/0x880 [ptlrpc]
[ 404.850562] [<0>] ofd_destroy_by_fid+0x1d1/0x520 [ofd]
[ 404.850574] [<0>] ofd_destroy_hdl+0x20c/0xae0 [ofd]
[ 404.850680] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[ 404.850746] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc]
[ 404.850808] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc]
[ 404.850818] [<0>] kthread+0xe4/0xf0
[ 404.850823] [<0>] ret_from_fork_nospec_begin+0x7/0x21
[ 404.850843] [<0>] 0xfffffffffffffffe
[ 404.851046] Lustre: ll_ost00_000: service thread pid 9195 was inactive for 40.061 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[ 404.908584] Pid: 9196, comm: ll_ost00_001 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 404.911398] Call Trace: [ 404.912560] [<0>] ptlrpc_set_wait+0x7cf/0x850 [ptlrpc] [ 404.914633] [<0>] ldlm_run_ast_work+0xe3/0x400 [ptlrpc] [ 404.916422] [<0>] ldlm_handle_conflict_lock+0x70/0x300 [ptlrpc] [ 404.918403] [<0>] ldlm_lock_enqueue+0x5c2/0xbb0 [ptlrpc] [ 404.920114] [<0>] ldlm_cli_enqueue_local+0x377/0x880 [ptlrpc] [ 404.922017] [<0>] ofd_destroy_by_fid+0x1d1/0x520 [ofd] [ 404.924265] [<0>] ofd_destroy_hdl+0x20c/0xae0 [ofd] [ 404.927049] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 404.930523] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 404.933533] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 404.935862] [<0>] kthread+0xe4/0xf0 [ 404.937559] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 404.939654] [<0>] 0xfffffffffffffffe [ 410.470039] Lustre: 8781:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713425310/real 1713425310] req@ffff88009e0f3480 x1796656275220608/t0(0) o104->lustre-MDT0000@192.168.201.55@tcp:15/16 lens 328/224 e 0 to 1 dl 1713425326 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 410.477954] Lustre: 8781:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 426.480188] Lustre: 8781:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713425326/real 1713425326] req@ffff88009e0f3480 x1796656275220608/t0(0) o104->lustre-MDT0000@192.168.201.55@tcp:15/16 lens 328/224 e 0 to 1 dl 1713425342 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 426.498235] Lustre: 8781:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 442.504133] Lustre: 8781:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713425342/real 1713425342] req@ffff88009e0f3480 x1796656275220608/t0(0) o104->lustre-MDT0000@192.168.201.55@tcp:15/16 lens 328/224 e 0 to 1 dl 1713425358 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 442.517912] Lustre: 8781:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 474.521993] Lustre: 8781:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713425374/real 1713425374] req@ffff88009e0f3480 x1796656275220608/t0(0) o104->lustre-MDT0000@192.168.201.55@tcp:15/16 lens 328/224 e 0 to 1 dl 1713425390 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 474.529656] Lustre: 8781:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [ 474.532104] LustreError: 8781:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.201.55@tcp) failed to reply to blocking AST (req@ffff88009e0f3480 x1796656275220608 status 0 rc -110), evict it ns: mdt-lustre-MDT0000_UUID lock: ffff880077109d40/0xf24e407167df5926 lrc: 4/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.201.55@tcp remote: 0xdb2b24a0154456b expref: 9 pid: 8781 timeout: 557 lvb_type: 0 [ 474.544175] LustreError: 138-a: lustre-MDT0000: A client on nid 192.168.201.55@tcp was evicted due to a lock blocking callback time out: rc -110 [ 474.551710] LustreError: 6916:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 17s: evicting client at 192.168.201.55@tcp ns: mdt-lustre-MDT0000_UUID lock: 
ffff880077109d40/0xf24e407167df5926 lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.201.55@tcp remote: 0xdb2b24a0154456b expref: 10 pid: 8781 timeout: 0 lvb_type: 0 [ 476.790317] LustreError: 9195:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.201.55@tcp) failed to reply to blocking AST (req@ffff88009d34b800 x1796656275221888 status 0 rc -110), evict it ns: filter-lustre-OST0001_UUID lock: ffff880072cb6ac0/0xf24e407167df5846 lrc: 4/0,0 mode: PW/PW res: [0x2c0000401:0x4:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400030020 nid: 192.168.201.55@tcp remote: 0xdb2b24a0154452c expref: 6 pid: 10357 timeout: 560 lvb_type: 0 [ 476.791251] LustreError: 138-a: lustre-OST0000: A client on nid 192.168.201.55@tcp was evicted due to a lock blocking callback time out: rc -110 [ 476.791621] LustreError: 6916:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 16s: evicting client at 192.168.201.55@tcp ns: filter-lustre-OST0000_UUID lock: ffff880093664240/0xf24e407167df589a lrc: 3/0,0 mode: PW/PW res: [0x280000401:0x5:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4194303) gid 0 flags: 0x60000400030020 nid: 192.168.201.55@tcp remote: 0xdb2b24a0154454f expref: 8 pid: 10357 timeout: 0 lvb_type: 0 [ 476.834773] LustreError: 9195:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) Skipped 1 previous similar message [ 478.223802] Lustre: DEBUG MARKER: == recovery-small test 10b: re-send BL AST =============== 03:29:53 (1713425393) [ 497.273698] Lustre: DEBUG MARKER: == recovery-small test 10c: re-send BL AST vs reconnect race (LU-5569) ========================================================== 03:30:12 (1713425412) [ 498.353990] Lustre: lustre-MDT0001: Client 7426d693-a074-418f-aac7-6018ed9794f8 (at 192.168.201.55@tcp) reconnecting [ 498.356661] Lustre: Skipped 1 previous similar message [ 501.261740] Lustre: DEBUG MARKER: == recovery-small test 10d: test failed blocking ast ===== 03:30:16 (1713425416) [ 502.796422] LustreError: 21544:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.201.55@tcp) returned error from blocking AST (req@ffff88009efb4380 x1796656275257920 status -71 rc -71), evict it ns: filter-lustre-OST0000_UUID lock: ffff880093665f80/0xf24e407167df5d32 lrc: 4/0,0 mode: PW/PW res: [0x280000401:0x7:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000480000020 nid: 192.168.201.55@tcp remote: 0xdb2b24a0154477f expref: 5 pid: 21544 timeout: 602 lvb_type: 0 [ 502.806190] LustreError: 138-a: lustre-OST0000: A client on nid 192.168.201.55@tcp was evicted due to a lock blocking callback time out: rc -71 [ 502.809031] LustreError: Skipped 1 previous similar message [ 502.811286] LustreError: 6916:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 192.168.201.55@tcp ns: filter-lustre-OST0000_UUID lock: ffff880093665f80/0xf24e407167df5d32 lrc: 3/0,0 mode: PW/PW res: [0x280000401:0x7:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000480000020 nid: 192.168.201.55@tcp remote: 0xdb2b24a0154477f expref: 6 pid: 21544 timeout: 0 lvb_type: 0 [ 502.822577] LustreError: 6916:0:(ldlm_lockd.c:261:expired_lock_main()) Skipped 1 previous similar message [ 506.221051] Lustre: DEBUG MARKER: == recovery-small test 10e: re-send BL AST vs reconnect race 2 
========================================================== 03:30:21 (1713425421) [ 506.568300] Lustre: DEBUG MARKER: SKIP: recovery-small test_10e needs two clients [ 508.403087] Lustre: DEBUG MARKER: == recovery-small test 11: wake up a thread waiting for completion after eviction (b=2460) ========================================================== 03:30:23 (1713425423) [ 528.696390] Lustre: DEBUG MARKER: == recovery-small test 12: recover from timed out resend in ptlrpcd (b=2494) ========================================================== 03:30:44 (1713425444) [ 528.949454] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 571.196286] Lustre: DEBUG MARKER: == recovery-small test 13: mdc_readpage restart test (bug 1138) ========================================================== 03:31:26 (1713425486) [ 590.996232] Lustre: DEBUG MARKER: == recovery-small test 14: mdc_readpage resend test (bug 1138) ========================================================== 03:31:46 (1713425506) [ 591.295585] Lustre: *** cfs_fail_loc=106, val=0*** [ 591.297947] Lustre: Skipped 1 previous similar message [ 595.206410] Lustre: DEBUG MARKER: == recovery-small test 15: failed open (-ENOMEM) ========= 03:31:50 (1713425510) [ 595.578385] Lustre: *** cfs_fail_loc=128, val=0*** [ 599.120220] Lustre: DEBUG MARKER: == recovery-small test 16: timeout bulk put, don't evict client (2732) ========================================================== 03:31:54 (1713425514) [ 599.660757] Lustre: *** cfs_fail_loc=504, val=0*** [ 599.664254] LustreError: 9202:0:(ldlm_lib.c:3601:target_bulk_io()) @@@ truncated bulk READ 0(102400) req@ffff88009cd11180 x1796656269954944/t0(0) o3->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:592/0 lens 488/440 e 0 to 0 dl 1713425527 ref 1 fl Interpret:/200/0 rc 0/0 job:'cmp.0' uid:0 gid:0 [ 599.677854] Lustre: lustre-OST0001: Bulk IO read error with fa16e676-c361-4e4f-a826-6f09f8cf22b4 (at 192.168.201.55@tcp), client will retry: rc -110 [ 639.837807] Lustre: DEBUG MARKER: == recovery-small test 17a: timeout bulk get, don't evict client (2732) ========================================================== 03:32:35 (1713425555) [ 685.964440] Lustre: DEBUG MARKER: == recovery-small test 17b: timeout bulk get, don't evict client (3582) ========================================================== 03:33:21 (1713425601) [ 686.382473] Lustre: DEBUG MARKER: SKIP: recovery-small test_17b needs multiple clients [ 688.968444] Lustre: DEBUG MARKER: == recovery-small test 18a: manual ost invalidate clears page cache immediately ========================================================== 03:33:24 (1713425604) [ 693.544707] Lustre: DEBUG MARKER: == recovery-small test 18b: eviction and reconnect clears page cache (2766) ========================================================== 03:33:28 (1713425608) [ 694.210946] Lustre: 31598:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting fa16e676-c361-4e4f-a826-6f09f8cf22b4 at administrative request [ 720.528580] Lustre: DEBUG MARKER: == recovery-small test 18c: Dropped connect reply after eviction handling (14755) ========================================================== 03:33:55 (1713425635) [ 721.166884] Lustre: 32331:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting fa16e676-c361-4e4f-a826-6f09f8cf22b4 at administrative request [ 722.566369] Lustre: *** cfs_fail_loc=225, val=0*** [ 722.568522] Lustre: Skipped 1 previous similar message [ 738.808699] Lustre: DEBUG MARKER: == recovery-small test 19a: test expired_lock_main on mds (2867) 
========================================================== 03:34:14 (1713425654) [ 739.377075] Lustre: *** cfs_fail_loc=304, val=0*** [ 755.393518] Lustre: lustre-MDT0000: Client fa16e676-c361-4e4f-a826-6f09f8cf22b4 (at 192.168.201.55@tcp) reconnecting [ 755.400055] Lustre: Skipped 5 previous similar messages [ 755.406215] Lustre: *** cfs_fail_loc=304, val=0*** [ 771.413417] Lustre: *** cfs_fail_loc=304, val=0*** [ 779.506127] ptlrpc_watchdog_fire: 1 callbacks suppressed [ 779.508810] Lustre: mdt00_005: service thread pid 11066 was inactive for 40.132 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [ 779.517479] Pid: 11066, comm: mdt00_005 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 779.521270] Call Trace: [ 779.522799] [<0>] ldlm_completion_ast+0x963/0xd00 [ptlrpc] [ 779.525511] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [ 779.528360] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [ 779.531141] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [ 779.533512] [<0>] mdt_object_stripes_lock+0x126/0x660 [mdt] [ 779.536108] [<0>] mdt_reint_setattr+0x73b/0x15f0 [mdt] [ 779.538587] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [ 779.540758] [<0>] mdt_reint_internal+0x74c/0xbc0 [mdt] [ 779.543227] [<0>] mdt_reint+0x67/0x150 [mdt] [ 779.545381] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 779.547981] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 779.550906] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 779.553231] [<0>] kthread+0xe4/0xf0 [ 779.554840] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 779.557274] [<0>] 0xfffffffffffffffe [ 787.442319] Lustre: *** cfs_fail_loc=304, val=0*** [ 803.489506] Lustre: *** cfs_fail_loc=304, val=0*** [ 819.492057] Lustre: *** cfs_fail_loc=304, val=0*** [ 835.497394] Lustre: *** cfs_fail_loc=304, val=0*** [ 839.666133] LustreError: 6916:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 192.168.201.55@tcp ns: mdt-lustre-MDT0000_UUID lock: ffff88009ecf0d80/0xf24e407167df6669 lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.201.55@tcp remote: 0xdb2b24a01544a81 expref: 17 pid: 6924 timeout: 838 lvb_type: 0 [ 844.818221] Lustre: DEBUG MARKER: == recovery-small test 19b: test expired_lock_main on ost (2867) ========================================================== 03:36:00 (1713425760) [ 877.978233] Lustre: *** cfs_fail_loc=304, val=0*** [ 877.981368] Lustre: Skipped 4 previous similar messages [ 941.991764] Lustre: *** cfs_fail_loc=304, val=0*** [ 941.993300] Lustre: Skipped 7 previous similar messages [ 945.650031] LustreError: 6916:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.201.55@tcp ns: filter-lustre-OST0000_UUID lock: ffff880077108480/0xf24e407167df6964 lrc: 3/0,0 mode: PW/PW res: [0x280000401:0xd:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 192.168.201.55@tcp remote: 0xdb2b24a01544c33 expref: 6 pid: 10358 timeout: 945 lvb_type: 0 [ 949.335390] Lustre: DEBUG MARKER: == recovery-small test 19c: check reconnect and lock resend do not trigger expired_lock_main ========================================================== 03:37:44 (1713425864) [ 959.827841] Lustre: DEBUG MARKER: == recovery-small test 20a: ldlm_handle_enqueue error (should return error) ========================================================== 03:37:55 
(1713425875) [ 963.106331] Lustre: DEBUG MARKER: == recovery-small test 20b: ldlm_handle_enqueue error (should return error) ========================================================== 03:37:58 (1713425878) [ 966.333969] Lustre: DEBUG MARKER: == recovery-small test 21a: drop close request while close and open are both in flight ========================================================== 03:38:01 (1713425881) [ 966.620607] LustreError: 8781:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout id 129 sleeping for 5000ms [ 967.922960] LustreError: 8781:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout interrupted [ 968.068498] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 987.325841] Lustre: DEBUG MARKER: == recovery-small test 21b: drop open request while close and open are both in flight ========================================================== 03:38:22 (1713425902) [ 1133.106825] Lustre: DEBUG MARKER: == recovery-small test 21c: drop both request while close and open are both in flight ========================================================== 03:40:48 (1713426048) [ 1157.644238] Lustre: DEBUG MARKER: == recovery-small test 21d: drop close reply while close and open are both in flight ========================================================== 03:41:13 (1713426073) [ 1157.999046] LustreError: 698:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout id 129 sleeping for 5000ms [ 1159.303004] LustreError: 698:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout interrupted [ 1159.579459] Lustre: *** cfs_fail_loc=122, val=2147483648*** [ 1159.583331] LustreError: 15068:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009ee2c700 x1796656270038336/t4294967536(0) o35->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:396/0 lens 392/456 e 0 to 0 dl 1713426086 ref 1 fl Interpret:/200/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 1159.601603] LustreError: 15068:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 1175.583390] Lustre: 15068:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880091ae9c50 x1796656270038336/t4294967536(0) o35->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:412/0 lens 392/456 e 0 to 0 dl 1713426102 ref 1 fl Interpret:/202/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 1180.096388] Lustre: DEBUG MARKER: == recovery-small test 21e: drop open reply while close and open are both in flight ========================================================== 03:41:35 (1713426095) [ 1180.458415] LustreError: 8781:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800b40ced80 x1796656270043008/t4294967553(0) o36->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:542/0 lens 488/456 e 0 to 0 dl 1713426232 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1321.478297] Lustre: lustre-MDT0000: Client fa16e676-c361-4e4f-a826-6f09f8cf22b4 (at 192.168.201.55@tcp) reconnecting [ 1321.481222] Lustre: Skipped 21 previous similar messages [ 1321.491316] Lustre: 6926:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009e2a0000 x1796656270043008/t4294967553(0) o36->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:683/0 lens 488/3152 e 0 to 0 dl 1713426373 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1323.918192] Lustre: DEBUG MARKER: == recovery-small test 21f: drop both reply while close and open are both in flight ========================================================== 03:43:59 (1713426239) [ 1324.367267] Lustre: *** 
cfs_fail_loc=119, val=2147483648*** [ 1324.370372] Lustre: Skipped 1 previous similar message [ 1324.373171] LustreError: 6926:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009e2a1f80 x1796656270055488/t4294967572(0) o36->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:686/0 lens 488/456 e 0 to 0 dl 1713426376 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1342.017099] Lustre: 6924:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009e197800 x1796656270055488/t4294967572(0) o36->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:704/0 lens 488/3152 e 0 to 0 dl 1713426394 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1342.030506] Lustre: 6924:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 1346.928703] Lustre: DEBUG MARKER: == recovery-small test 21g: drop open reply and close request while close and open are both in flight ========================================================== 03:44:22 (1713426262) [ 1347.384481] LustreError: 6924:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009e3e3100 x1796656270061120/t4294967591(0) o36->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:709/0 lens 488/456 e 0 to 0 dl 1713426399 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1347.400131] LustreError: 6924:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 1349.031138] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 1349.033699] Lustre: Skipped 3 previous similar messages [ 1365.034281] Lustre: 11066:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009eecd180 x1796656270061120/t4294967591(0) o36->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:727/0 lens 488/3152 e 0 to 0 dl 1713426417 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1369.893172] Lustre: DEBUG MARKER: == recovery-small test 21h: drop open request and close reply while close and open are both in flight ========================================================== 03:44:45 (1713426285) [ 1392.779701] Lustre: DEBUG MARKER: == recovery-small test 22: drop close request and do mknod ========================================================== 03:45:08 (1713426308) [ 1412.844438] Lustre: DEBUG MARKER: == recovery-small test 23: client hang when close a file after mds crash ========================================================== 03:45:28 (1713426328) [ 1419.203059] Lustre: Failing over lustre-MDT0000 [ 1419.286984] Lustre: server umount lustre-MDT0000 complete [ 1421.475223] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1421.477007] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1421.492577] Lustre: Skipped 3 previous similar messages [ 1424.168651] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.201.55@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1424.176392] LustreError: Skipped 3 previous similar messages [ 1426.483541] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
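The test 21 family above exercises the MDT reply-reconstruction path: a cfs_fail_loc fail point makes the server drop an open or close reply ("dropping reply"), the client resends after reconnect, and mdt_req_from_lrd() restores the original transaction number from the last_rcvd records instead of re-executing the operation. A minimal sketch of driving the same fail point by hand, assuming a client mount at /mnt/lustre (hypothetical path); 0x122 is the value logged when the close reply was dropped, and OR-ing in 0x80000000 makes the fail point one-shot, which is why the log reports val=2147483648:

    # on the MDS: arm a one-shot fail point that drops the next close reply
    lctl set_param fail_loc=0x80000122
    # on the client: open+close a file so the reply is dropped, then resent
    # (the log shows job:'touch.0' for exactly this pattern)
    touch /mnt/lustre/tfile
    # on the MDS: disarm the fail point
    lctl set_param fail_loc=0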
[ 1426.490977] LustreError: Skipped 3 previous similar messages [ 1429.175263] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.201.55@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1431.766897] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1431.826385] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1431.948664] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1431.970174] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1433.138650] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1434.182255] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1436.951812] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 1436.969214] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. [ 1436.996408] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:21 to 0x2c0000401:65) [ 1436.996447] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:23 to 0x280000401:65) [ 1437.888734] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1438.504171] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1443.593862] Lustre: DEBUG MARKER: == recovery-small test 24a: fsync error (should return error) ========================================================== 03:45:59 (1713426359) [ 1444.076499] Lustre: 14773:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting fa16e676-c361-4e4f-a826-6f09f8cf22b4 at administrative request [ 1447.809111] Lustre: DEBUG MARKER: == recovery-small test 24b: test dirty page discard due to client eviction ========================================================== 03:46:03 (1713426363) [ 1448.263246] Lustre: 15489:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting fa16e676-c361-4e4f-a826-6f09f8cf22b4 at administrative request [ 1452.140425] Lustre: DEBUG MARKER: == recovery-small test 26a: evict dead exports =========== 03:46:07 (1713426367) [ 1452.587573] Lustre: DEBUG MARKER: SKIP: recovery-small test_26a mgs and ost1 are at the same node [ 1454.873573] Lustre: DEBUG MARKER: == recovery-small test 26b: evict dead exports =========== 03:46:10 (1713426370) [ 1455.450424] Lustre: DEBUG MARKER: SKIP: recovery-small test_26b mgs and ost1 are at the same node [ 1457.826523] Lustre: DEBUG MARKER: == recovery-small test 27: fail LOV while using OSC's ==== 03:46:13 (1713426373) [ 1459.464606] Lustre: Failing over lustre-MDT0000 [ 1459.573101] Lustre: server umount lustre-MDT0000 complete [ 1461.987718] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1461.989060] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
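The "evicting ... at administrative request" lines above come from obd_export_evict_by_uuid(), which tests 24a/24b reach by writing the client's UUID to the target's evict_client parameter. A hedged sketch using the UUID from this log; the exact parameter path is an assumption here and varies by target type (obdfilter.*.evict_client on an OST, mdt.*.evict_client on an MDT):

    # on the OSS: forcibly evict one client export by UUID
    lctl set_param obdfilter.lustre-OST0000.evict_client=fa16e676-c361-4e4f-a826-6f09f8cf22b4
    # confirm the export count dropped / the client is reconnecting
    lctl get_param obdfilter.lustre-OST0000.num_exports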
[ 1461.989063] LustreError: Skipped 4 previous similar messages [ 1462.006338] Lustre: Skipped 3 previous similar messages [ 1472.002517] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1472.006301] LustreError: Skipped 12 previous similar messages [ 1472.037874] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1472.077113] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1472.177171] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1472.207930] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1472.950327] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1474.245001] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1477.172744] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 1477.174999] Lustre: Skipped 3 previous similar messages [ 1477.180992] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. [ 1477.185276] Lustre: 9217:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009efb7480 x1796656270170944/t8589935208(0) o36->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:714/0 lens 504/2888 e 0 to 0 dl 1713426404 ref 1 fl Interpret:/202/0 rc 0/0 job:'writemany.0' uid:0 gid:0 [ 1477.191784] Lustre: 9217:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 1477.201597] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:166 to 0x2c0000401:193) [ 1477.201626] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:166 to 0x280000401:193) [ 1565.583063] Lustre: Failing over lustre-MDT0000 [ 1565.707104] Lustre: server umount lustre-MDT0000 complete [ 1567.331256] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1567.332672] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1567.332676] LustreError: Skipped 1 previous similar message [ 1567.349405] Lustre: Skipped 3 previous similar messages [ 1577.954493] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1577.990121] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1578.087303] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1578.105416] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1579.040539] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1579.412516] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1583.092673] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 1583.094731] Lustre: Skipped 3 previous similar messages [ 1583.100814] Lustre: lustre-MDT0000: Recovery over after 0:04, of 2 clients 2 recovered and 0 were evicted. [ 1583.104611] Lustre: 698:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012df05f80 x1796656275628416/t12884937087(0) o36->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:65/0 lens 512/2888 e 0 to 0 dl 1713426510 ref 1 fl Interpret:/202/0 rc 0/0 job:'writemany.0' uid:0 gid:0 [ 1583.119014] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:6030 to 0x2c0000401:6049) [ 1583.119603] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:6030 to 0x280000401:6049) [ 1586.183820] Lustre: DEBUG MARKER: == recovery-small test 28: handle error adding new clients (bug 6086) ========================================================== 03:48:21 (1713426501) [ 1602.263039] Lustre: 6926:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713426502/real 1713426502] req@ffff88009f6aea00 x1796656276857024/t0(0) o104->lustre-MDT0000@192.168.201.55@tcp:15/16 lens 328/224 e 0 to 1 dl 1713426518 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 1602.272939] Lustre: 6926:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 4 previous similar messages [ 1604.295776] Lustre: Failing over lustre-MDT0000 [ 1604.366775] Lustre: server umount lustre-MDT0000 complete [ 1604.454615] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.201.55@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1604.461092] LustreError: Skipped 12 previous similar messages [ 1608.130983] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1608.137855] Lustre: Skipped 2 previous similar messages [ 1616.577926] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1616.620554] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1616.680856] Lustre: *** cfs_fail_loc=12f, val=0*** [ 1616.683992] LustreError: 8074:0:(tgt_lastrcvd.c:1071:tgt_client_new()) lustre-MDT0001: no room for 0 clients - fix LR_MAX_CLIENTS [ 1616.691694] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_connect to node 0@lo failed: rc = -75 [ 1616.706212] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1616.720082] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1617.609331] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1619.477355] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1621.718101] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 1621.722188] Lustre: Skipped 3 previous similar messages [ 1621.730713] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. [ 1621.748523] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:6030 to 0x2c0000401:6081) [ 1621.748543] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:6051 to 0x280000401:6081) [ 1622.393909] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1622.826463] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1627.588107] Lustre: DEBUG MARKER: == recovery-small test 29a: error adding new clients doesn't cause LBUG (bug 22273) ========================================================== 03:49:03 (1713426543) [ 1628.374097] Lustre: Failing over lustre-MDT0000 [ 1628.451388] Lustre: server umount lustre-MDT0000 complete [ 1630.915884] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1630.953565] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1631.015997] Lustre: *** cfs_fail_loc=711, val=0*** [ 1631.017113] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1631.017115] Lustre: Skipped 1 previous similar message [ 1631.055688] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1631.071369] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1631.072778] Lustre: lustre-MDT0000: Aborting client recovery [ 1631.072783] LustreError: 27528:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1631.080335] Lustre: 27557:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1636.068374] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 1636.071178] Lustre: Skipped 3 previous similar messages [ 1636.078925] Lustre: 27557:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client lustre-MDT0001-mdtlov_UUID@0@lo [ 1636.082955] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 1636.085924] LustreError: 27557:0:(ldlm_lib.c:1844:abort_lock_replay_queue()) @@@ aborted: req@ffff88009c905c00 x1796656276878400/t0(0) o101->lustre-MDT0001-mdtlov_UUID@0@lo:118/0 lens 328/0 e 0 to 0 dl 1713426563 ref 1 fl Complete:/240/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' uid:0 gid:0 [ 1636.095994] Lustre: 27557:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1636.096009] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation ldlm_enqueue to node 0@lo failed: rc = -107 [ 1636.096478] Lustre: lustre-MDT0000: Denying connection for new client lustre-MDT0001-mdtlov_UUID (at 0@lo), waiting for 2 known clients (1 recovered, 0 in progress, and 1 evicted) already passed deadline 27:15 [ 1636.111268] Lustre: lustre-MDT0000-osd: cancel update llog [0x200000400:0x1:0x0] [ 1636.118197] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000401:0x1:0x0] [ 1636.143235] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:6051 to 0x280000401:6113) [ 1636.143648] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:6030 to 0x2c0000401:6113) [ 1636.944767] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1641.075536] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
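Tests 29a/29b abort recovery deliberately: the target comes back, skips the recovery window ("Aborting client recovery"), disconnects the stale exports, and any peer still in the replay queue fails with -107 and the 167-0 eviction notice seen above. The same abort can be issued by hand once a target has remounted and entered recovery (a sketch; the device name mirrors this log):

    # on the server: stop waiting for clients and evict stale exports
    lctl --device lustre-MDT0000 abort_recovery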
[ 1647.897383] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 [ 1647.954556] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 1651.385911] Lustre: DEBUG MARKER: == recovery-small test 29b: error adding new clients doesn't cause LBUG (bug 22273) ========================================================== 03:49:26 (1713426566) [ 1652.155510] Lustre: Failing over lustre-OST0000 [ 1652.173079] Lustre: server umount lustre-OST0000 complete [ 1654.355556] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 1654.358798] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 1654.410450] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1654.413990] Lustre: Skipped 4 previous similar messages [ 1654.416229] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1654.422175] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 1654.422308] Lustre: lustre-OST0000: Aborting recovery [ 1654.422312] LustreError: 29833:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery [ 1654.430651] Lustre: Skipped 2 previous similar messages [ 1654.432470] Lustre: 29846:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1654.436252] Lustre: 29846:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 1 previous similar message [ 1654.438640] Lustre: 29846:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-OST0000: disconnect stale client fa16e676-c361-4e4f-a826-6f09f8cf22b4@ [ 1654.441985] Lustre: lustre-OST0000: disconnecting 3 stale clients [ 1654.445529] LustreError: 29846:0:(ofd_obd.c:1315:ofd_iocontrol()) lustre-OST0000: iocontrol from 'tgt_recover_0' cmd=c00866c1 _IOWR('f', 193, 8) unrecognized: rc = -25 [ 1654.534767] Lustre: *** cfs_fail_loc=711, val=0*** [ 1655.643466] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1656.324840] LustreError: 167-0: lustre-OST0000-osc-MDT0000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. [ 1656.329970] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 1656.334023] Lustre: Skipped 4 previous similar messages [ 1668.614716] Lustre: DEBUG MARKER: == recovery-small test 50: failover MDS under load ======= 03:49:44 (1713426584) [ 1679.381619] Lustre: Failing over lustre-MDT0000 [ 1679.459346] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1679.461328] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1679.461330] LustreError: Skipped 11 previous similar messages [ 1679.468409] Lustre: Skipped 2 previous similar messages [ 1679.473348] Lustre: server umount lustre-MDT0000 complete [ 1691.548428] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1691.585399] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1691.685712] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1691.711300] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1691.713778] Lustre: Skipped 2 previous similar messages [ 1692.511783] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1694.597120] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1696.692612] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 1696.699025] Lustre: lustre-MDT0000: Recovery over after 0:02, of 2 clients 2 recovered and 0 were evicted. [ 1696.716966] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:6886 to 0x280000401:6913) [ 1696.717137] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:6886 to 0x2c0000401:6913) [ 1697.351334] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1697.772281] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1759.547886] Lustre: Failing over lustre-MDT0000 [ 1759.689784] Lustre: server umount lustre-MDT0000 complete [ 1761.794784] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1761.800207] Lustre: Skipped 3 previous similar messages [ 1771.685476] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1771.718865] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1771.812084] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1771.828474] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1772.641313] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1774.724971] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1776.819915] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 1776.822382] Lustre: Skipped 3 previous similar messages [ 1776.829375] Lustre: lustre-MDT0000: Recovery over after 0:02, of 2 clients 2 recovered and 0 were evicted. 
[ 1776.833888] Lustre: 6926:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009cf18380 x1796656280509120/t30064797704(0) o36->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:298/0 lens 512/2888 e 0 to 0 dl 1713426743 ref 1 fl Interpret:/202/0 rc 0/0 job:'writemany.0' uid:0 gid:0 [ 1776.847241] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:11333 to 0x280000401:11361) [ 1776.847248] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:11332 to 0x2c0000401:11361) [ 1777.461322] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1777.920484] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1839.664131] Lustre: Failing over lustre-MDT0000 [ 1839.748065] LustreError: 3483:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8800a4f65180 x1796656279080768/t0(0) o6->lustre-OST0000-osc-MDT0000@0@lo:28/4 lens 544/432 e 0 to 0 dl 0 ref 1 fl Rpc:QU/200/ffffffff rc 0/-1 job:'osp-syn-0-0.0' uid:0 gid:0 [ 1839.826980] Lustre: server umount lustre-MDT0000 complete [ 1839.828715] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.201.55@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1839.828716] LustreError: Skipped 23 previous similar messages [ 1841.938671] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1841.943304] Lustre: Skipped 4 previous similar messages [ 1851.791591] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1851.830990] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1851.925615] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1851.944101] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1852.814586] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1854.852844] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1856.932081] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 1856.934213] Lustre: Skipped 3 previous similar messages [ 1856.940272] Lustre: lustre-MDT0000: Recovery over after 0:02, of 2 clients 2 recovered and 0 were evicted. 
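Test 50 repeats the failover cycle under client load, and every pass above follows the same shape: "Failing over", server umount, LDISKFS remount, a 60-180s recovery window, "Will be in recovery for at least 1:00", client reconnect with transno restore from last_rcvd, then "Recovery over" and orphan cleanup on the OSTs. Recovery progress can be watched from the server while this runs (a sketch; the parameter is standard, the target name is taken from this log):

    # poll the recovery state of the restarted MDT
    lctl get_param mdt.lustre-MDT0000.recovery_status
    # status moves RECOVERING -> COMPLETE, with connected/evicted client counts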
[ 1856.955119] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:15882 to 0x280000401:15905) [ 1856.955154] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:15883 to 0x2c0000401:15937) [ 1857.540512] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1857.957558] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1882.102144] Lustre: DEBUG MARKER: == recovery-small test 51: failover MDS during recovery == 03:53:17 (1713426797) [ 1883.839785] Lustre: Failing over lustre-MDT0000 [ 1883.916967] Lustre: server umount lustre-MDT0000 complete [ 1895.860992] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1896.841346] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1897.691736] Lustre: DEBUG MARKER: test_51: failover in 1 sec [ 1899.183329] Lustre: Failing over lustre-MDT0000 [ 1899.197575] LustreError: 4530:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1899.200675] Lustre: 3947:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1899.203600] Lustre: 3947:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1899.205934] Lustre: lustre-MDT0000-osd: cancel update llog [0x200002b10:0x1:0x0] [ 1899.211537] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000405:0x1:0x0] [ 1899.215118] LustreError: 3947:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8800a8275c00 x1796656279591488/t0(0) o700->lustre-MDT0001-osp-MDT0000@0@lo:30/10 lens 264/248 e 0 to 0 dl 0 ref 2 fl Rpc:QU/200/ffffffff rc 0/-1 job:'tgt_recover_0.0' uid:0 gid:0 [ 1899.219982] LustreError: 3947:0:(fid_request.c:233:seq_client_alloc_seq()) cli-cli-lustre-MDT0001-osp-MDT0000: Cannot allocate new meta-sequence: rc = -5 [ 1899.222631] LustreError: 3947:0:(fid_request.c:335:seq_client_alloc_fid()) cli-cli-lustre-MDT0001-osp-MDT0000: Can't allocate new sequence: rc = -5 [ 1899.237779] Lustre: 3947:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 1899.294069] Lustre: server umount lustre-MDT0000 complete [ 1911.252739] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1912.144299] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1912.968280] Lustre: DEBUG MARKER: test_51: failover in 5 sec [ 1915.956783] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1916.376759] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 1916.393310] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:17747 to 0x280000401:17793) [ 1916.396764] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:17779 to 0x2c0000401:17825) [ 1918.486978] Lustre: Failing over lustre-MDT0000 [ 1918.519682] LustreError: 5677:0:(ldlm_resource.c:1128:ldlm_resource_complain()) mdt-lustre-MDT0000_UUID: namespace resource [0x200000405:0x87d7:0x0].0xba8107b7 (ffff8800b18ba100) refcount nonzero (2) after lock cleanup; forcing cleanup. 
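The "namespace resource ... refcount nonzero (2) after lock cleanup; forcing cleanup" warning during the test 51 umount means a DLM resource was still referenced while the MDT's lock namespace was being torn down, which is expected when failover interrupts recovery mid-flight. The server-side lock namespaces can be inspected with standard ldlm parameters (a sketch):

    # count locks and resources per server-side namespace
    lctl get_param ldlm.namespaces.*.lock_count
    lctl get_param ldlm.namespaces.*.resource_count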
[ 1918.563055] Lustre: server umount lustre-MDT0000 complete [ 1930.569957] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1930.603929] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1930.606578] LustreError: Skipped 2 previous similar messages [ 1931.542159] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1932.390306] Lustre: DEBUG MARKER: test_51: failover in 10 sec [ 1935.750301] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:17951 to 0x280000401:17985) [ 1935.750310] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:17983 to 0x2c0000401:18017) [ 1942.914699] Lustre: Failing over lustre-MDT0000 [ 1942.925092] Lustre: lustre-MDT0000: Not available for connect from 192.168.201.55@tcp (stopping) [ 1943.026681] Lustre: server umount lustre-MDT0000 complete [ 1954.973961] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1955.896429] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1956.726826] Lustre: DEBUG MARKER: test_51: failover in 20 sec [ 1960.101584] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:18551 to 0x2c0000401:18593) [ 1960.102226] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:18518 to 0x280000401:18561) [ 1977.233521] Lustre: Failing over lustre-MDT0000 [ 1977.372704] Lustre: server umount lustre-MDT0000 complete [ 1980.115024] Lustre: lustre-MDT0000-lwp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1980.119546] Lustre: Skipped 14 previous similar messages [ 1989.379605] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1989.505119] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1989.507884] Lustre: Skipped 4 previous similar messages [ 1989.525632] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1989.527278] Lustre: Skipped 6 previous similar messages [ 1990.340646] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 1991.012396] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1991.015782] Lustre: Skipped 2 previous similar messages [ 1991.186326] Lustre: DEBUG MARKER: test_51: failover in 25 sec [ 1994.516690] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 1994.520835] Lustre: Skipped 15 previous similar messages [ 1994.528094] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. 
[ 1994.532325] Lustre: Skipped 2 previous similar messages [ 1994.546167] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:19808 to 0x280000401:19841) [ 1994.546172] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:19840 to 0x2c0000401:19873) [ 2016.761971] Lustre: Failing over lustre-MDT0000 [ 2016.874396] Lustre: server umount lustre-MDT0000 complete [ 2028.939125] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2029.911512] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 2030.795991] Lustre: DEBUG MARKER: test_51: failover in 30 sec [ 2034.103076] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:21369 to 0x280000401:21409) [ 2034.107237] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:21402 to 0x2c0000401:21441) [ 2061.332160] Lustre: Failing over lustre-MDT0000 [ 2061.445590] Lustre: server umount lustre-MDT0000 complete [ 2073.599376] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2073.640787] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2073.645985] LustreError: Skipped 3 previous similar messages [ 2074.673705] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 2078.774508] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:23357 to 0x280000401:23393) [ 2078.774526] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:23388 to 0x2c0000401:23425) [ 2098.005724] Lustre: DEBUG MARKER: == recovery-small test 52: failover OST under load ======= 03:56:53 (1713427013) [ 2108.778787] Lustre: Failing over lustre-OST0000 [ 2108.793337] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.201.55@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2108.797627] LustreError: Skipped 87 previous similar messages [ 2108.807977] Lustre: server umount lustre-OST0000 complete [ 2109.154255] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_create to node 0@lo failed: rc = -107 [ 2120.685512] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2120.689070] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 2121.220621] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 2121.224504] Lustre: Skipped 2 previous similar messages [ 2121.876583] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 2124.147280] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2124.539596] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2440.158090] Lustre: Failing over lustre-OST0000 [ 2440.668462] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_create to node 0@lo failed: rc = -107 [ 2440.672147] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2440.677341] Lustre: Skipped 11 previous similar messages [ 2440.679483] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2441.732868] Lustre: lustre-OST0000: Not available for connect from 192.168.201.55@tcp (stopping) [ 2441.736485] Lustre: Skipped 1 previous similar message [ 2442.181361] Lustre: server umount lustre-OST0000 complete [ 2454.239541] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2454.243570] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 2454.307509] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2454.310663] Lustre: Skipped 3 previous similar messages [ 2454.315192] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 2454.319409] Lustre: Skipped 3 previous similar messages [ 2455.586016] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 2455.675533] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 2456.203001] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 2456.203143] Lustre: lustre-OST0000: Recovery over after 0:01, of 3 clients 3 recovered and 0 were evicted. [ 2456.203145] Lustre: Skipped 3 previous similar messages [ 2456.213708] Lustre: Skipped 13 previous similar messages [ 2458.183612] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2458.645346] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2771.187334] Lustre: Failing over lustre-OST0000 [ 2771.207292] Lustre: lustre-OST0000: Not available for connect from 192.168.201.55@tcp (stopping) [ 2771.714468] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 2773.209885] Lustre: server umount lustre-OST0000 complete [ 2774.819013] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2774.823388] LustreError: Skipped 15 previous similar messages [ 2785.269645] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2785.272640] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 2785.326292] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 2786.560686] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 2787.162226] Lustre: lustre-OST0000: Recovery over after 0:01, of 3 clients 3 recovered and 0 were evicted. [ 2788.867754] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2789.285841] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3009.041540] Lustre: lustre-OST0001-osc-MDT0001: update sequence from 0x2c0000400 to 0x2c0000402 [ 3062.618182] Lustre: lustre-OST0000-osc-MDT0001: update sequence from 0x280000400 to 0x280000bd0 [ 3064.283659] Lustre: DEBUG MARKER: == recovery-small test 53a: touch: drop rep ============== 04:12:59 (1713427979) [ 3064.615628] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3064.617763] Lustre: Skipped 3 previous similar messages [ 3064.618792] LustreError: 6925:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800934bdc00 x1796656355573376/t0(0) o101->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:37/0 lens 576/688 e 0 to 0 dl 1713427992 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3064.624562] LustreError: 6925:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 3080.627030] Lustre: lustre-MDT0000: Client fa16e676-c361-4e4f-a826-6f09f8cf22b4 (at 192.168.201.55@tcp) reconnecting [ 3080.631460] Lustre: Skipped 4 previous similar messages [ 3083.767107] Lustre: DEBUG MARKER: == recovery-small test 53b: touch: drop rep ============== 04:13:19 (1713427999) [ 3084.116954] LustreError: 6926:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880074ef4380 x1796656355579712/t0(0) o101->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:56/0 lens 576/688 e 0 to 0 dl 1713428011 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3103.277954] Lustre: DEBUG MARKER: == recovery-small test 53c: touch: drop rep ============== 04:13:38 (1713428018) [ 3103.597770] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3103.599885] Lustre: Skipped 1 previous similar message [ 3103.600913] LustreError: 6925:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009f6b3100 x1796656355581568/t64424516842(0) o101->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:76/0 lens 664/664 e 0 to 0 dl 1713428031 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3119.597797] Lustre: 698:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012bb1d880 x1796656355581568/t64424516842(0) o101->fa16e676-c361-4e4f-a826-6f09f8cf22b4@192.168.201.55@tcp:92/0 lens 664/3488 e 0 to 0 dl 1713428047 ref 1 fl Interpret:H/202/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3122.768344] Lustre: DEBUG MARKER: == recovery-small test 54: back in time ================== 04:13:58 (1713428038) [ 3133.365141] Lustre: Failing over lustre-MDT0000 [ 3133.424471] Lustre: server umount lustre-MDT0000 complete [ 3135.538520] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3135.541729] LustreError: Skipped 1 previous similar message [ 3135.543509] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 
3135.547809] Lustre: Skipped 3 previous similar messages [ 3145.357155] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3145.383213] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3145.453893] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3145.466362] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3145.467982] Lustre: Skipped 1 previous similar message [ 3146.166177] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3147.877063] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 3147.879536] Lustre: Skipped 1 previous similar message [ 3150.468604] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 3150.470502] Lustre: Skipped 3 previous similar messages [ 3150.488452] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:24622 to 0x280000401:24641) [ 3150.488469] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:24655 to 0x2c0000401:24673) [ 3151.020145] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3151.390523] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3155.511686] Lustre: DEBUG MARKER: == recovery-small test 55: ost_brw_read/write drops timed-out read/write request ========================================================== 04:14:31 (1713428071) [ 3160.388074] Lustre: *** cfs_fail_loc=21d, val=0*** [ 3160.389905] Lustre: Skipped 3 previous similar messages [ 3160.391804] LustreError: 19779:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.201.55@tcp because locking object 0x280000bd0:3 took 0 seconds (limit was 11). [ 3160.398408] Lustre: lustre-OST0000: Bulk IO write error with fa16e676-c361-4e4f-a826-6f09f8cf22b4 (at 192.168.201.55@tcp), client will retry: rc = -110 [ 3176.703206] Lustre: lustre-OST0000: Client fa16e676-c361-4e4f-a826-6f09f8cf22b4 (at 192.168.201.55@tcp) reconnecting [ 3176.706909] Lustre: Skipped 2 previous similar messages [ 3176.710803] LustreError: 9200:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.201.55@tcp because locking object 0x280000bd0:3 took 0 seconds (limit was 11). [ 3176.710817] Lustre: lustre-OST0000: Bulk IO write error with fa16e676-c361-4e4f-a826-6f09f8cf22b4 (at 192.168.201.55@tcp), client will retry: rc = -110 [ 3176.710818] Lustre: Skipped 8 previous similar messages [ 3176.720123] LustreError: 9200:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 16 previous similar messages [ 3192.713024] LustreError: 9200:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.201.55@tcp because locking object 0x280000bd0:3 took 0 seconds (limit was 11). 
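Test 55 arms fail_loc=0x21d to stall bulk I/O on the OST until the per-request deadline passes; the server then drops the timed-out write ("took 0 seconds (limit was 11)" reports the time spent against the remaining service time) and the client retries with rc -110 (-ETIMEDOUT). The deadline derives from the base RPC timeout and the adaptive-timeout bounds, which are readable as standard tunables (a sketch; on older releases these may live under /proc/sys/lustre instead of lctl):

    # base obd timeout and adaptive-timeout ceiling
    lctl get_param timeout
    lctl get_param at_max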
[ 3192.713167] Lustre: lustre-OST0000: Bulk IO write error with fa16e676-c361-4e4f-a826-6f09f8cf22b4 (at 192.168.201.55@tcp), client will retry: rc = -110 [ 3192.713168] Lustre: Skipped 8 previous similar messages [ 3192.724869] LustreError: 9200:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 8 previous similar messages [ 3208.719011] LustreError: 9202:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.201.55@tcp because locking object 0x280000bd0:3 took 0 seconds (limit was 11). [ 3208.719033] Lustre: lustre-OST0000: Bulk IO write error with fa16e676-c361-4e4f-a826-6f09f8cf22b4 (at 192.168.201.55@tcp), client will retry: rc = -110 [ 3208.719034] Lustre: Skipped 8 previous similar messages [ 3208.726903] LustreError: 9202:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 8 previous similar messages [ 3224.698692] Lustre: *** cfs_fail_loc=21d, val=0*** [ 3224.698789] LustreError: 17122:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.201.55@tcp because locking object 0x280000bd0:2 took 0 seconds (limit was 11). [ 3224.698791] LustreError: 17122:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 1 previous similar message [ 3224.698802] Lustre: lustre-OST0000: Bulk IO write error with fa16e676-c361-4e4f-a826-6f09f8cf22b4 (at 192.168.201.55@tcp), client will retry: rc = -110 [ 3224.698803] Lustre: Skipped 9 previous similar messages [ 3224.708843] Lustre: Skipped 45 previous similar messages [ 3247.536474] Lustre: DEBUG MARKER: == recovery-small test 56: do not fail on getattr resend ========================================================== 04:16:02 (1713428162) [ 3247.893967] LustreError: 6925:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout id 136 sleeping for 40000ms [ 3287.896977] LustreError: 6925:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout id 136 awake [ 3291.474042] Lustre: DEBUG MARKER: == recovery-small test 57: read procfs entries causes kernel crash ========================================================== 04:16:46 (1713428206) [ 3293.183977] Lustre: Failing over lustre-MDT0000 [ 3293.245522] Lustre: server umount lustre-MDT0000 complete [ 3295.478584] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3295.509214] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3295.598761] Lustre: lustre-MDT0000: Aborting client recovery [ 3295.600795] LustreError: 23953:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 3295.603825] Lustre: 23983:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 3295.606262] Lustre: 23983:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 3295.608130] Lustre: 23983:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client lustre-MDT0001-mdtlov_UUID@ [ 3295.611007] Lustre: 23983:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 2 previous similar messages [ 3295.613192] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 3295.616265] Lustre: lustre-MDT0000-osd: cancel update llog [0x200004a50:0x1:0x0] [ 3295.622511] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000405:0x1:0x0] [ 3295.643485] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:24655 to 0x2c0000401:24705) [ 3295.644287] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:24643 to 0x280000401:24673) [ 3296.424037] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3300.598154] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 3300.602296] LustreError: Skipped 1 previous similar message [ 3308.930704] Lustre: DEBUG MARKER: == recovery-small test 58: Eviction in the middle of open RPC reply processing ========================================================== 04:17:04 (1713428224) [ 3326.069077] Lustre: 11066:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713428226/real 1713428226] req@ffff8800a8384380 x1796656295780608/t0(0) o104->lustre-MDT0000@192.168.201.55@tcp:15/16 lens 328/224 e 0 to 1 dl 1713428242 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 3329.255572] Lustre: DEBUG MARKER: == recovery-small test 59: Read cancel race on client eviction ========================================================== 04:17:24 (1713428244) [ 3339.568583] LustreError: 17311:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.201.55@tcp) returned error from blocking AST (req@ffff88006e442680 x1796656295787136 status -107 rc -107), evict it ns: filter-lustre-OST0001_UUID lock: ffff880093449440/0xf24e407168b03dec lrc: 4/0,0 mode: PW/PW res: [0x2c0000401:0x6082:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 192.168.201.55@tcp remote: 0xdb2b24a01746545 expref: 5 pid: 18799 timeout: 3439 lvb_type: 0 [ 3339.581270] LustreError: 138-a: lustre-OST0001: A client on nid 192.168.201.55@tcp was evicted due to a lock blocking callback time out: rc -107 [ 3339.586335] LustreError: 6916:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 192.168.201.55@tcp ns: filter-lustre-OST0001_UUID lock: ffff880093449440/0xf24e407168b03dec lrc: 3/0,0 mode: PW/PW res: [0x2c0000401:0x6082:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 
192.168.201.55@tcp remote: 0xdb2b24a01746545 expref: 6 pid: 18799 timeout: 0 lvb_type: 0 [ 3339.595710] LustreError: 6916:0:(ldlm_lockd.c:261:expired_lock_main()) Skipped 1 previous similar message [ 3342.613500] Lustre: DEBUG MARKER: == recovery-small test 60: Add Changelog entries during MDS failover ========================================================== 04:17:38 (1713428258) [ 3342.649741] LustreError: 698:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.201.55@tcp) returned error from blocking AST (req@ffff8800a8387480 x1796656295788224 status -107 rc -107), evict it ns: mdt-lustre-MDT0000_UUID lock: ffff8800a8b54b40/0xf24e407168b03e08 lrc: 4/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.201.55@tcp remote: 0xdb2b24a01746553 expref: 6 pid: 9217 timeout: 3442 lvb_type: 0 [ 3342.657620] LustreError: 138-a: lustre-MDT0000: A client on nid 192.168.201.55@tcp was evicted due to a lock blocking callback time out: rc -107 [ 3342.659844] LustreError: 6916:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 192.168.201.55@tcp ns: mdt-lustre-MDT0000_UUID lock: ffff8800a8b54b40/0xf24e407168b03e08 lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.201.55@tcp remote: 0xdb2b24a01746553 expref: 7 pid: 9217 timeout: 0 lvb_type: 0 [ 3343.406477] Lustre: lustre-MDD0000: changelog on [ 3344.178139] Lustre: lustre-MDD0001: changelog on [ 3360.674284] Lustre: lustre-OST0000: haven't heard from client 30924ce3-09b3-46ba-acc3-3c949330c6c0 (at 192.168.201.55@tcp) in 32 seconds. I think it's dead, and I am evicting it. exp ffff88012bbb4000, cur 1713428277 expire 1713428247 last 1713428245 [ 3363.035235] Lustre: Failing over lustre-MDT0000 [ 3363.039841] Lustre: lustre-MDT0000: Not available for connect from 192.168.201.55@tcp (stopping) [ 3363.042985] Lustre: Skipped 3 previous similar messages [ 3363.105292] Lustre: server umount lustre-MDT0000 complete [ 3375.108497] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3375.137009] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3375.226806] Lustre: lustre-MDD0000: changelog on [ 3375.923738] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3380.212874] LustreError: 3482:0:(import.c:1314:ptlrpc_connect_interpret()) lustre-MDT0000_UUID: went back in time (transno 68719476743 was previously committed, server now claims 64424516848)! [ 3380.216027] LustreError: 3482:0:(import.c:1316:ptlrpc_connect_interpret()) For further information, see http://doc.lustre.org/lustre_manual.xhtml#went_back_in_time [ 3380.222043] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. 
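The "went back in time" warning above fires when a restarted MDT advertises a smaller last-committed transaction number than a client has already seen committed, usually meaning the target came back from an older copy of its state; the log itself points at the manual section of the same name. The recovery window that follows a failover can be watched from the server through the recovery_status parameter, a sketch assuming the MDT name from this log:

    # On the MDS: show the state of the current recovery window.
    # Output includes fields such as status (RECOVERING/COMPLETE),
    # time_remaining and the connected/completed client counts.
    lctl get_param mdt.lustre-MDT0000.recovery_status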
[ 3380.224628] Lustre: Skipped 1 previous similar message [ 3380.237464] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:25925 to 0x280000401:25953) [ 3380.237465] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:25957 to 0x2c0000401:25985) [ 3399.363915] Lustre: lustre-MDD0000: changelog off [ 3400.182601] Lustre: lustre-MDD0001: changelog off [ 3404.666461] Lustre: DEBUG MARKER: == recovery-small test 61: Verify to not reuse orphan objects - bug 17025 ========================================================== 04:18:40 (1713428320) [ 3406.885087] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3407.644353] Lustre: Failing over lustre-MDT0000 [ 3407.715085] Lustre: server umount lustre-MDT0000 complete [ 3409.468183] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.201.55@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3409.474125] LustreError: Skipped 29 previous similar messages [ 3411.515937] LDISKFS-fs (dm-0): recovery complete [ 3411.517115] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3411.637254] Lustre: lustre-MDT0000: Aborting client recovery [ 3411.638524] LustreError: 31278:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 3411.638874] Lustre: 31307:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 3411.638876] Lustre: 31307:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 3411.648322] Lustre: 31307:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client d7ac4824-6b85-4f89-a856-7b11a050da2d@ [ 3411.651466] Lustre: lustre-MDT0000: disconnecting 2 stale clients [ 3411.654283] Lustre: lustre-MDT0000-osd: cancel update llog [0x2000088d0:0x1:0x0] [ 3411.658569] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000407:0x1:0x0] [ 3411.677991] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:25925 to 0x280000401:25985) [ 3411.677993] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:25957 to 0x2c0000401:26017) [ 3412.408730] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3416.631011] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
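The "Aborting client recovery" sequence above is what a deliberate no-replay restart looks like: instead of waiting for its known clients to reconnect and replay, the target evicts their stale exports and cancels the pending update llogs. A hedged sketch of forcing that path, assuming an MDT backing device of /dev/mapper/mds1 and a mountpoint of /mnt/lustre-mds1 (both illustrative):

    # Mount the target with recovery disabled; prior clients are evicted.
    mount -t lustre -o abort_recov /dev/mapper/mds1 /mnt/lustre-mds1
    # Or abort a recovery window that is already in progress:
    lctl --device lustre-MDT0000 abort_recovery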
[ 3425.824958] Lustre: DEBUG MARKER: == recovery-small test 65: lock enqueue for destroyed export ========================================================== 04:19:01 (1713428341) [ 3426.187504] LustreError: 12019:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout id 31e sleeping for 6000ms [ 3426.190972] Lustre: *** cfs_fail_loc=31e, val=0*** [ 3426.193229] Lustre: Skipped 3 previous similar messages [ 3428.188078] LustreError: 17873:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout id 31e sleeping for 6000ms [ 3430.409202] Lustre: 32669:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting d7ac4824-6b85-4f89-a856-7b11a050da2d at administrative request [ 3430.413006] LustreError: 6915:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) cfs_fail_timeout id 31e sleeping for 4000ms [ 3432.189996] LustreError: 12019:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout id 31e awake [ 3432.191864] LustreError: 12019:0:(ldlm_lockd.c:1499:ldlm_handle_enqueue()) ### lock on destroyed export ffff88009efaf800 ns: filter-lustre-OST0000_UUID lock: ffff88008da37180/0xf24e407168b6810f lrc: 3/0,0 mode: --/PW res: [0x280000401:0x6583:0x0].0x0 rrc: 4 type: EXT [0->4095] (req 0->4095) gid 0 flags: 0x70000000020020 nid: 192.168.201.55@tcp remote: 0xdb2b24a017527af expref: 3 pid: 12019 timeout: 0 lvb_type: 0 [ 3432.689978] LustreError: 17873:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout interrupted [ 3442.201491] Lustre: lustre-OST0000: Client cbcd5791-0cf9-4903-b50a-0db09e5a0779 (at 192.168.201.55@tcp) reconnecting [ 3442.204123] Lustre: Skipped 6 previous similar messages [ 3446.057728] Lustre: DEBUG MARKER: == recovery-small test 66: lock enqueue re-send vs client eviction ========================================================== 04:19:21 (1713428361) [ 3446.555620] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3446.559316] LustreError: 9217:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88008c497b80 x1796656357723968/t0(0) o101->d7ac4824-6b85-4f89-a856-7b11a050da2d@192.168.201.55@tcp:462/0 lens 576/688 e 0 to 0 dl 1713428417 ref 1 fl Interpret:/200/0 rc 0/0 job:'stat.0' uid:0 gid:0 [ 3448.545129] LustreError: 6925:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout id 136 sleeping for 40000ms [ 3450.890655] Lustre: 1159:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting d7ac4824-6b85-4f89-a856-7b11a050da2d at administrative request [ 3451.248002] LustreError: 6925:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout interrupted [ 3451.255284] LustreError: 6925:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) Skipped 1 previous similar message [ 3455.257492] Lustre: DEBUG MARKER: == recovery-small test 67: connect vs import invalidate race ========================================================== 04:19:30 (1713428370) [ 3457.632972] Lustre: 1940:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting d7ac4824-6b85-4f89-a856-7b11a050da2d at administrative request [ 3471.943950] Lustre: DEBUG MARKER: == recovery-small test 100: IR: Make sure normal recovery still works w/o IR ========================================================== 04:19:47 (1713428387) [ 3473.303770] Lustre: Failing over lustre-OST0000 [ 3473.331303] Lustre: server umount lustre-OST0000 complete [ 3476.706457] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3485.448386] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3485.454555]
LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3487.190753] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3491.262043] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3491.664309] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3496.757395] Lustre: DEBUG MARKER: == recovery-small test 101a: IR: Make sure IR works w/o normal recovery ========================================================== 04:20:12 (1713428412) [ 3497.858170] Lustre: Failing over lustre-OST0000 [ 3497.876487] Lustre: server umount lustre-OST0000 complete [ 3509.917263] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3509.920148] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3509.972230] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 3511.102007] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3513.297270] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3513.649706] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3518.222516] Lustre: DEBUG MARKER: == recovery-small test 101b: IR: Make sure IR works w/o normal recovery and proceed EAGAIN ========================================================== 04:20:33 (1713428433) [ 3519.426721] Lustre: Failing over lustre-OST0000 [ 3519.443162] Lustre: server umount lustre-OST0000 complete [ 3531.634481] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3531.641855] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3531.748151] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 3531.757131] LustreError: 8021:0:(ofd_dev.c:651:ofd_prepare()) cfs_fail_timeout id 247 sleeping for 25000ms [ 3556.762009] LustreError: 8021:0:(ofd_dev.c:651:ofd_prepare()) cfs_fail_timeout id 247 awake [ 3557.912696] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3560.085144] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3560.447142] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3564.773397] Lustre: DEBUG MARKER: == recovery-small test 102: IR: New client gets updated nidtbl after MGS restart ========================================================== 04:21:20 (1713428480) [ 3565.669087] Lustre: Failing over lustre-OST0000 [ 3565.690881] Lustre: server umount lustre-OST0000 complete [ 3577.830461] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3577.835157] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3577.893073] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 3579.027793] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3581.239532] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3581.596686] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3583.903015] Lustre: Failing over lustre-MDT0000 [ 3583.955980] Lustre: server umount lustre-MDT0000 complete [ 3585.724307] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3585.754449] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3585.756678] LustreError: Skipped 1 previous similar message [ 3586.527302] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3587.443566] Lustre: Failing over lustre-OST0000 [ 3587.456950] Lustre: server umount lustre-OST0000 complete [ 3589.634507] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 3590.851128] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26019 to 0x2c0000401:26049) [ 3599.417488] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3599.420406] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3600.682238] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3601.329468] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:25989 to 0x280000401:26017) [ 3602.872773] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3607.147163] Lustre: DEBUG MARKER: == recovery-small test 103: IR: MDS can start w/o MGS and get updated nidtbl later ========================================================== 04:22:02 (1713428522) [ 3607.739229] Lustre: DEBUG MARKER: SKIP: recovery-small test_103 needs separate mgs and mds [ 3609.567971] Lustre: DEBUG MARKER: == recovery-small test 104: IR: ost can disable IR voluntarily ========================================================== 04:22:05 (1713428525) [ 3610.458484] Lustre: Failing over lustre-OST0000 [ 3610.474228] Lustre: server umount lustre-OST0000 complete [ 3612.549995] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3612.553175] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3613.775215] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3618.793925] Lustre: DEBUG MARKER: == recovery-small test 105: IR: NON IR clients support === 04:22:14 (1713428534) [ 3619.117086] Lustre: DEBUG MARKER: SKIP: recovery-small test_105 Needs multiple clients [ 3620.920504] Lustre: DEBUG MARKER: == recovery-small test 106: lightweight connection support ========================================================== 04:22:16 (1713428536) [ 3623.317957] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3623.799669] Lustre: Failing over lustre-MDT0000 [ 3623.864774] Lustre: server umount lustre-MDT0000 complete [ 3636.714745] LDISKFS-fs (dm-0): recovery complete [ 3636.716522] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3637.515234] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3641.811215] LustreError: 18789:0:(ldlm_lockd.c:968:ldlm_server_blocking_ast()) ### BUG 6063: lock collide during recovery ns: mdt-lustre-MDT0000_UUID lock: ffff880093664b40/0xf24e407168b68e0c lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x40200000000020 nid: 192.168.201.55@tcp remote: 0xdb2b24a01752ba6 expref: 7 pid: 11066 timeout: 0 lvb_type: 0 [ 3641.839242] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26051 to 0x2c0000401:26081) [ 3641.839269] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:25989 to 0x280000401:26049) [ 3645.178045] Lustre: DEBUG MARKER: == recovery-small test 107: drop reint reply, then restart MDT ========================================================== 04:22:40 (1713428560) [ 3645.428647] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 3645.430453] LustreError: 11066:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009053a680 x1796656357756800/t90194313220(0) o36->8a5f799f-2c06-4059-a72b-5e123bdfc164@192.168.201.55@tcp:661/0 lens 552/448 e 0 to 0 dl 1713428616 ref 1 fl Interpret:/200/0 rc 0/0 job:'mkdir.0' uid:0 gid:0 [ 3646.086308] Lustre: Failing over lustre-MDT0000 [ 3646.150579] Lustre: server umount lustre-MDT0000 complete [ 3658.033127] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3658.844427] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 3663.161882] Lustre: 6926:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a8384a80 x1796656357756800/t90194313220(0) o36->8a5f799f-2c06-4059-a72b-5e123bdfc164@192.168.201.55@tcp:679/0 lens 552/2880 e 0 to 0 dl 1713428634 ref 1 fl Interpret:/202/0 rc 0/0 job:'mkdir.0' uid:0 gid:0 [ 3663.172504] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26051 to 0x2c0000401:26113) [ 3663.172520] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:25989 to 0x280000401:26081) [ 3663.687497] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3664.039928] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3668.139168] Lustre: DEBUG MARKER: == recovery-small test 108: client eviction doesn't crash == 04:23:03 (1713428583) [ 3668.501860] Lustre: 21944:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 8a5f799f-2c06-4059-a72b-5e123bdfc164 at administrative request [ 3675.456781] Lustre: DEBUG MARKER: == recovery-small test 110a: create remote directory: drop client req ========================================================== 04:23:10 (1713428590) [ 3676.161580] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 3739.378763] Lustre: DEBUG MARKER: == recovery-small test 110b: create remote directory: drop Master rep ========================================================== 04:24:14 (1713428654) [ 3739.641943] LustreError: 9217:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a5191880 x1796656357770368/t4295384750(0) o36->8a5f799f-2c06-4059-a72b-5e123bdfc164@192.168.201.55@tcp:1/0 lens 560/536 e 0 to 0 dl 1713428711 ref 1 fl Interpret:/200/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 3799.637690] Lustre: lustre-MDT0001: Client 8a5f799f-2c06-4059-a72b-5e123bdfc164 (at 192.168.201.55@tcp) reconnecting [ 3799.640722] Lustre: Skipped 3 previous similar messages [ 3799.643851] Lustre: 6925:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009c96a680 x1796656357770368/t4295384750(0) o36->8a5f799f-2c06-4059-a72b-5e123bdfc164@192.168.201.55@tcp:61/0 lens 560/2880 e 0 to 0 dl 1713428771 ref 1 fl Interpret:/202/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 3802.867782] Lustre: DEBUG MARKER: == recovery-small test 110c: create remote directory: drop update rep on slave MDT ========================================================== 04:25:18 (1713428718) [ 3819.144043] Lustre: 8072:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713428719/real 1713428719] req@ffff88008abe2300 x1796656296274048/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 264/4320 e 0 to 1 dl 1713428735 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 3819.153979] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3819.160187] Lustre: Skipped 39 previous similar messages [ 3819.163620] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 3819.167628] Lustre: lustre-MDT0000-osp-MDT0001: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 3819.169536]
Lustre: Skipped 38 previous similar messages [ 3822.342601] Lustre: DEBUG MARKER: == recovery-small test 110d: remove remote directory: drop client req ========================================================== 04:25:37 (1713428737) [ 3886.033893] Lustre: DEBUG MARKER: == recovery-small test 110e: remove remote directory: drop master rep ========================================================== 04:26:41 (1713428801) [ 3886.352201] LustreError: 698:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009f212680 x1796656357785920/t4295384769(0) o36->8a5f799f-2c06-4059-a72b-5e123bdfc164@192.168.201.55@tcp:147/0 lens 496/456 e 0 to 0 dl 1713428857 ref 1 fl Interpret:/200/0 rc 0/0 job:'rm.0' uid:0 gid:0 [ 3886.356906] LustreError: 698:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 3946.342799] Lustre: 11066:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012e8bfb80 x1796656357785920/t4295384769(0) o36->8a5f799f-2c06-4059-a72b-5e123bdfc164@192.168.201.55@tcp:207/0 lens 496/2888 e 0 to 0 dl 1713428917 ref 1 fl Interpret:/202/0 rc 0/0 job:'rm.0' uid:0 gid:0 [ 3949.360536] Lustre: DEBUG MARKER: == recovery-small test 110f: remove remote directory: drop slave rep ========================================================== 04:27:44 (1713428864) [ 3949.681779] Lustre: *** cfs_fail_loc=1701, val=2147483648*** [ 3949.683943] Lustre: Skipped 3 previous similar messages [ 3965.681050] Lustre: 8072:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713428866/real 1713428866] req@ffff88009372d880 x1796656296318016/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 1792/4320 e 0 to 1 dl 1713428882 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 3965.688177] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 3968.767349] Lustre: DEBUG MARKER: == recovery-small test 110g: drop reply during migration ========================================================== 04:28:04 (1713428884) [ 4032.298165] Lustre: DEBUG MARKER: == recovery-small test 110h: drop update reply during cross-MDT file rename ========================================================== 04:29:07 (1713428947) [ 4048.748995] Lustre: 8072:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713428949/real 1713428949] req@ffff88009f213480 x1796656296345344/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 1816/4320 e 0 to 1 dl 1713428965 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 4048.755198] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4052.002111] Lustre: DEBUG MARKER: == recovery-small test 110i: drop update reply during cross-MDT dir rename ========================================================== 04:29:27 (1713428967) [ 4068.350017] Lustre: 8072:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713428968/real 1713428968] req@ffff88009f93e680 x1796656296353536/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 2680/4320 e 0 to 1 dl 1713428984 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 4068.357922] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4071.580567] Lustre: DEBUG MARKER: == recovery-small test 110j: drop update reply during cross-MDT ln 
========================================================== 04:29:47 (1713428987) [ 4087.930039] Lustre: 8072:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713428988/real 1713428988] req@ffff880093671c00 x1796656296360320/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 1488/4320 e 0 to 1 dl 1713429004 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 4087.939679] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4091.310677] Lustre: DEBUG MARKER: == recovery-small test 110k: FID_QUERY failed during recovery ========================================================== 04:30:06 (1713429006) [ 4091.894933] Lustre: Failing over lustre-MDT0001 [ 4091.978985] Lustre: server umount lustre-MDT0001 complete [ 4092.947041] LustreError: 137-5: lustre-MDT0001: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4092.949910] LustreError: Skipped 67 previous similar messages [ 4094.724169] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 4094.867825] Lustre: lustre-MDT0001: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 4094.881473] Lustre: *** cfs_fail_loc=1103, val=0*** [ 4094.887809] Lustre: lustre-MDT0001: in recovery but waiting for the first client to connect [ 4094.888042] Lustre: lustre-MDT0001: Aborting client recovery [ 4094.888046] LustreError: 31593:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0001: Aborting recovery [ 4094.897210] Lustre: Skipped 16 previous similar messages [ 4094.898929] Lustre: 31615:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4094.902874] Lustre: 31615:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 4096.892927] LustreError: 31614:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0000-osp-MDT0001: get update log duration 2, retries 0, failed: rc = -108 [ 4096.896846] Lustre: 31615:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0001: disconnect stale client lustre-MDT0000-mdtlov_UUID@ [ 4096.901699] Lustre: 31615:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 4096.905456] Lustre: lustre-MDT0001: disconnecting 1 stale clients [ 4096.908429] Lustre: 31615:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4096.911710] Lustre: lustre-MDT0001-osd: cancel update llog [0x240000400:0x1:0x0] [ 4096.917309] Lustre: lustre-MDT0000-osp-MDT0001: cancel update llog [0x200000401:0x1:0x0] [ 4096.936630] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000402:5394 to 0x2c0000402:5441) [ 4096.937497] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000bd0:1254 to 0x280000bd0:1537) [ 4097.643982] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 4098.965506] Lustre: Failing over lustre-MDT0001 [ 4099.036169] Lustre: server umount lustre-MDT0001 complete [ 4101.440766] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 4101.545754] Lustre: lustre-MDT0001: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 4101.558232] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000402:5394 to 0x2c0000402:5473) [ 4101.558234] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000bd0:1254 to 0x280000bd0:1569) [ 4102.344511] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 4106.549745] LustreError: 167-0: lustre-MDT0001-osp-MDT0000: This client was evicted by lustre-MDT0001; in progress operations using this service will fail. [ 4113.604607] Lustre: DEBUG MARKER: == recovery-small test 110m: update resent vs original RPC race ========================================================== 04:30:29 (1713429029) [ 4114.183554] LustreError: 8074:0:(out_handler.c:1172:out_handle()) cfs_race id 525 sleeping [ 4118.280188] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4118.285296] LustreError: 21550:0:(service.c:1855:ptlrpc_server_request_add()) cfs_fail_race id 525 waking [ 4118.290853] LustreError: 8074:0:(out_handler.c:1172:out_handle()) cfs_fail_race id 525 awake: rc=896 [ 4122.293265] LustreError: 21550:0:(out_handler.c:1172:out_handle()) cfs_fail_race id 525 waking [ 4125.392452] Lustre: DEBUG MARKER: == recovery-small test 111: mdd setup fail should not cause umount oops ========================================================== 04:30:40 (1713429040) [ 4126.128382] Lustre: Failing over lustre-MDT0000 [ 4126.197790] Lustre: server umount lustre-MDT0000 complete [ 4128.501030] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4128.536441] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4128.541346] LustreError: Skipped 2 previous similar messages [ 4128.624465] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4128.627346] Lustre: Skipped 9 previous similar messages [ 4128.636442] Lustre: *** cfs_fail_loc=151, val=0*** [ 4128.638670] LustreError: 3350:0:(mdd_device.c:687:mdd_changelog_init()) lustre-MDD0000: changelog setup during init failed: rc = -5 [ 4128.642310] LustreError: 3350:0:(mdd_device.c:1402:mdd_prepare()) lustre-MDD0000: failed to initialize changelog: rc = -5 [ 4128.646783] LustreError: 3350:0:(tgt_mount.c:2223:server_fill_super()) Unable to start targets: -5 [ 4128.651066] Lustre: Failing over lustre-MDT0000 [ 4128.653970] LustreError: 3378:0:(llog_osd.c:247:llog_osd_read_header()) lustre-MDT0001-osp-MDT0000: can't read llog [0x240000409:0x1:0x0] header: rc = -5 [ 4128.658814] LustreError: 3378:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 0, retries 0, failed: rc = -5 [ 4128.728059] Lustre: server umount lustre-MDT0000 complete [ 4128.730474] LustreError: 3350:0:(super25.c:189:lustre_fill_super()) llite: Unable to mount : rc = -5 [ 4130.761914] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4130.801613] LustreError: 3918:0:(ldlm_resource.c:1128:ldlm_resource_complain()) MGC192.168.201.155@tcp: namespace resource [0x65727473756c:0x0:0x0].0x0 (ffff88012bfc4a00) refcount nonzero (2) after lock cleanup; forcing cleanup. 
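Test 111 above uses fail_loc 0x151 to make changelog setup fail inside MDD initialization (rc = -5), then checks that the aborted mount unwinds cleanly instead of oopsing at umount. A sketch of the same check, with the device and mountpoint names being illustrative assumptions:

    lctl set_param fail_loc=0x151          # changelog init will fail with -5
    mount -t lustre /dev/mapper/mds1 /mnt/lustre-mds1 \
        || echo "mount failed cleanly: rc=$?"
    lctl set_param fail_loc=0              # clear the fault and retry
    mount -t lustre /dev/mapper/mds1 /mnt/lustre-mds1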
[ 4130.808868] LustreError: 3918:0:(ldlm_resource.c:1128:ldlm_resource_complain()) Skipped 36 previous similar messages [ 4130.813177] LustreError: 6923:0:(mgc_request.c:627:do_requeue()) failed processing log: -5 [ 4131.710592] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 4132.996700] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 4133.000208] Lustre: Skipped 10 previous similar messages [ 4135.348687] Lustre: DEBUG MARKER: == recovery-small test 112a: bulk resend while original request is in progress ========================================================== 04:30:50 (1713429050) [ 4135.913894] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. [ 4135.916808] Lustre: Skipped 9 previous similar messages [ 4135.931912] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:26085 to 0x280000401:26113) [ 4135.931983] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26116 to 0x2c0000401:26145) [ 4135.980353] LustreError: 9201:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 sleeping for 20000ms [ 4156.002012] LustreError: 9201:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 awake [ 4164.031727] Lustre: DEBUG MARKER: == recovery-small test 115a: read: late REQ MDunlink and no bulk ========================================================== 04:31:18 (1713429078) [ 4172.387066] Lustre: DEBUG MARKER: == recovery-small test 115b: write: late REQ MDunlink and no bulk ========================================================== 04:31:27 (1713429087) [ 4176.451375] Lustre: *** cfs_fail_loc=215, val=2*** [ 4176.453142] Lustre: Skipped 1 previous similar message [ 4180.907100] Lustre: DEBUG MARKER: == recovery-small test 115c: read: late Reply MDunlink and no bulk ========================================================== 04:31:36 (1713429096) [ 4185.749649] Lustre: DEBUG MARKER: == recovery-small test 115d: write: late Reply MDunlink and no bulk ========================================================== 04:31:41 (1713429101) [ 4190.450796] Lustre: DEBUG MARKER: == recovery-small test 115e: read: late Bulk MDunlink and no reply ========================================================== 04:31:45 (1713429105) [ 4195.333268] Lustre: DEBUG MARKER: == recovery-small test 115f: read: late REQ MDunlink and no reply ========================================================== 04:31:50 (1713429110) [ 4198.107038] LustreError: 32485:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009f310380 x1796656357834816/t0(0) o400->b475be96-a96e-4ca6-8b95-354d5b4e27f4@192.168.201.55@tcp:415/0 lens 224/224 e 0 to 0 dl 1713429125 ref 1 fl Interpret:H/200/0 rc 0/0 job:'kworker.0' uid:0 gid:0 [ 4198.122040] LustreError: 32485:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 5 previous similar messages [ 4203.171103] Lustre: DEBUG MARKER: == recovery-small test 115g: read: late REQ MDunlink and Reply MDunlink ========================================================== 04:31:58 (1713429118) [ 4267.871340] Lustre: DEBUG MARKER: == recovery-small test 120: flock race: completion vs. evict ========================================================== 04:33:03 (1713429183)
[ 4270.349699] Lustre: 10963:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting b475be96-a96e-4ca6-8b95-354d5b4e27f4 at administrative request [ 4284.586238] Lustre: 11103:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting b475be96-a96e-4ca6-8b95-354d5b4e27f4 at administrative request [ 4284.591093] Lustre: 11103:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 1 previous similar message [ 4305.684391] Lustre: 11313:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting b475be96-a96e-4ca6-8b95-354d5b4e27f4 at administrative request [ 4305.689230] Lustre: 11313:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 2 previous similar messages [ 4335.403750] Lustre: DEBUG MARKER: == recovery-small test 113: ldlm enqueue dropped reply should not cause deadlocks ========================================================== 04:34:10 (1713429250) [ 4407.413376] Lustre: DEBUG MARKER: == recovery-small test 130a: enqueue resend on not existing file ========================================================== 04:35:22 (1713429322) [ 4408.390827] LustreError: 9217:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 sleeping for 10000ms [ 4418.425021] LustreError: 9217:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4468.413942] Lustre: lustre-MDT0000: Client b475be96-a96e-4ca6-8b95-354d5b4e27f4 (at 192.168.201.55@tcp) reconnecting [ 4468.418558] Lustre: Skipped 5 previous similar messages [ 4474.398494] Lustre: DEBUG MARKER: == recovery-small test 130b: enqueue resend on a stale inode ========================================================== 04:36:29 (1713429389) [ 4475.305126] LustreError: 11066:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 sleeping for 10000ms [ 4485.337095] LustreError: 11066:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4485.343970] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 4485.347484] Lustre: Skipped 7 previous similar messages [ 4535.311466] Lustre: *** cfs_fail_loc=217, val=0*** [ 4540.859284] Lustre: DEBUG MARKER: == recovery-small test 130c: layout intent resend on a stale inode ========================================================== 04:37:36 (1713429456) [ 4553.536230] LustreError: 8781:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4569.351163] Lustre: DEBUG MARKER: == recovery-small test 132: long punch =================== 04:38:04 (1713429484) [ 4642.034051] Lustre: ll_ost_io00_000: service thread pid 9200 was inactive for 72.048 seconds. The thread might be hung, or it might only be slow and will resume later.
Dumping the stack trace for debugging purposes: [ 4642.045532] Pid: 9200, comm: ll_ost_io00_000 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 4642.049094] Call Trace: [ 4642.050307] [<0>] __cfs_fail_timeout_set+0xe9/0x210 [libcfs] [ 4642.053095] [<0>] ofd_punch_hdl+0xa8c/0xb40 [ofd] [ 4642.054943] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 4642.057242] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 4642.060066] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 4642.062101] [<0>] kthread+0xe4/0xf0 [ 4642.064621] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 4642.069299] [<0>] 0xfffffffffffffffe [ 4690.049009] LustreError: 9200:0:(ofd_dev.c:2089:ofd_punch_hdl()) cfs_fail_timeout id 236 awake [ 4694.614201] Lustre: DEBUG MARKER: == recovery-small test 131: IO vs evict results in IO under stale lock ========================================================== 04:40:09 (1713429609) [ 4696.580104] Lustre: 16538:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting b475be96-a96e-4ca6-8b95-354d5b4e27f4 at administrative request [ 4696.587672] Lustre: 16538:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 3 previous similar messages [ 4696.594163] LustreError: 6914:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) cfs_fail_timeout id 31e sleeping for 4000ms [ 4696.597550] LustreError: 6914:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) Skipped 2 previous similar messages [ 4699.401940] LustreError: 6914:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) cfs_fail_timeout interrupted [ 4703.029339] Lustre: DEBUG MARKER: == recovery-small test 133: don't fail on flock resend === 04:40:18 (1713429618) [ 4769.462220] Lustre: DEBUG MARKER: == recovery-small test 134: race between failover and search for reply data free slot ========================================================== 04:41:24 (1713429684) [ 4770.124123] Lustre: DEBUG MARKER: SKIP: recovery-small test_134 Need 2+ clients, have 1 [ 4773.093524] Lustre: DEBUG MARKER: == recovery-small test 135: DOM: open/create resend to return size ========================================================== 04:41:28 (1713429688) [ 4773.706479] LustreError: 6924:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009f311f80 x1796656357918912/t12884901906(0) o101->b475be96-a96e-4ca6-8b95-354d5b4e27f4@192.168.201.55@tcp:242/0 lens 648/720 e 0 to 0 dl 1713429707 ref 1 fl Interpret:/200/0 rc 301/0 job:'openfile.0' uid:0 gid:0 [ 4773.715407] LustreError: 6924:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 4 previous similar messages [ 4795.708698] Lustre: 9217:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880096feb800 x1796656357918912/t12884901906(0) o101->b475be96-a96e-4ca6-8b95-354d5b4e27f4@192.168.201.55@tcp:264/0 lens 648/3488 e 0 to 0 dl 1713429729 ref 1 fl Interpret:/202/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 4795.716979] Lustre: 9217:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 4798.238282] Lustre: DEBUG MARKER: SKIP: recovery-small test_136 skipping excluded test 136 [ 4799.954076] Lustre: DEBUG MARKER: == recovery-small test 137: late resend must be skipped if already applied ========================================================== 04:41:55 (1713429715) [ 4801.357709] LustreError: 6924:0:(mdt_reint.c:855:mdt_reint_setattr()) cfs_race id 525 sleeping [ 4806.361094] LustreError: 6924:0:(mdt_reint.c:855:mdt_reint_setattr()) cfs_fail_race id 525 awake: rc=0 [ 4806.382002] LustreError: 6924:0:(mdt_reint.c:855:mdt_reint_setattr()) cfs_fail_race id
525 waking [ 4824.710704] Lustre: DEBUG MARKER: == recovery-small test 138: Umount MDT during recovery === 04:42:20 (1713429740) [ 4825.825907] Lustre: Failing over lustre-MDT0000 [ 4826.083146] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4826.085171] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4826.091545] Lustre: Skipped 14 previous similar messages [ 4831.091272] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4831.096486] Lustre: Skipped 6 previous similar messages [ 4835.846297] LustreError: 20208:0:(lod_dev.c:1129:lod_process_config()) cfs_fail_timeout id 724 awake [ 4835.943434] Lustre: server umount lustre-MDT0000 complete [ 4836.099098] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4836.106136] LustreError: Skipped 11 previous similar messages [ 4848.686012] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4848.757660] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4848.763703] LustreError: Skipped 1 previous similar message [ 4848.895038] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4848.899411] Lustre: Skipped 1 previous similar message [ 4848.923680] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 4848.926620] Lustre: Skipped 3 previous similar messages [ 4849.961628] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 4853.893405] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.201.155@tcp (at 0@lo) [ 4853.898577] Lustre: Skipped 12 previous similar messages [ 4906.366720] Lustre: Failing over lustre-MDT0000 [ 4908.928025] Lustre: 20784:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4908.931497] Lustre: 20784:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 1 previous similar message [ 4908.997159] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4909.001864] Lustre: Skipped 3 previous similar messages [ 4909.054013] LustreError: 20783:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 60, retries 11, failed: rc = -5 [ 4909.076445] Lustre: 20784:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 4914.004478] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4914.010820] Lustre: Skipped 3 previous similar messages [ 4916.471211] Lustre: server umount lustre-MDT0000 complete [ 4919.943990] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4921.270368] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 4922.139649] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 4922.142814] Lustre: lustre-MDT0000: Denying connection for new client e6fd5171-41ae-4b04-b9f2-e9b4e9bbc5d7 (at 192.168.201.55@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 4925.165500] Lustre: lustre-MDT0000: Recovery over after 0:03, of 1 clients 1 recovered and 0 were evicted. [ 4925.185457] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:26128 to 0x280000401:26145) [ 4925.186386] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26158 to 0x2c0000401:26177) [ 4931.607264] Lustre: DEBUG MARKER: == recovery-small test 139: corrupted catid won't cause crash ========================================================== 04:44:06 (1713429846) [ 4932.283856] Lustre: Failing over lustre-MDT0000 [ 4932.381075] Lustre: server umount lustre-MDT0000 complete [ 4935.475122] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4935.611988] Lustre: *** cfs_fail_loc=2106, val=104*** [ 4935.613873] LustreError: 23867:0:(osp_sync.c:1415:osp_sync_llog_init()) lustre-OST0000-osc-MDT0000: the catid [0x0:0x68:0x0] for init llog 0 is bad [ 4936.605656] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 4940.684137] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26158 to 0x2c0000401:26209) [ 4940.684659] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:26128 to 0x280000401:26177) [ 4941.114517] Lustre: DEBUG MARKER: == recovery-small test 140a: local mount is flagged properly ========================================================== 04:44:16 (1713429856) [ 4942.303019] Lustre: lustre-MDT0000: local client 034394e5-e73c-45ff-8cfc-cb9f5c1925b8 w/o recovery [ 4942.313449] Lustre: Mounted lustre-client [ 4943.041038] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 4944.401624] Lustre: Unmounted lustre-client [ 4945.692496] Lustre: Mounted lustre-client [ 4946.420949] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 4947.849263] Lustre: Unmounted lustre-client [ 4952.814765] Lustre: DEBUG MARKER: == recovery-small test 140b: local mount is excluded from recovery ========================================================== 04:44:28 (1713429868) [ 4954.144827] Lustre: lustre-MDT0000: local client d2d48b87-dcc0-44d6-9e38-731615b27ef6 w/o recovery [ 4954.147632] Lustre: Skipped 1 previous similar message [ 4954.155144] Lustre: Mounted lustre-client [ 4954.943233] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 4957.723077] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4958.794120] Lustre: Unmounted lustre-client [ 4959.777627] Lustre: Failing over lustre-MDT0000 [ 4959.850021] Lustre: server umount lustre-MDT0000 complete [ 4960.724613] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 4960.727433] LustreError: Skipped 3 previous similar messages [ 4974.018689] LDISKFS-fs (dm-0): recovery complete [ 
4974.021335] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4975.304823] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 4979.235162] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26158 to 0x2c0000401:26241) [ 4979.236321] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:26128 to 0x280000401:26209) [ 4980.106083] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4980.645621] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4986.906052] Lustre: DEBUG MARKER: == recovery-small test 141: do not lose locks on MGS restart ========================================================== 04:45:02 (1713429902) [ 4987.703309] Lustre: DEBUG MARKER: SKIP: recovery-small test_141 cannot run in local mode or from build tree [ 4990.377278] Lustre: DEBUG MARKER: == recovery-small test 142: orphan name stub can be cleaned up in startup ========================================================== 04:45:05 (1713429905) [ 4990.746309] Lustre: *** cfs_fail_loc=165, val=0*** [ 4991.432846] Lustre: Failing over lustre-MDT0000 [ 4991.506867] Lustre: server umount lustre-MDT0000 complete [ 4994.370174] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4995.505812] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all [ 4999.562833] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26158 to 0x2c0000401:26273) [ 4999.562839] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:26211 to 0x280000401:26241) [ 4999.563837] LustreError: 300:0:(osd_handler.c:297:osd_idc_find_or_init()) can't lookup: rc = -2 [ 5000.792850] Lustre: DEBUG MARKER: == recovery-small test 143: orphan cleanup thread shouldn't be blocked even delete failed ========================================================== 04:45:16 (1713429916) [ 5001.467932] Lustre: Failing over lustre-MDT0000 [ 5001.559610] Lustre: server umount lustre-MDT0000 complete [ 5003.774164] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) [ 5006.786749] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5007.969548] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5009.363060] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 5011.990555] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:26211 to 0x280000401:26273)
[ 5011.990950] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:26158 to 0x2c0000401:26305)
[ 5018.372656] Lustre: DEBUG MARKER: == recovery-small test 144a: MDT failover should stop precreation threads ========================================================== 04:45:33 (1713429933)
[ 5020.243964] Lustre: Failing over lustre-OST0000
[ 5020.244202] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_create to node 0@lo failed: rc = -19
[ 5020.322222] Lustre: server umount lustre-OST0000 complete
[ 5032.636724] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5032.640579] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5034.053042] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5036.935934] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 5037.477454] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 5099.364765] Lustre: Failing over lustre-MDT0000
[ 5099.643653] Lustre: server umount lustre-MDT0000 complete
[ 5102.114519] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 5111.752278] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5112.681997] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5116.888348] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:51276 to 0x2c0000401:51297)
[ 5116.888354] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:51304 to 0x280000401:51329)
[ 5117.523780] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 5117.917601] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 5119.690501] Lustre: Failing over lustre-MDT0000
[ 5119.762428] Lustre: server umount lustre-MDT0000 complete
[ 5131.872994] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5132.802940] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5137.030285] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:51276 to 0x2c0000401:51329)
[ 5137.030666] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:51304 to 0x280000401:51361)
[ 5137.636065] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 5138.040146] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 5156.311885] Lustre: DEBUG MARKER: == recovery-small test 144b: orphan cleanup shouldn't be blocked for no objects+failover situation ========================================================== 04:47:51 (1713430071)
[ 5158.432129] Lustre: Failing over lustre-OST0000
[ 5158.434659] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_destroy to node 0@lo failed: rc = -19
[ 5158.437992] LustreError: Skipped 1 previous similar message
[ 5158.570341] Lustre: lustre-OST0000: Not available for connect from 192.168.201.55@tcp (stopping)
[ 5158.924316] Lustre: server umount lustre-OST0000 complete
[ 5171.110665] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5171.114105] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5172.363873] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5175.539400] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 5176.380494] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 5179.541018] LustreError: 8781:0:(lod_qos.c:1401:lod_ost_alloc_specific()) can't lstripe objid [0x20000d6f1:0x53:0x0]: have 382 want 1000
[ 5180.059994] LustreError: 6925:0:(lod_qos.c:1401:lod_ost_alloc_specific()) can't lstripe objid [0x20000d6f1:0x57:0x0]: have 382 want 1000
[ 5180.063527] LustreError: 6925:0:(lod_qos.c:1401:lod_ost_alloc_specific()) Skipped 3 previous similar messages
[ 5181.087194] LustreError: 8781:0:(lod_qos.c:1401:lod_ost_alloc_specific()) can't lstripe objid [0x20000d6f1:0x60:0x0]: have 382 want 1000
[ 5181.091213] LustreError: 8781:0:(lod_qos.c:1401:lod_ost_alloc_specific()) Skipped 8 previous similar messages
[ 5248.806545] Lustre: DEBUG MARKER: == recovery-small test 144c: reconnection during orphan cleanup shouldn't lose LAST_ID synchronization ========================================================== 04:49:24 (1713430164)
[ 5249.967397] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x280000401 to 0x2800013a0
[ 5267.045335] Lustre: Failing over lustre-MDT0000
[ 5267.282509] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 5267.287114] LustreError: Skipped 2 previous similar messages
[ 5267.292049] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 5267.631979] Lustre: server umount lustre-MDT0000 complete
[ 5270.016703] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5270.996879] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5271.983167] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 5275.187915] LustreError: 17874:0:(ofd_dev.c:1523:ofd_create_hdl()) cfs_fail_timeout id 254 sleeping for 5000ms
[ 5275.191556] LustreError: 17874:0:(ofd_dev.c:1523:ofd_create_hdl()) Skipped 15 previous similar messages
[ 5278.242167] Lustre: lustre-OST0000: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting
[ 5278.244224] Lustre: Skipped 4 previous similar messages
[ 5278.487967] LustreError: 3292:0:(ofd_dev.c:1523:ofd_create_hdl()) cfs_fail_timeout interrupted
[ 5278.490416] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:65330 to 0x2c0000401:65536)
[ 5278.493313] LustreError: 17874:0:(ofd_dev.c:1528:ofd_create_hdl()) lustre-OST0000: dropping old orphan cleanup request
[ 5278.496704] LustreError: 11145:0:(osp_precreate.c:992:osp_precreate_cleanup_orphans()) lustre-OST0000-osc-MDT0000: cannot cleanup orphans: rc = -116
[ 5278.551114] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x2c0000401 to 0x2c0000403
[ 5279.500435] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8827 to 0x2800013a0:8897)
[ 5293.772228] Lustre: DEBUG MARKER: == recovery-small test 145: connect mdtlovs and process update logs after recovery expire ========================================================== 04:50:09 (1713430209)
[ 5294.125355] Lustre: DEBUG MARKER: SKIP: recovery-small test_145 needs >= 3 MDTs
[ 5296.023504] Lustre: DEBUG MARKER: == recovery-small test 146: test eviction is counted properly ========================================================== 04:50:11 (1713430211)
[ 5296.486545] Lustre: 13219:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting e6fd5171-41ae-4b04-b9f2-e9b4e9bbc5d7 at adminstrative request
[ 5299.707550] Lustre: DEBUG MARKER: == recovery-small test 147: Check client reconnect ======= 04:50:15 (1713430215)
[ 5300.231521] Lustre: *** cfs_fail_loc=225, val=0***
[ 5390.245007] Lustre: *** cfs_fail_loc=225, val=0***
[ 5390.246724] Lustre: Skipped 3 previous similar messages
[ 5453.298355] Lustre: lustre-OST0000: haven't heard from client e6fd5171-41ae-4b04-b9f2-e9b4e9bbc5d7 (at 192.168.201.55@tcp) in 153 seconds. I think it's dead, and I am evicting it. exp ffff8800851e1000, cur 1713430369 expire 1713430339 last 1713430216
[ 5453.306257] Lustre: Skipped 1 previous similar message
[ 5467.118102] Lustre: DEBUG MARKER: == recovery-small test 148: data corruption through resend ========================================================== 04:53:02 (1713430382)
[ 5480.276595] Lustre: MGS: haven't heard from client 8542f199-e7b3-4207-a0ab-1cea53700222 (at 0@lo) in 35 seconds. I think it's dead, and I am evicting it. exp ffff88012fb6a000, cur 1713430396 expire 1713430366 last 1713430361
[ 5494.659057] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 5494.659113] LustreError: 166-1: MGC192.168.201.155@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 5494.659115] LustreError: Skipped 8 previous similar messages
[ 5494.660884] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.201.155@tcp (at 0@lo)
[ 5494.660885] Lustre: Skipped 39 previous similar messages
[ 5494.661009] Lustre: Evicted from MGS (at 192.168.201.155@tcp) after server handle changed from 0xf24e407168c43f14 to 0xf24e407168c44bee
[ 5494.675923] Lustre: Skipped 39 previous similar messages
[ 5495.819994] LustreError: 9200:0:(tgt_handler.c:2880:tgt_brw_write()) cfs_fail_timeout id 227 awake
[ 5495.822335] LustreError: 9200:0:(tgt_handler.c:2880:tgt_brw_write()) Skipped 13 previous similar messages
[ 5501.576387] Lustre: DEBUG MARKER: == recovery-small test 149: skip orphan removal at umount ========================================================== 04:53:37 (1713430417)
[ 5504.675600] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping)
[ 5504.677149] Lustre: Skipped 2 previous similar messages
[ 5508.492731] Lustre: server umount lustre-MDT0001 complete
[ 5509.682587] LustreError: 137-5: lustre-MDT0001: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 5509.688143] LustreError: Skipped 72 previous similar messages
[ 5513.362552] Lustre: server umount lustre-MDT0000 complete
[ 5515.450700] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5515.592942] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 5515.595799] Lustre: Skipped 10 previous similar messages
[ 5515.611239] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:33)
[ 5515.611247] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8901 to 0x2800013a0:8929)
[ 5516.389912] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5518.569265] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5518.672956] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000bd0:1254 to 0x280000bd0:1601)
[ 5518.673119] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000402:5394 to 0x2c0000402:5505)
[ 5519.343571] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5538.555428] Lustre: DEBUG MARKER: == recovery-small test 150: statfs when MDT0 offline with lazystatfs option ========================================================== 04:54:14 (1713430454)
[ 5539.086147] Lustre: Failing over lustre-MDT0000
[ 5539.154298] Lustre: server umount lustre-MDT0000 complete
[ 5542.678382] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5542.815749] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 5542.818178] Lustre: Skipped 12 previous similar messages
[ 5543.651078] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5544.729224] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 5545.365174] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 5545.369573] Lustre: Skipped 9 previous similar messages
[ 5547.823971] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted.
[ 5547.826046] Lustre: Skipped 9 previous similar messages
[ 5547.839469] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:65)
[ 5547.839499] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8901 to 0x2800013a0:8961)
[ 5553.126819] Lustre: DEBUG MARKER: == recovery-small test 152: QoS object allocation could be awakened in case of OST failover ========================================================== 04:54:28 (1713430468)
[ 5554.077212] Lustre: DEBUG MARKER: SKIP: recovery-small test_152 MDS Linux kernel does not support killable semaphore
[ 5556.168569] Lustre: DEBUG MARKER: == recovery-small test 153: evict vs reconnect race ====== 04:54:31 (1713430471)
[ 5557.826568] Lustre: *** cfs_fail_loc=174, val=0***
[ 5557.828063] Lustre: Skipped 12 previous similar messages
[ 5573.898110] Lustre: 3486:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713430474/real 1713430474] req@ffff8800939ef800 x1796656302047552/t0(0) o400->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 224/224 e 0 to 1 dl 1713430490 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 5574.732008] Lustre: lustre-MDT0000: Received new LWP connection from 0@lo, keep former export from same NID
[ 5574.732070] Lustre: *** cfs_fail_loc=174, val=0***
[ 5574.732072] Lustre: Skipped 2 previous similar messages
[ 5574.739107] Lustre: Skipped 1 previous similar message
[ 5578.906039] Lustre: 3485:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713430479/real 1713430479] req@ffff880095856300 x1796656302049088/t0(0) o400->lustre-MDT0000-lwp-MDT0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713430495 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 5578.917262] Lustre: 3485:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
[ 5579.904963] Lustre: Failing over lustre-MDT0000
[ 5579.986623] Lustre: server umount lustre-MDT0000 complete
[ 5582.539143] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5583.478287] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5584.658458] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 5587.688444] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8901 to 0x2800013a0:8993)
[ 5587.688469] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:97)
[ 5588.673004] Lustre: 3483:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713430489/real 1713430489] req@ffff88008f353100 x1796656302051200/t0(0) o400->lustre-MDT0000-lwp-MDT0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713430505 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 5588.679711] Lustre: 3483:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 4 previous similar messages
[ 5592.840643] Lustre: DEBUG MARKER: == recovery-small test 154a: corruption update llog can be skipped ========================================================== 04:55:08 (1713430508)
[ 5593.331227] Lustre: Failing over lustre-MDT0001
[ 5593.398178] Lustre: server umount lustre-MDT0001 complete
[ 5595.090543] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)
[ 5597.553635] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5598.427842] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5599.433953] Lustre: Failing over lustre-MDT0000
[ 5599.495461] Lustre: server umount lustre-MDT0000 complete
[ 5601.700851] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5601.780858] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_connect to node 0@lo failed: rc = -114
[ 5602.575765] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5603.623289] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 20
[ 5606.810968] LustreError: 26154:0:(llog_osd.c:268:llog_osd_read_header()) lustre-MDT0001-osp-MDT0000: bad log [0x240000409:0x1:0x0] header magic: 0xd32eb301 (expected 0x10645539)
[ 5606.814214] Lustre: 26154:0:(lod_sub_object.c:981:lod_sub_prep_llog()) lustre-MDT0000-mdtlov: renew invalid update log [0x240000409:0x1:0x0]: rc = -22
[ 5606.822186] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000402:5394 to 0x2c0000402:5537)
[ 5606.822703] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000bd0:1254 to 0x280000bd0:1633)
[ 5606.843567] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8901 to 0x2800013a0:9025)
[ 5606.844048] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:129)
[ 5611.945024] Lustre: DEBUG MARKER: == recovery-small test 154b: restore update llog after failed recovery ========================================================== 04:55:27 (1713430527)
[ 5612.409879] Lustre: Failing over lustre-MDT0000
[ 5612.468467] Lustre: server umount lustre-MDT0000 complete
[ 5614.838659] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5614.956598] Lustre: lustre-MDT0000: Aborting client recovery
[ 5614.959120] LustreError: 28212:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery
[ 5614.962684] Lustre: 28242:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 5614.965061] Lustre: 28242:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages
[ 5619.948994] LustreError: 28241:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 5, retries 0, failed: rc = -5
[ 5619.952543] Lustre: 28242:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client e6fd5171-41ae-4b04-b9f2-e9b4e9bbc5d7@
[ 5619.955793] Lustre: lustre-MDT0000: disconnecting 2 stale clients
[ 5619.958009] Lustre: 28242:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 5619.961614] Lustre: lustre-MDT0000-osd: cancel update llog [0x200009870:0x1:0x0]
[ 5619.986351] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8901 to 0x2800013a0:9057)
[ 5619.988061] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:161)
[ 5620.765785] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5621.759656] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 30
[ 5627.627129] Lustre: DEBUG MARKER: == recovery-small test 155: failover after client remount ========================================================== 04:55:43 (1713430543)
[ 5630.077307] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 5630.605784] Lustre: Failing over lustre-MDT0000
[ 5630.670744] Lustre: server umount lustre-MDT0000 complete
[ 5643.768213] LDISKFS-fs (dm-0): recovery complete
[ 5643.770188] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5644.737683] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5648.356957] Lustre: lustre-MDT0000: Denying connection for new client 36ab28b5-1505-4b60-b4c1-ab4776a677a6 (at 192.168.201.55@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59
[ 5648.918576] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:193)
[ 5648.922459] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:9059 to 0x2800013a0:9089)
[ 5653.238914] Lustre: DEBUG MARKER: == recovery-small test 156: tot_granted miscount after client eviction ========================================================== 04:56:08 (1713430568)
[ 5653.709513] Lustre: Setting parameter general.timeout in log params
[ 5655.963462] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000
[ 5656.741833] Lustre: Failing over lustre-OST0000
[ 5656.921860] Lustre: server umount lustre-OST0000 complete
[ 5669.982728] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5670.126653] LDISKFS-fs (dm-2): recovery complete
[ 5670.128226] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5671.442276] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing set_default_debug -1 all
[ 5709.531021] Lustre: lustre-OST0000: recovery is timed out, evict stale exports
[ 5709.533541] Lustre: 1326:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-OST0000: disconnect stale client 36ab28b5-1505-4b60-b4c1-ab4776a677a6@192.168.201.55@tcp
[ 5709.538037] Lustre: 1326:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message
[ 5709.540818] Lustre: lustre-OST0000: disconnecting 1 stale clients
[ 5709.543029] Lustre: 1326:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-OST0000: extended recovery timer reached hard limit: 45, extend: 1
[ 5709.547797] Lustre: 1326:0:(ldlm_lib.c:2874:target_recovery_thread()) too long recovery - read logs
[ 5709.550224] LustreError: dumping log to /tmp/lustre-log.1713430625.1326
[ 5715.926991] Lustre: DEBUG MARKER: oleg155-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 5716.301110] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 5719.541968] Lustre: Modifying parameter general.timeout in log params
[ 5721.535570] Lustre: DEBUG MARKER: == recovery-small test 157: eviction during mmaped i/o === 04:57:17 (1713430637)
[ 5722.840331] Lustre: 2907:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 36ab28b5-1505-4b60-b4c1-ab4776a677a6 at adminstrative request
[ 5722.846378] Lustre: 2907:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 1 previous similar message
[ 5726.749869] Lustre: DEBUG MARKER: == recovery-small test complete, duration 5630 sec ======= 04:57:22 (1713430642)
[ 5781.379236] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
[ 5781.381620] Lustre: Skipped 7 previous similar messages
[ 5783.152613] Lustre: server umount lustre-MDT0000 complete
[ 5785.384615] LustreError: 9211:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713430701 with bad export cookie 17459963661195178200
[ 5785.389739] LustreError: 9211:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 3 previous similar messages
[ 5785.500300] Lustre: server umount lustre-MDT0001 complete
[ 5797.763405] Lustre: server umount lustre-OST0000 complete
[ 5810.008395] Lustre: server umount lustre-OST0001 complete
[ 5811.640155] device-mapper: core: cleaned up
[ 5813.977728] Lustre: DEBUG MARKER: oleg155-server.virtnet: executing unload_modules_local
[ 5814.454835] Key type lgssc unregistered
[ 5814.519277] LNet: 6169:0:(lib-ptl.c:966:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 5814.522264] LNet: Removed LNI 192.168.201.155@tcp