[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.10.0-7.9-debug (green@centos7-base) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) ) #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 0.000000] Command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffcdfff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000bffce000-0x00000000bfffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000013edfffff] usable
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 2.8 present.
[ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-1.fc38 04/01/2014
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] e820: last_pfn = 0x13ee00 max_arch_pfn = 0x400000000
[ 0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC
[ 0.000000] e820: last_pfn = 0xbffce max_arch_pfn = 0x400000000
[ 0.000000] found SMP MP-table at [mem 0x000f5b30-0x000f5b3f] mapped at [ffffffffff200b30]
[ 0.000000] Using GB pages for direct mapping
[ 0.000000] RAMDISK: [mem 0xbc2e2000-0xbffbffff]
[ 0.000000] Early table checksum verification disabled
[ 0.000000] ACPI: RSDP 00000000000f5950 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 00000000bffe1bb7 00034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 00000000bffe1a53 00074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 00000000bffe0040 01A13 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACS 00000000bffe0000 00040
[ 0.000000] ACPI: APIC 00000000bffe1ac7 00090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: HPET 00000000bffe1b57 00038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: WAET 00000000bffe1b8f 00028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000013edfffff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x13e5e3000-0x13e609fff]
[ 0.000000] Reserving 128MB of memory at 768MB for crashkernel (System RAM: 4077MB)
[ 0.000000] kvm-clock: cpu 0, msr 1:3e592001, primary cpu clock
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: using sched offset of 313772450 cycles
[ 0.000000] Zone ranges:
[ 0.000000] DMA [mem 0x00001000-0x00ffffff]
[ 0.000000] DMA32 [mem 0x01000000-0xffffffff]
[ 0.000000] Normal [mem 0x100000000-0x13edfffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x00001000-0x0009efff]
[ 0.000000] node 0: [mem 0x00100000-0xbffcdfff]
[ 0.000000] node 0: [mem 0x100000000-0x13edfffff]
[ 0.000000] Initmem setup node 0 [mem 0x00001000-0x13edfffff]
[ 0.000000] ACPI: PM-Timer IO Port: 0x608
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xbffce000-0xbfffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[ 0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
[ 0.000000] percpu: Embedded 38 pages/cpu s115176 r8192 d32280 u524288
[ 0.000000] KVM setup async PF for cpu 0
[ 0.000000] kvm-stealtime: cpu 0, msr 13e2135c0
[ 0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1027487
[ 0.000000] Policy zone: Normal
[ 0.000000] Kernel command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] audit: disabled (until reboot)
[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form
[ 0.000000] Memory: 3820268k/5224448k available (8172k kernel code, 1049168k absent, 355012k reserved, 5773k data, 2532k init)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=4.
[ 0.000000] Offload RCU callbacks from all CPUs
[ 0.000000] Offload RCU callbacks from CPUs: 0-3.
[ 0.000000] NR_IRQS:327936 nr_irqs:456 0
[ 0.000000] Console: colour *CGA 80x25
[ 0.000000] console [ttyS1] enabled
[ 0.000000] allocated 25165824 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] kmemleak: Kernel memory leak detector disabled
[ 0.000000] tsc: Detected 2399.998 MHz processor
[ 0.423729] Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
[ 0.427650] pid_max: default: 32768 minimum: 301
[ 0.429021] Security Framework initialized
[ 0.431149] SELinux: Initializing.
[ 0.434299] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[ 0.440083] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[ 0.443362] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.445641] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.448759] Initializing cgroup subsys memory
[ 0.450126] Initializing cgroup subsys devices
[ 0.451977] Initializing cgroup subsys freezer
[ 0.453285] Initializing cgroup subsys net_cls
[ 0.455228] Initializing cgroup subsys blkio
[ 0.456446] Initializing cgroup subsys perf_event
[ 0.458416] Initializing cgroup subsys hugetlb
[ 0.460701] Initializing cgroup subsys pids
[ 0.464233] Initializing cgroup subsys net_prio
[ 0.466264] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[ 0.470721] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.472817] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.474714] tlb_flushall_shift: 6
[ 0.476046] FEATURE SPEC_CTRL Present
[ 0.477607] FEATURE IBPB_SUPPORT Present
[ 0.479026] Spectre V2 : Enabling Indirect Branch Prediction Barrier
[ 0.481065] Spectre V2 : Vulnerable
[ 0.482149] Speculative Store Bypass: Vulnerable
[ 0.484704] debug: unmapping init [mem 0xffffffff82019000-0xffffffff8201ffff]
[ 0.492470] ACPI: Core revision 20130517
[ 0.495541] ACPI: All ACPI Tables successfully acquired
[ 0.497371] ftrace: allocating 30294 entries in 119 pages
[ 0.554725] Enabling x2apic
[ 0.555477] Enabled x2apic
[ 0.556577] Switched APIC routing to physical x2apic.
[ 0.560008] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.562003] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (fam: 06, model: 3e, stepping: 04)
[ 0.565400] Performance Events: IvyBridge events, full-width counters, Intel PMU driver.
[ 0.568111] ... version: 2
[ 0.569375] ... bit width: 48
[ 0.570621] ... generic registers: 4
[ 0.571844] ... value mask: 0000ffffffffffff
[ 0.573483] ... max period: 00007fffffffffff
[ 0.575025] ... fixed-purpose events: 3
[ 0.576268] ... event mask: 000000070000000f
[ 0.577886] KVM setup paravirtual spinlock
[ 0.581055] smpboot: Booting Node 0, Processors #1
[ 0.582423] kvm-clock: cpu 1, msr 1:3e592041, secondary cpu clock
[ 0.585393] KVM setup async PF for cpu 1
[ 0.586359] kvm-stealtime: cpu 1, msr 13e2935c0 #2
[ 0.589449] kvm-clock: cpu 2, msr 1:3e592081, secondary cpu clock
[ 0.592704] KVM setup async PF for cpu 2
[ 0.593480] kvm-clock: cpu 3, msr 1:3e5920c1, secondary cpu clock #3 OK
[ 0.597337] kvm-stealtime: cpu 2, msr 13e3135c0
[ 0.603135] Brought up 4 CPUs
[ 0.603216] KVM setup async PF for cpu 3
[ 0.603224] kvm-stealtime: cpu 3, msr 13e3935c0
[ 0.607205] smpboot: Max logical packages: 1
[ 0.608571] smpboot: Total of 4 processors activated (19199.98 BogoMIPS)
[ 0.611675] devtmpfs: initialized
[ 0.613057] x86/mm: Memory block size: 128MB
[ 0.617894] EVM: security.selinux
[ 0.619028] EVM: security.ima
[ 0.619785] EVM: security.capability
[ 0.623138] atomic64 test passed for x86-64 platform with CX8 and with SSE
[ 0.625203] NET: Registered protocol family 16
[ 0.626443] cpuidle: using governor haltpoll
[ 0.628163] ACPI: bus type PCI registered
[ 0.629355] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.631143] PCI: Using configuration type 1 for base access
[ 0.632437] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on
[ 0.643101] ACPI: Added _OSI(Module Device)
[ 0.644587] ACPI: Added _OSI(Processor Device)
[ 0.646313] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.647861] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.649836] ACPI: Added _OSI(Linux-Dell-Video)
[ 0.656042] ACPI: Interpreter enabled
[ 0.657605] ACPI: (supports S0 S3 S4 S5)
[ 0.659414] ACPI: Using IOAPIC for interrupt routing
[ 0.661496] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.665084] ACPI: Enabled 2 GPEs in block 00 to 0F
[ 0.673400] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.675579] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[ 0.678265] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[ 0.680812] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ 0.685940] acpiphp: Slot [2] registered
[ 0.687329] acpiphp: Slot [3] registered
[ 0.688631] acpiphp: Slot [4] registered
[ 0.689749] acpiphp: Slot [5] registered
[ 0.691075] acpiphp: Slot [6] registered
[ 0.692425] acpiphp: Slot [7] registered
[ 0.693841] acpiphp: Slot [8] registered
[ 0.695213] acpiphp: Slot [9] registered
[ 0.697134] acpiphp: Slot [10] registered
[ 0.698412] acpiphp: Slot [11] registered
[ 0.699493] acpiphp: Slot [12] registered
[ 0.700541] acpiphp: Slot [13] registered
[ 0.701608] acpiphp: Slot [14] registered
[ 0.702620] acpiphp: Slot [15] registered
[ 0.703886] acpiphp: Slot [16] registered
[ 0.705305] acpiphp: Slot [17] registered
[ 0.706729] acpiphp: Slot [18] registered
[ 0.708488] acpiphp: Slot [19] registered
[ 0.709866] acpiphp: Slot [20] registered
[ 0.711401] acpiphp: Slot [21] registered
[ 0.712920] acpiphp: Slot [22] registered
[ 0.714043] acpiphp: Slot [23] registered
[ 0.714776] acpiphp: Slot [24] registered
[ 0.715829] acpiphp: Slot [25] registered
[ 0.717250] acpiphp: Slot [26] registered
[ 0.718470] acpiphp: Slot [27] registered
[ 0.719732] acpiphp: Slot [28] registered
[ 0.721232] acpiphp: Slot [29] registered
[ 0.722529] acpiphp: Slot [30] registered
[ 0.723845] acpiphp: Slot [31] registered
[ 0.725180] PCI host bridge to bus 0000:00
[ 0.726652] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[ 0.728988] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[ 0.731835] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[ 0.733976] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
[ 0.736072] pci_bus 0000:00: root bus resource [mem 0x140000000-0x1bfffffff window]
[ 0.737955] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 0.750935] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
[ 0.753066] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
[ 0.755037] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
[ 0.757890] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
[ 0.761475] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
[ 0.763492] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
[ 0.986680] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 0.990767] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 0.993032] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 0.995346] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 0.997547] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[ 1.000842] vgaarb: loaded
[ 1.002037] SCSI subsystem initialized
[ 1.003266] ACPI: bus type USB registered
[ 1.004608] usbcore: registered new interface driver usbfs
[ 1.006375] usbcore: registered new interface driver hub
[ 1.012227] usbcore: registered new device driver usb
[ 1.019319] PCI: Using ACPI for IRQ routing
[ 1.023252] NetLabel: Initializing
[ 1.024088] NetLabel: domain hash size = 128
[ 1.025830] NetLabel: protocols = UNLABELED CIPSOv4
[ 1.027668] NetLabel: unlabeled traffic allowed by default
[ 1.030116] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[ 1.031804] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[ 1.036796] amd_nb: Cannot enumerate AMD northbridges
[ 1.038587] Switched to clocksource kvm-clock
[ 1.058478] pnp: PnP ACPI init
[ 1.059666] ACPI: bus type PNP registered
[ 1.062112] pnp: PnP ACPI: found 6 devices
[ 1.063283] ACPI: bus type PNP unregistered
[ 1.077814] NET: Registered protocol family 2
[ 1.079483] TCP established hash table entries: 32768 (order: 6, 262144 bytes)
[ 1.081805] TCP bind hash table entries: 32768 (order: 8, 1048576 bytes)
[ 1.084440] TCP: Hash tables configured (established 32768 bind 32768)
[ 1.086320] TCP: reno registered
[ 1.087265] UDP hash table entries: 2048 (order: 5, 196608 bytes)
[ 1.089164] UDP-Lite hash table entries: 2048 (order: 5, 196608 bytes)
[ 1.091313] NET: Registered protocol family 1
[ 1.094059] RPC: Registered named UNIX socket transport module.
[ 1.095647] RPC: Registered udp transport module.
[ 1.096790] RPC: Registered tcp transport module.
[ 1.098082] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 1.099774] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 1.104647] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 1.107072] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 1.109553] Unpacking initramfs...
[ 2.626709] debug: unmapping init [mem 0xffff8800bc2e2000-0xffff8800bffbffff]
[ 2.630316] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 2.632062] software IO TLB [mem 0xb82e2000-0xbc2e2000] (64MB) mapped at [ffff8800b82e2000-ffff8800bc2e1fff]
[ 2.634886] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer
[ 2.636994] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules
[ 2.638554] RAPL PMU: hw unit of domain package 2^-0 Joules
[ 2.640087] RAPL PMU: hw unit of domain dram 2^-0 Joules
[ 2.645437] cryptomgr_test (52) used greatest stack depth: 14480 bytes left
[ 2.649079] futex hash table entries: 1024 (order: 4, 65536 bytes)
[ 2.653783] Initialise system trusted keyring
[ 2.690622] HugeTLB registered 1 GB page size, pre-allocated 0 pages
[ 2.692477] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 2.700234] zpool: loaded
[ 2.701349] zbud: loaded
[ 2.702667] VFS: Disk quotas dquot_6.6.0
[ 2.704398] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 2.711822] NFS: Registering the id_resolver key type
[ 2.713286] Key type id_resolver registered
[ 2.714716] Key type id_legacy registered
[ 2.716168] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[ 2.718706] Key type big_key registered
[ 2.727856] cryptomgr_test (58) used greatest stack depth: 14048 bytes left
[ 2.732010] cryptomgr_test (63) used greatest stack depth: 13984 bytes left
[ 2.733452] NET: Registered protocol family 38
[ 2.733463] Key type asymmetric registered
[ 2.733466] Asymmetric key parser 'x509' registered
[ 2.733607] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[ 2.733811] io scheduler noop registered
[ 2.733816] io scheduler deadline registered (default)
[ 2.733882] io scheduler cfq registered
[ 2.733887] io scheduler mq-deadline registered
[ 2.733891] io scheduler kyber registered
[ 2.736066] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 2.736074] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 2.753814] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 2.755932] ACPI: Power Button [PWRF]
[ 2.757536] GHES: HEST is not enabled!
[ 2.816768] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ 2.875979] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 11
[ 3.001454] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ 3.066036] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
[ 3.191813] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 3.218910] 00:03: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[ 3.249130] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 3.252815] Non-volatile memory driver v1.3
[ 3.254306] Linux agpgart interface v0.103
[ 3.256073] crash memory driver: version 1.1
[ 3.258028] nbd: registered device at major 43
[ 3.272000] virtio_blk virtio1: [vda] 67352 512-byte logical blocks (34.4 MB/32.8 MiB)
[ 3.290453] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB)
[ 3.305503] virtio_blk virtio3: [vdc] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[ 3.317706] virtio_blk virtio4: [vdd] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB)
[ 3.331071] virtio_blk virtio5: [vde] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[ 3.343829] virtio_blk virtio6: [vdf] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[ 3.350075] rdac: device handler registered
[ 3.352393] hp_sw: device handler registered
[ 3.353909] emc: device handler registered
[ 3.355523] libphy: Fixed MDIO Bus: probed
[ 3.361082] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 3.362981] ehci-pci: EHCI PCI platform driver
[ 3.364262] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 3.366536] ohci-pci: OHCI PCI platform driver
[ 3.368658] uhci_hcd: USB Universal Host Controller Interface driver
[ 3.372805] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 3.377312] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 3.379038] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 3.381416] mousedev: PS/2 mouse device common for all mice
[ 3.384036] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[ 3.387005] rtc_cmos 00:05: RTC can wake from S4
[ 3.388417] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0
[ 3.389125] rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
[ 3.396888] hidraw: raw HID events driver (C) Jiri Kosina
[ 3.400472] usbcore: registered new interface driver usbhid
[ 3.403199] usbhid: USB HID core driver
[ 3.404436] drop_monitor: Initializing network drop monitor service
[ 3.407018] Netfilter messages via NETLINK v0.30.
[ 3.408812] TCP: cubic registered
[ 3.409639] Initializing XFRM netlink socket
[ 3.411653] NET: Registered protocol family 10
[ 3.414183] NET: Registered protocol family 17
[ 3.415861] Key type dns_resolver registered
[ 3.418479] mce: Using 10 MCE banks
[ 3.420085] Loading compiled-in X.509 certificates
[ 3.422736] Loaded X.509 cert 'Magrathea: Glacier signing key: e34d0e1b7fcf5b414cce75d36d8482945c781ed6'
[ 3.425989] registered taskstats version 1
[ 3.429990] modprobe (72) used greatest stack depth: 13456 bytes left
[ 3.435306] Key type trusted registered
[ 3.441101] Key type encrypted registered
[ 3.442385] IMA: No TPM chip found, activating TPM-bypass! (rc=-19)
[ 3.446266] BERT: Boot Error Record Table support is disabled. Enable it by using bert_enable as kernel parameter.
[ 3.451354] rtc_cmos 00:05: setting system clock to 2024-04-16 19:39:13 UTC (1713296353)
[ 3.456227] debug: unmapping init [mem 0xffffffff81da0000-0xffffffff82018fff]
[ 3.458939] Write protecting the kernel read-only data: 12288k
[ 3.461298] debug: unmapping init [mem 0xffff8800017fe000-0xffff8800017fffff]
[ 3.463793] debug: unmapping init [mem 0xffff880001b9b000-0xffff880001bfffff]
[ 3.473927] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.477537] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.481335] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.485897] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[ 3.492523] systemd[1]: Detected virtualization kvm.
[ 3.494047] systemd[1]: Detected architecture x86-64.
[ 3.495773] systemd[1]: Running in initial RAM disk.

Welcome to CentOS Linux 7 (Core) dracut-033-572.el7 (Initramfs)!

[ 3.501809] systemd[1]: No hostname configured.
[ 3.503830] systemd[1]: Set hostname to .
[ 3.505695] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.508233] systemd[1]: Initializing machine ID from random generator.
[ 3.576320] dracut-rootfs-g (86) used greatest stack depth: 13264 bytes left
[ 3.579590] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.581465] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.583699] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.585857] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.590192] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.592543] random: systemd: uninitialized urandom read (16 bytes read)
[ 3.604023] systemd[1]: Reached target Local File Systems.
[ OK ] Reached target Local File Systems.
[ 3.610509] systemd[1]: Created slice Root Slice.
[ OK ] Created slice Root Slice.
[ 3.615286] systemd[1]: Listening on udev Kernel Socket.
[ OK ] Listening on udev Kernel Socket.
[ 3.619713] systemd[1]: Listening on udev Control Socket.
[ OK ] Listening on udev Control Socket.
[ 3.625131] systemd[1]: Reached target Timers.
[ OK ] Reached target Timers.
[ 3.628758] systemd[1]: Listening on Journal Socket.
[ OK ] Listening on Journal Socket.
[ 3.633708] systemd[1]: Reached target Sockets.
[ OK ] Reached target Sockets.
[ 3.636865] systemd[1]: Reached target Swap.
[ OK ] Reached target Swap.
[ 3.641511] systemd[1]: Created slice System Slice.
[ 3.642702] tsc: Refined TSC clocksource calibration: 2399.989 MHz
[ OK ] Created slice System Slice.
[ 3.650506] systemd[1]: Starting Create list of required static device nodes for the current kernel...
Starting Create list of required st... nodes for the current kernel...
[ 3.660912] systemd[1]: Starting Setup Virtual Console...
Starting Setup Virtual Console...
[ 3.666275] systemd[1]: Starting dracut cmdline hook...
Starting dracut cmdline hook...
[ 3.669717] systemd[1]: Reached target Slices.
[ OK ] Reached target Slices.
[ 3.674778] systemd[1]: Starting Load Kernel Modules...
Starting Load Kernel Modules...
[ 3.680497] systemd[1]: Starting Journal Service...
Starting Journal Service...
[ 3.685226] systemd[1]: Started Create list of required static device nodes for the current kernel.
[ 3.691621] hrtimer: interrupt took 3762563 ns
[ OK ] Started Create list of required sta...ce nodes for the current kernel.
[ 3.713827] systemd[1]: Started Setup Virtual Console.
[ OK ] Started Setup Virtual Console.
[ 3.721518] systemd[1]: Started Load Kernel Modules.
[ OK ] Started Load Kernel Modules.
[ 3.731890] systemd[1]: Starting Apply Kernel Variables...
Starting Apply Kernel Variables...
[ 3.744986] systemd[1]: Starting Create Static Device Nodes in /dev...
Starting Create Static Device Nodes in /dev...
[ 3.753539] systemd[1]: Started Journal Service.
[ OK ] Started Journal Service.
[ OK ] Started Apply Kernel Variables.
[ OK ] Started Create Static Device Nodes in /dev.
[ 3.946370] random: fast init done
[ OK ] Started dracut cmdline hook.
Starting dracut pre-udev hook...
[ OK ] Started dracut pre-udev hook.
Starting udev Kernel Device Manager...
[ OK ] Started udev Kernel Device Manager.
Starting dracut pre-trigger hook...
[ OK ] Started dracut pre-trigger hook.
Starting udev Coldplug all Devices...
[ 4.234609] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mounting Configuration File System...
[ OK ] Mounted Configuration File System.
[ OK ] Started udev Coldplug all Devices.
Starting dracut initqueue hook...
[ OK ] Reached target System Initialization.
Starting Show Plymouth Boot Screen...
[ 4.323668] scsi host0: ata_piix
[ 4.327791] scsi host1: ata_piix
[ 4.331864] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc320 irq 14
[ 4.333513] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc328 irq 15
[ OK ] Started Show Plymouth Boot Screen.
[ OK ] Reached target Paths.
[ OK ] Started Forward Password Requests to Plymouth Directory Watch.
[ OK ] Reached target Basic System.
[ 4.540747] ip (320) used greatest stack depth: 13080 bytes left
[ 4.601109] ip (343) used greatest stack depth: 12336 bytes left
[ 6.271793] dracut-initqueue[275]: RTNETLINK answers: File exists
[ 7.010890] dracut-initqueue[275]: bs=4096, sz=32212254720 bytes
[ OK ] Started dracut initqueue hook.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
Mounting /sysroot...
[ OK ] Reached target Initrd Root File System.
Starting Reload Configuration from the Real Root...
[ 7.714412] EXT4-fs (nbd0): mounted filesystem with ordered data mode. Opts: (null)
[ OK ] Mounted /sysroot.
[ OK ] Started Reload Configuration from the Real Root.
[ OK ] Reached target Initrd File Systems.
[ OK ] Reached target Initrd Default Target.
Starting dracut pre-pivot and cleanup hook...
[ OK ] Started dracut pre-pivot and cleanup hook.
Starting Cleaning Up and Shutting Down Daemons...
Starting Plymouth switch root service...
[ OK ] Stopped dracut pre-pivot and cleanup hook.
[ OK ] Stopped target Remote File Systems.
[ OK ] Stopped target Remote File Systems (Pre).
[ OK ] Stopped dracut initqueue hook.
[ OK ] Stopped target Timers.
[ OK ] Stopped target Initrd Default Target.
[ OK ] Stopped target Basic System.
[ OK ] Stopped target System Initialization.
[ OK ] Stopped Apply Kernel Variables.
[ OK ] Stopped udev Coldplug all Devices.
[ OK ] Stopped dracut pre-trigger hook.
Stopping udev Kernel Device Manager...
[ OK ] Stopped target Local File Systems.
[ OK ] Stopped Load Kernel Modules.
[ OK ] Stopped target Sockets.
[ OK ] Stopped target Slices.
[ OK ] Stopped target Paths.
[ OK ] Stopped target Swap.
[ OK ] Stopped udev Kernel Device Manager.
[ OK ] Started Cleaning Up and Shutting Down Daemons.
[ OK ] Stopped dracut pre-udev hook.
[ OK ] Stopped dracut cmdline hook.
[ OK ] Stopped Create Static Device Nodes in /dev.
[ OK ] Stopped Create list of required sta...ce nodes for the current kernel.
[ OK ] Closed udev Control Socket.
[ OK ] Closed udev Kernel Socket.
Starting Cleanup udevd DB...
[ OK ] Started Cleanup udevd DB.
[ OK ] Reached target Switch Root.
[ OK ] Started Plymouth switch root service.
Starting Switch Root...
[ 8.225433] systemd-journald[108]: Received SIGTERM from PID 1 (systemd).
[ 8.518217] SELinux: Disabled at runtime.
[ 8.614350] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 8.620912] systemd[1]: Inserted module 'ip_tables'

Welcome to CentOS Linux 7 (Core)!

[ OK ] Stopped Switch Root.
[ OK ] Stopped Journal Service.
Starting Journal Service...
Starting Create list of required st... nodes for the current kernel...
Mounting Debug File System...
[ OK ] Listening on udev Kernel Socket.
[ OK ] Reached target rpc_pipefs.target.
[ OK ] Set up automount Arbitrary Executab...ats File System Automount Point.
[ OK ] Listening on udev Control Socket.
Starting udev Coldplug all Devices...
Mounting Huge Pages File System...
Starting Set Up Additional Binary Formats...
Starting Remount Root and Kernel File Systems...
[ OK ] Stopped target Switch Root.
Starting Read and set NIS domainname from /etc/sysconfig/network...
[ OK ] Created slice User and Session Slice.
[ OK ] Listening on /dev/initctl Compatibility Named Pipe.
[ OK ] Started Forward Password Requests to Wall Directory Watch.
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Created slice system-serial\x2dgetty.slice.
Mounting POSIX Message Queue File System...
[ OK ] Reached target Local Encrypted Volumes.
[ OK ] Reached target Slices.
[ OK ] Stopped target Initrd Root File System.
[ OK ] Created slice system-getty.slice.
Starting Load Kernel Modules...
[ OK ] Created slice system-selinux\x2dpol...grate\x2dlocal\x2dchanges.slice.
[ OK ] Stopped target Initrd File Systems.
[ OK ] Mounted Huge Pages File System.
[ OK ] Mounted POSIX Message Queue File System.
[ OK ] Mounted Debug File System.
[ OK ] Started Create list of required sta...ce nodes for the current kernel.
Mounting Arbitrary Executable File Formats File System...
Starting Create Static Device Nodes in /dev...
[ OK ] Started Read and set NIS domainname from /etc/sysconfig/network.
[ OK ] Started Load Kernel Modules.
[ OK ] Started Journal Service.
Starting Apply Kernel Variables...
[ OK ] Started udev Coldplug all Devices.
[ OK ] Mounted Arbitrary Executable File Formats File System.
[ OK ] Started Apply Kernel Variables.
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
Starting Configure read-only root support...
Starting Flush Journal to Persistent Storage...
[ OK ] Started Set Up Additional Binary Formats.
[ OK ] Started Create Static Device Nodes in /dev.
Starting udev Kernel Device Manager...
[ OK ] Reached target Local File Systems (Pre).
Mounting /mnt...
[ OK ] Mounted /mnt.
[ 9.223459] systemd-journald[565]: Received request to flush runtime journal from PID 1
[ OK ] Started Flush Journal to Persistent Storage.
[ OK ] Started udev Kernel Device Manager.
[ 9.489018] input: PC Speaker as /devices/platform/pcspkr/input/input3
[ 9.523229] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
[ OK ] Found device /dev/ttyS1.
[ OK ] Found device /dev/ttyS0.
[ 9.599606] cryptd: max_cpu_qlen set to 1000
[ OK ] Found device /dev/vda.
Mounting /home/green/git/lustre-release...
[ OK ] Found device /dev/disk/by-label/SWAP.
Activating swap /dev/disk/by-label/SWAP...
[ 9.656535] AVX version of gcm_enc/dec engaged.
[ 9.658639] AES CTR mode by8 optimization enabled
[ 9.668329] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[ 9.685470] Adding 1048572k swap on /dev/vdb. Priority:-2 extents:1 across:1048572k FS
[ OK ] Mounted /home/green/git/lustre-release.
[ 9.695059] find (642) used greatest stack depth: 11824 bytes left
[ OK ] Activated swap /dev/disk/by-label/SWAP.
[ OK ] Reached target Swap.
[ 9.723693] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[ 9.729817] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni)
[ 9.934214] EDAC MC: Ver: 3.0.0
[ 9.945717] EDAC sbridge: Ver: 1.1.2
[ 13.202940] mount.nfs (771) used greatest stack depth: 10544 bytes left
[ OK ] Started Configure read-only root support.
[ OK ] Reached target Local File Systems.
Starting Preprocess NFS configuration...
Starting Mark the need to relabel after reboot...
Starting Tell Plymouth To Write Out Runtime Data...
Starting Create Volatile Files and Directories...
Starting Rebuild Journal Catalog...
Starting Load/Save Random Seed...
[ OK ] Started Preprocess NFS configuration.
[ OK ] Started Mark the need to relabel after reboot.
[ OK ] Started Tell Plymouth To Write Out Runtime Data.
[ OK ] Started Load/Save Random Seed.
[FAILED] Failed to start Create Volatile Files and Directories.
See 'systemctl status systemd-tmpfiles-setup.service' for details.
Starting Update UTMP about System Boot/Shutdown...
[FAILED] Failed to start Rebuild Journal Catalog.
See 'systemctl status systemd-journal-catalog-update.service' for details.
Starting Update is Completed...
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Started Update is Completed.
[ OK ] Reached target System Initialization.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers.
[ OK ] Started Flexible branding.
[ OK ] Reached target Paths.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Listening on RPCbind Server Activation Socket.
[ OK ] Reached target Sockets.
[ OK ] Reached target Basic System.
Starting Dump dmesg to /var/log/dmesg...
[ OK ] Started D-Bus System Message Bus.
Starting Login Service...
Starting GSSAPI Proxy Daemon...
Starting Network Manager...
[ OK ] Started Dump dmesg to /var/log/dmesg.
[ OK ] Started Login Service.
[ OK ] Started GSSAPI Proxy Daemon.
[ OK ] Reached target NFS client services.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
Starting Permit User Sessions...
[ OK ] Started Permit User Sessions.
[ OK ] Started Network Manager.
Starting Network Manager Wait Online...
[ OK ] Reached target Network.
Starting OpenSSH server daemon...
Starting /etc/rc.d/rc.local Compatibility...
Starting Hostname Service...
[ OK ] Started OpenSSH server daemon.
[ OK ] Started /etc/rc.d/rc.local Compatibility.
[ OK ] Started Hostname Service.
Starting Wait for Plymouth Boot Screen to Quit...
Starting Terminate Plymouth Boot Screen...
Starting Network Manager Script Dispatcher Service...

CentOS Linux 7 (Core)
Kernel 3.10.0-7.9-debug on an x86_64

oleg354-server login: [ 23.494056] device-mapper: uevent: version 1.0.3
[ 23.495877] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel@redhat.com
[ 27.945041] libcfs: loading out-of-tree module taints kernel.
[ 27.946750] libcfs: module verification failed: signature and/or required key missing - tainting kernel
[ 27.975015] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_hostid
[ 32.681898] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing load_modules_local
[ 32.889059] alg: No test for adler32 (adler32-zlib)
[ 33.639649] libcfs: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1
[ 33.774699] Lustre: Lustre: Build Version: 2.15.62_23_gf1c145f
[ 33.934182] LNet: Added LNI 192.168.203.154@tcp [8/256/0/180]
[ 33.935530] LNet: Accept secure, port 988
[ 35.475777] Key type lgssc registered
[ 35.773495] Lustre: Echo OBD driver; http://www.lustre.org/
[ 38.707229] icp: module license 'CDDL' taints kernel.
[ 38.709226] Disabling lock debugging due to kernel taint
[ 41.258942] ZFS: Loaded module v0.8.6-1, ZFS pool version 5000, ZFS filesystem version 5
[ 44.228460] LDISKFS-fs (vdc): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 48.729371] LDISKFS-fs (vdd): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 50.901346] LDISKFS-fs (vde): file extents enabled, maximum tree depth=5
[ 50.904577] LDISKFS-fs (vde): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 53.064556] LDISKFS-fs (vdf): file extents enabled, maximum tree depth=5
[ 53.070470] LDISKFS-fs (vdf): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 56.086828] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing load_modules_local
[ 59.232681] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 59.251944] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt'
[ 59.259614] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 60.330533] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
[ 60.338364] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space.
[ 60.372728] Lustre: lustre-MDT0000: new disk, initializing
[ 60.390660] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 60.395839] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt
[ 60.420277] mount.lustre (6910) used greatest stack depth: 10144 bytes left
[ 61.147175] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 65.159644] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 65.184141] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 65.203302] Lustre: Setting parameter lustre-MDT0001.mdt.identity_upcall in log lustre-MDT0001
[ 65.210233] Lustre: srv-lustre-MDT0001: No data found on store. Initialize space.
[ 65.213009] Lustre: Skipped 1 previous similar message
[ 65.243909] Lustre: lustre-MDT0001: new disk, initializing
[ 65.255228] Lustre: lustre-MDT0001: Imperative Recovery not enabled, recovery window 60-180
[ 65.260469] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:1:mdt
[ 65.263482] Lustre: cli-ctl-lustre-MDT0001: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:1:mdt]
[ 65.975645] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 69.941802] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 69.945415] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 69.966703] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 69.970772] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 70.040967] Lustre: lustre-OST0000: new disk, initializing
[ 70.042504] Lustre: srv-lustre-OST0000: No data found on store. Initialize space.
[ 70.054630] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
[ 71.225640] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 72.338067] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:0:ost
[ 72.341675] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:0:ost]
[ 72.351001] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x280000401
[ 75.314085] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5
[ 75.318929] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: errors=remount-ro
[ 75.342168] LDISKFS-fs (dm-3): file extents enabled, maximum tree depth=5
[ 75.346901] LDISKFS-fs (dm-3): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 75.372965] Lustre: lustre-OST0001: new disk, initializing
[ 75.374631] Lustre: srv-lustre-OST0001: No data found on store. Initialize space.
[ 75.385319] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180
[ 76.729239] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
[ 77.642652] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x00000002c0000400-0x0000000300000400]:1:ost
[ 77.646476] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x00000002c0000400-0x0000000300000400]:1:ost]
[ 77.656029] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x2c0000401
[ 81.726410] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 88.474200] random: crng init done
[ 89.034310] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[ 94.785988] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing check_logdir /tmp/testlogs/
[ 95.674372] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing yml_node
[ 96.779272] Lustre: DEBUG MARKER: Client: 2.15.62.23
[ 97.476695] Lustre: DEBUG MARKER: MDS: 2.15.62.23
[ 98.873198] Lustre: DEBUG MARKER: OSS: 2.15.62.23
[ 100.000897] Lustre: DEBUG MARKER: -----============= acceptance-small: recovery-small ============----- Tue Apr 16 15:40:49 EDT 2024
[ 102.879561] Lustre: DEBUG MARKER: excepting tests: 136
[ 103.536077] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing check_config_client /mnt/lustre
[ 108.292448] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 109.136662] Lustre: Modifying parameter general.lod.*.mdt_hash in log params
[ 109.744162] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 111.608741] Lustre: DEBUG MARKER: == recovery-small test 1: create, chmod, stat: drop req, drop rep ========================================================== 15:41:00 (1713296460)
[ 111.881381] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 127.898047] Lustre: lustre-MDT0000: Client 13e59011-7e27-4021-b556-e51a9feb2d07 (at 192.168.203.54@tcp) reconnecting
[ 128.393389] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 128.395293] LustreError: 6931:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a638d180 x1796521462864000/t4294967300(0) o36->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:659/0 lens 520/448 e 0 to 0 dl 1713296489 ref 1 fl Interpret:/200/0 rc 0/0 job:'mcreate.0' uid:0 gid:0
[ 144.408293] Lustre: lustre-MDT0000: Client 13e59011-7e27-4021-b556-e51a9feb2d07 (at 192.168.203.54@tcp) reconnecting
[ 144.415511] Lustre: 6933:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a0c90700 x1796521462864000/t4294967300(0) o36->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:675/0 lens 520/2880 e 0 to 0 dl 1713296505 ref 1 fl Interpret:/202/0 rc 0/0 job:'mcreate.0' uid:0 gid:0
[ 144.883725] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 160.896499] Lustre: lustre-MDT0000: Client 13e59011-7e27-4021-b556-e51a9feb2d07 (at 192.168.203.54@tcp) reconnecting
[ 161.341527] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 161.342968] LustreError: 6931:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012bcac700 x1796521462866304/t4294967302(0) o36->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:692/0 lens 488/456 e 0 to 0 dl 1713296522 ref 1 fl Interpret:/200/0 rc 0/0 job:'tchmod.0' uid:0 gid:0
[ 177.355561] Lustre: lustre-MDT0000: Client 13e59011-7e27-4021-b556-e51a9feb2d07 (at 192.168.203.54@tcp) reconnecting
[ 177.364466] Lustre: 6931:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880079947480 x1796521462866304/t4294967302(0) o36->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:708/0 lens 488/3152 e 0 to 0 dl 1713296538 ref 1 fl Interpret:/202/0 rc 0/0 job:'tchmod.0' uid:0 gid:0
[ 177.878849] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 193.892298] Lustre: lustre-MDT0000: Client 13e59011-7e27-4021-b556-e51a9feb2d07 (at 192.168.203.54@tcp) reconnecting
[ 194.372526] Lustre: *** cfs_fail_loc=122, val=2147483648***
[ 194.374114] LustreError: 8082:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8801307fe300 x1796521462868096/t0(0) o34->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:725/0 lens 472/464 e 0 to 0 dl 1713296555 ref 1 fl Interpret:/200/0 rc 0/0 job:'statone.0' uid:0 gid:0
[ 210.384447] Lustre: lustre-MDT0000: Client 13e59011-7e27-4021-b556-e51a9feb2d07 (at 192.168.203.54@tcp) reconnecting
[ 213.550802] Lustre: DEBUG MARKER: == recovery-small test 4: open: drop req, drop rep ======= 15:42:42 (1713296562)
[ 213.825061] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 229.837080] Lustre: lustre-MDT0000: Client 13e59011-7e27-4021-b556-e51a9feb2d07 (at 192.168.203.54@tcp) reconnecting
[ 230.329760] Lustre: *** cfs_fail_loc=122, val=2147483648***
[ 230.331496] LustreError: 6935:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a0a19180 x1796521462871104/t4294967308(0) o35->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:6/0 lens 392/456 e 0 to 0 dl 1713296591 ref 1 fl Interpret:/200/0 rc 0/0 job:'cat.0' uid:0 gid:0
[ 246.331823] Lustre: 6935:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012d20b800 x1796521462871104/t4294967308(0) o35->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:22/0 lens 392/456 e 0 to 0 dl 1713296607 ref 1 fl Interpret:/202/0 rc 0/0 job:'cat.0' uid:0 gid:0
[ 249.297619] Lustre: DEBUG MARKER: == recovery-small test 5: rename: drop req, drop rep ===== 15:43:18 (1713296598)
[ 249.558521] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 265.578146] Lustre: lustre-MDT0000: Client 13e59011-7e27-4021-b556-e51a9feb2d07 (at 192.168.203.54@tcp) reconnecting
[ 265.581252] Lustre: Skipped 1 previous similar message
[ 266.096163] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 266.098444] LustreError: 6946:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a09e8000 x1796521462874560/t4294967312(0) o36->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:42/0 lens 552/456 e 0 to 0 dl 1713296627 ref 1 fl Interpret:/200/0 rc 0/0 job:'mv.0' uid:0 gid:0
[ 282.098031] Lustre: 6946:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a0b72300 x1796521462874560/t4294967312(0) o36->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:58/0 lens 552/2888 e 0 to 0 dl 1713296643 ref 1 fl Interpret:/202/0 rc 0/0 job:'mv.0' uid:0 gid:0
[ 285.319885] Lustre: DEBUG MARKER: == recovery-small test 6: link, unlink: drop req, drop rep ========================================================== 15:43:54 (1713296634)
[ 285.595618] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 302.097730] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 302.099559] LustreError: 6933:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a1084380 x1796521462878400/t4294967317(0) o36->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:78/0 lens 512/440 e 0 to 0 dl 1713296663 ref 1 fl Interpret:/200/0 rc 0/0 job:'link.0' uid:0 gid:0
[ 318.098699] Lustre: 8082:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a0a19880 x1796521462878400/t4294967317(0) o36->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:94/0 lens 512/440 e 0 to 0 dl 1713296679 ref 1 fl Interpret:/202/0 rc 0/0 job:'link.0' uid:0 gid:0
[ 318.607903] Lustre: *** cfs_fail_loc=123, val=2147483648***
[ 334.624715] Lustre: lustre-MDT0000: Client 13e59011-7e27-4021-b556-e51a9feb2d07 (at 192.168.203.54@tcp) reconnecting
[ 334.627247] Lustre: Skipped 3 previous similar messages
[ 335.091889] Lustre: *** cfs_fail_loc=119, val=2147483648***
[ 335.093389] LustreError: 9223:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a04e0a80 x1796521462881280/t4294967319(0) o36->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:111/0 lens 504/456 e 0 to 0 dl 1713296696 ref 1 fl Interpret:/200/0 rc 0/0 job:'unlink.0' uid:0 gid:0
[ 351.093003] Lustre: 16976:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012de6b480 x1796521462881280/t4294967319(0) o36->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:127/0 lens 504/2888 e 0 to 0 dl 1713296712 ref 1 fl Interpret:/202/0 rc 0/0 job:'unlink.0' uid:0 gid:0
[ 354.013300] Lustre: DEBUG MARKER: == recovery-small test 8: touch: drop rep (bug 1423) ===== 15:45:03 (1713296703)
[ 370.234889] Lustre: 16976:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880130cb8700 x1796521462882944/t4294967322(0) o36->13e59011-7e27-4021-b556-e51a9feb2d07@192.168.203.54@tcp:146/0 lens 488/3152 e 0 to 0 dl 1713296731 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0
[ 373.237140] Lustre: DEBUG MARKER: == recovery-small test 9: pause bulk on OST (bug 1420) === 15:45:22 (1713296722)
[ 373.733330] LustreError: 18988:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 sleeping for 5000ms
[ 378.735676] LustreError: 18988:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 awake
[ 381.904321] Lustre: DEBUG MARKER: == recovery-small test 10a: finish request on server after client eviction (bug 1521) ========================================================== 15:45:31 (1713296731)
[ 397.964653] Lustre: 6932:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296732/real 1713296732] req@ffff880072e51f80 x1796521468192512/t0(0) o104->lustre-MDT0000@192.168.203.54@tcp:15/16 lens 328/224 e 0 to 1 dl 1713296748 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295
[ 400.271695] Lustre: 9201:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296734/real 1713296734] req@ffff880072d94e00 x1796521468193728/t0(0) o104->lustre-OST0000@192.168.203.54@tcp:15/16 lens 328/224 e 0 to 1 dl 1713296750 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295
[ 400.271702] Lustre: 11427:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296734/real 1713296734] req@ffff8800729cd880 x1796521468193792/t0(0) o104->lustre-OST0001@192.168.203.54@tcp:15/16 lens 328/224 e 0 to 1 dl 1713296750 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295
[ 400.271708] Lustre: 11427:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message
[ 413.970700] Lustre: 6932:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296748/real 1713296748] req@ffff880072e51f80 x1796521468192512/t0(0) o104->lustre-MDT0000@192.168.203.54@tcp:15/16 lens 328/224 e 0 to 1 dl 1713296764 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295
[ 416.271683] Lustre: 11427:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296750/real 1713296750] req@ffff8800729cd880 x1796521468193792/t0(0) o104->lustre-OST0001@192.168.203.54@tcp:15/16 lens 328/224 e 0 to 1 dl 1713296766 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295
[ 416.280731] Lustre: 11427:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message
[ 422.030673] Lustre: mdt00_001: service thread pid 6932 was inactive for 40.066 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[ 422.035672] Pid: 6932, comm: mdt00_001 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 422.037471] Call Trace:
[ 422.038156] [<0>] ptlrpc_set_wait+0x7cf/0x850 [ptlrpc]
[ 422.039334] [<0>] ldlm_run_ast_work+0xe3/0x400 [ptlrpc]
[ 422.040611] [<0>] ldlm_handle_conflict_lock+0x70/0x300 [ptlrpc]
[ 422.041848] [<0>] ldlm_lock_enqueue+0x5c2/0xbb0 [ptlrpc]
[ 422.043314] [<0>] ldlm_cli_enqueue_local+0x1ec/0x880 [ptlrpc]
[ 422.044899] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[ 422.046695] [<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[ 422.048074] [<0>] mdt_object_stripes_lock+0x126/0x660 [mdt]
[ 422.049254] [<0>] mdt_reint_setattr+0x73b/0x15f0 [mdt]
[ 422.050436] [<0>] mdt_reint_rec+0x87/0x240 [mdt]
[ 422.051483] [<0>] mdt_reint_internal+0x74c/0xbc0 [mdt]
[ 422.052475] [<0>] mdt_reint+0x67/0x150 [mdt]
[ 422.053655] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[ 422.055325] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc]
[ 422.057496] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc]
[ 422.058677] [<0>] kthread+0xe4/0xf0
[ 422.059432] [<0>] ret_from_fork_nospec_begin+0x7/0x21
[ 422.060331] [<0>] 0xfffffffffffffffe
[ 424.334658] Lustre: ll_ost00_000: service thread pid 9199 was inactive for 40.062 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
[ 424.334662] Lustre: ll_ost00_004: service thread pid 11427 was inactive for 40.062 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
[ 424.334667] Lustre: Skipped 1 previous similar message [ 424.344436] Pid: 9199, comm: ll_ost00_000 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 424.346107] Call Trace: [ 424.346576] [<0>] ptlrpc_set_wait+0x7cf/0x850 [ptlrpc] [ 424.347765] [<0>] ldlm_run_ast_work+0xe3/0x400 [ptlrpc] [ 424.348802] [<0>] ldlm_handle_conflict_lock+0x70/0x300 [ptlrpc] [ 424.349953] [<0>] ldlm_lock_enqueue+0x5c2/0xbb0 [ptlrpc] [ 424.351234] [<0>] ldlm_cli_enqueue_local+0x377/0x880 [ptlrpc] [ 424.352602] [<0>] ofd_destroy_by_fid+0x1d1/0x520 [ofd] [ 424.353772] [<0>] ofd_destroy_hdl+0x20c/0xae0 [ofd] [ 424.354973] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 424.356283] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 424.357712] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 424.358862] [<0>] kthread+0xe4/0xf0 [ 424.359834] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 424.360861] [<0>] 0xfffffffffffffffe [ 429.977702] Lustre: 6932:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296764/real 1713296764] req@ffff880072e51f80 x1796521468192512/t0(0) o104->lustre-MDT0000@192.168.203.54@tcp:15/16 lens 328/224 e 0 to 1 dl 1713296780 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 429.983725] Lustre: 6932:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 445.985690] Lustre: 6932:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296780/real 1713296780] req@ffff880072e51f80 x1796521468192512/t0(0) o104->lustre-MDT0000@192.168.203.54@tcp:15/16 lens 328/224 e 0 to 1 dl 1713296796 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 445.991936] Lustre: 6932:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 461.993711] Lustre: 6932:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296796/real 1713296796] req@ffff880072e51f80 x1796521468192512/t0(0) o104->lustre-MDT0000@192.168.203.54@tcp:15/16 lens 328/224 e 0 to 1 dl 1713296812 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 462.000183] Lustre: 6932:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 494.001669] Lustre: 6932:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713296828/real 1713296828] req@ffff880072e51f80 x1796521468192512/t0(0) o104->lustre-MDT0000@192.168.203.54@tcp:15/16 lens 328/224 e 0 to 1 dl 1713296844 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 494.009495] Lustre: 6932:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [ 494.011895] LustreError: 6932:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.203.54@tcp) failed to reply to blocking AST (req@ffff880072e51f80 x1796521468192512 status 0 rc -110), evict it ns: mdt-lustre-MDT0000_UUID lock: ffff8800acd41d40/0x60b500cd916e628d lrc: 4/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.203.54@tcp remote: 0x8583d892e34e6ee2 expref: 9 pid: 6932 timeout: 577 lvb_type: 0 [ 494.022758] LustreError: 138-a: lustre-MDT0000: A client on nid 192.168.203.54@tcp was evicted due to a lock blocking callback time out: rc -110 [ 494.026333] LustreError: 6923:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 16s: evicting client at 
192.168.203.54@tcp ns: mdt-lustre-MDT0000_UUID lock: ffff8800acd41d40/0x60b500cd916e628d lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.203.54@tcp remote: 0x8583d892e34e6ee2 expref: 10 pid: 6932 timeout: 0 lvb_type: 0 [ 496.271707] LustreError: 9199:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.203.54@tcp) failed to reply to blocking AST (req@ffff880072e51c00 x1796521468193856 status 0 rc -110), evict it ns: filter-lustre-OST0001_UUID lock: ffff88012b496400/0x60b500cd916e6201 lrc: 4/0,0 mode: PW/PW res: [0x2c0000401:0x5:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4194303) gid 0 flags: 0x60000400030020 nid: 192.168.203.54@tcp remote: 0x8583d892e34e6ec6 expref: 7 pid: 11427 timeout: 579 lvb_type: 0 [ 496.283695] LustreError: 138-a: lustre-OST0001: A client on nid 192.168.203.54@tcp was evicted due to a lock blocking callback time out: rc -110 [ 496.283769] LustreError: 6923:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 16s: evicting client at 192.168.203.54@tcp ns: filter-lustre-OST0001_UUID lock: ffff880084a7fcc0/0x60b500cd916e6159 lrc: 3/0,0 mode: PW/PW res: [0x2c0000401:0x4:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4194303) gid 0 flags: 0x60000400030020 nid: 192.168.203.54@tcp remote: 0x8583d892e34e6e80 expref: 8 pid: 11427 timeout: 0 lvb_type: 0 [ 496.294773] LustreError: 9199:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) Skipped 2 previous similar messages [ 496.998687] Lustre: DEBUG MARKER: == recovery-small test 10b: re-send BL AST =============== 15:47:26 (1713296846) [ 515.937593] Lustre: DEBUG MARKER: == recovery-small test 10c: re-send BL AST vs reconnect race (LU-5569) ========================================================== 15:47:45 (1713296865) [ 517.012257] Lustre: lustre-MDT0001: Client 13e59011-7e27-4021-b556-e51a9feb2d07 (at 192.168.203.54@tcp) reconnecting [ 517.014400] Lustre: Skipped 2 previous similar messages [ 519.782337] Lustre: DEBUG MARKER: == recovery-small test 10d: test failed blocking ast ===== 15:47:48 (1713296868) [ 521.236984] LustreError: 9201:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.203.54@tcp) returned error from blocking AST (req@ffff8800a0464380 x1796521468226176 status -71 rc -71), evict it ns: filter-lustre-OST0000_UUID lock: ffff88012b494900/0x60b500cd916e6699 lrc: 4/0,0 mode: PW/PW res: [0x280000401:0x7:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000480000020 nid: 192.168.203.54@tcp remote: 0x8583d892e34e70f6 expref: 5 pid: 9201 timeout: 620 lvb_type: 0 [ 521.247184] LustreError: 138-a: lustre-OST0000: A client on nid 192.168.203.54@tcp was evicted due to a lock blocking callback time out: rc -71 [ 521.249701] LustreError: Skipped 2 previous similar messages [ 521.250733] LustreError: 6923:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 192.168.203.54@tcp ns: filter-lustre-OST0000_UUID lock: ffff88012b494900/0x60b500cd916e6699 lrc: 3/0,0 mode: PW/PW res: [0x280000401:0x7:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000480000020 nid: 192.168.203.54@tcp remote: 0x8583d892e34e70f6 expref: 6 pid: 9201 timeout: 0 lvb_type: 0 [ 521.263007] LustreError: 6923:0:(ldlm_lockd.c:261:expired_lock_main()) Skipped 1 previous similar message [ 524.250825] Lustre: DEBUG MARKER: == recovery-small test 10e: 
re-send BL AST vs reconnect race 2 ========================================================== 15:47:53 (1713296873) [ 524.563286] Lustre: DEBUG MARKER: SKIP: recovery-small test_10e need two clients [ 526.304695] Lustre: DEBUG MARKER: == recovery-small test 11: wake up a thread waiting for completion after eviction (b=2460) ========================================================== 15:47:55 (1713296875) [ 546.579512] Lustre: DEBUG MARKER: == recovery-small test 12: recover from timed out resend in ptlrpcd (b=2494) ========================================================== 15:48:15 (1713296895) [ 546.834829] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 588.960705] Lustre: DEBUG MARKER: == recovery-small test 13: mdc_readpage restart test (bug 1138) ========================================================== 15:48:58 (1713296938) [ 608.332365] Lustre: DEBUG MARKER: == recovery-small test 14: mdc_readpage resend test (bug 1138) ========================================================== 15:49:17 (1713296957) [ 608.606776] Lustre: *** cfs_fail_loc=106, val=0*** [ 608.608512] Lustre: Skipped 1 previous similar message [ 611.686595] Lustre: DEBUG MARKER: == recovery-small test 15: failed open (-ENOMEM) ========= 15:49:20 (1713296960) [ 611.917885] Lustre: *** cfs_fail_loc=128, val=0*** [ 614.756525] Lustre: DEBUG MARKER: == recovery-small test 16: timeout bulk put, don't evict client (2732) ========================================================== 15:49:23 (1713296963) [ 615.113879] Lustre: *** cfs_fail_loc=504, val=0*** [ 615.115249] LustreError: 18988:0:(ldlm_lib.c:3601:target_bulk_io()) @@@ truncated bulk READ 0(102400) req@ffff8800a11cdf80 x1796521462928000/t0(0) o3->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:391/0 lens 488/440 e 0 to 0 dl 1713296976 ref 1 fl Interpret:/200/0 rc 0/0 job:'cmp.0' uid:0 gid:0 [ 615.120773] Lustre: lustre-OST0000: Bulk IO read error with cb7cbb66-426d-4359-9c74-d392fe50a966 (at 192.168.203.54@tcp), client will retry: rc -110 [ 654.338260] Lustre: DEBUG MARKER: == recovery-small test 17a: timeout bulk get, don't evict client (2732) ========================================================== 15:50:03 (1713297003) [ 698.845727] Lustre: DEBUG MARKER: == recovery-small test 17b: timeout bulk get, don't evict client (3582) ========================================================== 15:50:48 (1713297048) [ 699.167940] Lustre: DEBUG MARKER: SKIP: recovery-small test_17b Needs multiple clients [ 701.038070] Lustre: DEBUG MARKER: == recovery-small test 18a: manual ost invalidate clears page cache immediately ========================================================== 15:50:50 (1713297050) [ 704.046430] Lustre: DEBUG MARKER: == recovery-small test 18b: eviction and reconnect clears page cache (2766) ========================================================== 15:50:53 (1713297053) [ 704.462013] Lustre: 31605:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting cb7cbb66-426d-4359-9c74-d392fe50a966 at administrative request [ 729.542411] Lustre: DEBUG MARKER: == recovery-small test 18c: Dropped connect reply after eviction handling (14755) ========================================================== 15:51:18 (1713297078) [ 730.013271] Lustre: 32336:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting cb7cbb66-426d-4359-9c74-d392fe50a966 at administrative request [ 730.435538] Lustre: *** cfs_fail_loc=225, val=0*** [ 730.436558] Lustre: Skipped 1 previous similar message [ 745.568878] Lustre: DEBUG MARKER: == recovery-small test 19a: 
test expired_lock_main on mds (2867) ========================================================== 15:51:34 (1713297094) [ 746.169388] Lustre: *** cfs_fail_loc=304, val=0*** [ 762.195456] Lustre: *** cfs_fail_loc=304, val=0*** [ 778.195832] Lustre: lustre-MDT0000: Client cb7cbb66-426d-4359-9c74-d392fe50a966 (at 192.168.203.54@tcp) reconnecting [ 778.198227] Lustre: Skipped 6 previous similar messages [ 778.201298] Lustre: *** cfs_fail_loc=304, val=0*** [ 786.190740] Lustre: mdt00_006: service thread pid 21615 was inactive for 40.023 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [ 786.199332] Pid: 21615, comm: mdt00_006 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 786.201977] Call Trace: [ 786.203526] [<0>] ldlm_completion_ast+0x963/0xd00 [ptlrpc] [ 786.205489] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [ 786.207455] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [ 786.209180] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [ 786.211103] [<0>] mdt_object_stripes_lock+0x126/0x660 [mdt] [ 786.213463] [<0>] mdt_reint_setattr+0x73b/0x15f0 [mdt] [ 786.215928] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [ 786.218030] [<0>] mdt_reint_internal+0x74c/0xbc0 [mdt] [ 786.219682] [<0>] mdt_reint+0x67/0x150 [mdt] [ 786.221852] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 786.224024] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 786.225964] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 786.228082] [<0>] kthread+0xe4/0xf0 [ 786.229490] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 786.230600] [<0>] 0xfffffffffffffffe [ 794.203440] Lustre: *** cfs_fail_loc=304, val=0*** [ 810.208364] Lustre: *** cfs_fail_loc=304, val=0*** [ 826.219968] Lustre: *** cfs_fail_loc=304, val=0*** [ 842.223530] Lustre: *** cfs_fail_loc=304, val=0*** [ 846.350802] LustreError: 6923:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.203.54@tcp ns: mdt-lustre-MDT0000_UUID lock: ffff880084a7e880/0x60b500cd916e6fde lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.203.54@tcp remote: 0x8583d892e34e73f1 expref: 17 pid: 6931 timeout: 845 lvb_type: 0 [ 850.585230] Lustre: DEBUG MARKER: == recovery-small test 19b: test expired_lock_main on ost (2867) ========================================================== 15:53:19 (1713297199) [ 883.182385] Lustre: *** cfs_fail_loc=304, val=0*** [ 883.183616] Lustre: Skipped 5 previous similar messages [ 947.216249] Lustre: *** cfs_fail_loc=304, val=0*** [ 947.217473] Lustre: Skipped 7 previous similar messages [ 951.310706] LustreError: 6923:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.203.54@tcp ns: filter-lustre-OST0001_UUID lock: ffff88012de91440/0x60b500cd916e72d9 lrc: 3/0,0 mode: PW/PW res: [0x2c0000401:0xc:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 192.168.203.54@tcp remote: 0x8583d892e34e75a3 expref: 6 pid: 9199 timeout: 950 lvb_type: 0 [ 954.702757] Lustre: DEBUG MARKER: == recovery-small test 19c: check reconnect and lock resend do not trigger expired_lock_main ========================================================== 15:55:03 (1713297303) [ 965.271344] Lustre: DEBUG MARKER: == recovery-small test 20a: ldlm_handle_enqueue error (should return error) ========================================================== 15:55:14 (1713297314) [ 
968.719758] Lustre: DEBUG MARKER: == recovery-small test 20b: ldlm_handle_enqueue error (should return error) ========================================================== 15:55:17 (1713297317) [ 972.037904] Lustre: DEBUG MARKER: == recovery-small test 21a: drop close request while close and open are both in flight ========================================================== 15:55:21 (1713297321) [ 972.314205] LustreError: 6932:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout id 129 sleeping for 5000ms [ 973.616615] LustreError: 6932:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout interrupted [ 973.759978] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 992.961310] Lustre: DEBUG MARKER: == recovery-small test 21b: drop open request while close and open are both in flight ========================================================== 15:55:42 (1713297342) [ 1138.320499] Lustre: DEBUG MARKER: == recovery-small test 21c: drop both requests while close and open are both in flight ========================================================== 15:58:07 (1713297487) [ 1161.868958] Lustre: DEBUG MARKER: == recovery-small test 21d: drop close reply while close and open are both in flight ========================================================== 15:58:31 (1713297511) [ 1162.208847] LustreError: 6931:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout id 129 sleeping for 5000ms [ 1163.516633] LustreError: 6931:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout interrupted [ 1163.719107] Lustre: *** cfs_fail_loc=122, val=2147483648*** [ 1163.721277] LustreError: 16907:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880072d8d880 x1796521463010304/t4294967534(0) o35->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:184/0 lens 392/456 e 0 to 0 dl 1713297524 ref 1 fl Interpret:/200/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 1163.728309] LustreError: 16907:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 1179.722338] Lustre: 16907:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880082288850 x1796521463010304/t4294967534(0) o35->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:200/0 lens 392/456 e 0 to 0 dl 1713297540 ref 1 fl Interpret:/202/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 1184.160138] Lustre: DEBUG MARKER: == recovery-small test 21e: drop open reply while close and open are both in flight ========================================================== 15:58:53 (1713297533) [ 1184.456870] LustreError: 16976:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800776c5f80 x1796521463014976/t4294967551(0) o36->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:330/0 lens 488/456 e 0 to 0 dl 1713297670 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1325.462786] Lustre: lustre-MDT0000: Client cb7cbb66-426d-4359-9c74-d392fe50a966 (at 192.168.203.54@tcp) reconnecting [ 1325.465725] Lustre: Skipped 20 previous similar messages [ 1325.476530] Lustre: 6932:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a0402680 x1796521463014976/t4294967551(0) o36->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:471/0 lens 488/3152 e 0 to 0 dl 1713297811 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1327.876190] Lustre: DEBUG MARKER: == recovery-small test 21f: drop both replies while close and open are both in flight ========================================================== 16:01:16 (1713297676) [ 1328.220489] Lustre: *** cfs_fail_loc=119, 
val=2147483648*** [ 1328.223067] Lustre: Skipped 1 previous similar message [ 1328.224974] LustreError: 6932:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a0b65f80 x1796521463027392/t4294967570(0) o36->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:474/0 lens 488/456 e 0 to 0 dl 1713297814 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1345.892674] Lustre: 6933:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012df04a80 x1796521463027392/t4294967570(0) o36->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:491/0 lens 488/3152 e 0 to 0 dl 1713297831 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1345.905998] Lustre: 6933:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 1350.583788] Lustre: DEBUG MARKER: == recovery-small test 21g: drop open reply and close request while close and open are both in flight ========================================================== 16:01:39 (1713297699) [ 1350.997902] LustreError: 8082:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012df98e00 x1796521463033216/t4294967589(0) o36->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:497/0 lens 488/456 e 0 to 0 dl 1713297837 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1351.006077] LustreError: 8082:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 1352.580332] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 1352.581847] Lustre: Skipped 3 previous similar messages [ 1368.582719] Lustre: 6932:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880072ac1c00 x1796521463033216/t4294967589(0) o36->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:514/0 lens 488/3152 e 0 to 0 dl 1713297854 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1372.332728] Lustre: DEBUG MARKER: == recovery-small test 21h: drop open request and close reply while close and open are both in flight ========================================================== 16:02:01 (1713297721) [ 1394.470955] Lustre: DEBUG MARKER: == recovery-small test 22: drop close request and do mknod ========================================================== 16:02:23 (1713297743) [ 1415.063419] Lustre: DEBUG MARKER: == recovery-small test 23: client hang when close a file after mds crash ========================================================== 16:02:44 (1713297764) [ 1421.521314] Lustre: Failing over lustre-MDT0000 [ 1421.614192] Lustre: server umount lustre-MDT0000 complete [ 1422.431856] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1422.436262] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1422.443483] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1424.799832] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
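
The pattern in tests 21a-21g above is the same each time: a fail_loc drops an open or close reply ("dropping reply" from target_send_reply_msg), the client times out and resends, and the server recognizes the resend by its unchanged xid and restores the transaction number recorded at first execution ("restoring transno" from mdt_req_from_lrd) instead of executing the operation twice. A minimal userspace sketch of that reply-reconstruction idea follows; the table and names are illustrative, not the Lustre implementation.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: reply data saved before each reply is sent, so that a
     * resend carrying the same xid can be answered from the saved
     * copy instead of being executed a second time. */
    struct saved_reply {
        uint64_t xid;     /* client-assigned id; a resend reuses it */
        uint64_t transno; /* transaction number of first execution */
    };

    #define SAVED_MAX 16
    static struct saved_reply saved[SAVED_MAX];
    static int nsaved;
    static uint64_t next_transno;

    static struct saved_reply *lookup_saved(uint64_t xid)
    {
        int n = nsaved < SAVED_MAX ? nsaved : SAVED_MAX;

        for (int i = 0; i < n; i++)
            if (saved[i].xid == xid)
                return &saved[i];
        return NULL;
    }

    /* Execute a request once; answer any resend from the saved reply. */
    static uint64_t handle_request(uint64_t xid)
    {
        struct saved_reply *sr = lookup_saved(xid);

        if (sr != NULL) {
            /* resend detected: restore, do not re-execute */
            printf("restoring transno %llu for x%llu\n",
                   (unsigned long long)sr->transno,
                   (unsigned long long)xid);
            return sr->transno;
        }
        sr = &saved[nsaved++ % SAVED_MAX];
        sr->xid = xid;
        sr->transno = ++next_transno;
        printf("executed x%llu as transno %llu\n",
               (unsigned long long)xid,
               (unsigned long long)sr->transno);
        return sr->transno;
    }

    int main(void)
    {
        uint64_t first = handle_request(1001); /* reply dropped */
        uint64_t again = handle_request(1001); /* resend after timeout */

        return first == again ? 0 : 1; /* same transno both times */
    }
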
[ 1424.800728] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1424.813452] LustreError: Skipped 2 previous similar messages [ 1425.844709] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.203.54@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1425.852979] LustreError: Skipped 1 previous similar message [ 1429.807836] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1429.815211] LustreError: Skipped 3 previous similar messages [ 1434.508380] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1434.566038] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1434.689971] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1434.705407] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1435.797040] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1435.858074] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1439.698947] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 1439.713913] Lustre: lustre-MDT0000: Recovery over after 0:04, of 2 clients 2 recovered and 0 were evicted. [ 1439.738324] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:21 to 0x2c0000401:65) [ 1439.738341] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:23 to 0x280000401:65) [ 1440.543418] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1441.061414] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1446.785148] Lustre: DEBUG MARKER: == recovery-small test 24a: fsync error (should return error) ========================================================== 16:03:15 (1713297795) [ 1447.335828] Lustre: 14772:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting cb7cbb66-426d-4359-9c74-d392fe50a966 at administrative request [ 1451.532137] Lustre: DEBUG MARKER: == recovery-small test 24b: test dirty page discard due to client eviction ========================================================== 16:03:20 (1713297800) [ 1452.134128] Lustre: 15486:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting cb7cbb66-426d-4359-9c74-d392fe50a966 at administrative request [ 1456.875223] Lustre: DEBUG MARKER: == recovery-small test 26a: evict dead exports =========== 16:03:25 (1713297805) [ 1457.688799] Lustre: DEBUG MARKER: SKIP: recovery-small test_26a mgs and ost1 are at the same node [ 1460.605396] Lustre: DEBUG MARKER: == recovery-small test 26b: evict dead exports =========== 16:03:29 (1713297809) [ 1461.136917] Lustre: DEBUG MARKER: SKIP: recovery-small test_26b mgs and ost1 are at the same node [ 1463.582545] Lustre: DEBUG MARKER: == recovery-small test 27: fail LOV while using OSC's ==== 16:03:32 (1713297812) [ 1465.121728] Lustre: Failing over lustre-MDT0000 [ 1465.188270] Lustre: server umount lustre-MDT0000 complete [ 
1465.906712] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.203.54@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1465.914682] LustreError: Skipped 1 previous similar message [ 1469.744027] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1469.752318] Lustre: Skipped 4 previous similar messages [ 1474.751669] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1474.760188] LustreError: Skipped 8 previous similar messages [ 1478.002788] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1478.053906] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1478.186630] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1478.207734] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1479.263142] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1480.930902] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1483.203160] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 1483.207239] Lustre: Skipped 3 previous similar messages [ 1483.218614] Lustre: lustre-MDT0000: Recovery over after 0:02, of 2 clients 2 recovered and 0 were evicted. [ 1483.228717] Lustre: 8082:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a09ea680 x1796521463146176/t8589935187(0) o36->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:504/0 lens 504/2888 e 0 to 0 dl 1713297844 ref 1 fl Interpret:/202/0 rc 0/0 job:'writemany.0' uid:0 gid:0 [ 1483.241164] Lustre: 8082:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 1483.245769] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:163 to 0x280000401:193) [ 1483.245790] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:164 to 0x2c0000401:193) [ 1571.579112] Lustre: Failing over lustre-MDT0000 [ 1571.711926] Lustre: server umount lustre-MDT0000 complete [ 1573.344175] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1573.345814] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1573.345818] LustreError: Skipped 1 previous similar message [ 1573.358920] Lustre: Skipped 4 previous similar messages [ 1583.834319] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1583.868454] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1583.960522] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1583.987035] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1584.800738] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1586.097163] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1588.961607] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 1588.963636] Lustre: Skipped 3 previous similar messages [ 1588.969913] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted. [ 1588.984151] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:6882 to 0x2c0000401:6913) [ 1588.988230] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:6881 to 0x280000401:6913) [ 1591.990054] Lustre: DEBUG MARKER: == recovery-small test 28: handle error adding new clients (bug 6086) ========================================================== 16:05:41 (1713297941) [ 1608.061766] Lustre: 6931:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713297942/real 1713297942] req@ffff8800a04ffb80 x1796521470003520/t0(0) o104->lustre-MDT0000@192.168.203.54@tcp:15/16 lens 328/224 e 0 to 1 dl 1713297958 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 1608.073836] Lustre: 6931:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 5 previous similar messages [ 1610.085639] Lustre: Failing over lustre-MDT0000 [ 1610.147712] Lustre: server umount lustre-MDT0000 complete [ 1611.138606] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.203.54@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1611.142084] LustreError: Skipped 13 previous similar messages [ 1613.999212] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1614.001923] Lustre: Skipped 3 previous similar messages [ 1622.129530] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1622.161364] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1622.215458] Lustre: *** cfs_fail_loc=12f, val=0*** [ 1622.217025] LustreError: 6937:0:(tgt_lastrcvd.c:1071:tgt_client_new()) lustre-MDT0001: no room for 0 clients - fix LR_MAX_CLIENTS [ 1622.221963] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_connect to node 0@lo failed: rc = -75 [ 1622.239710] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1622.251551] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1622.972221] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1626.161622] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1627.248918] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 1627.250919] Lustre: Skipped 3 previous similar messages [ 1627.259299] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 1627.273834] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:6915 to 0x280000401:6945) [ 1627.273844] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:6915 to 0x2c0000401:6945) [ 1627.829972] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1628.212770] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1632.448396] Lustre: DEBUG MARKER: == recovery-small test 29a: error adding new clients doesn't cause LBUG (bug 22273) ========================================================== 16:06:21 (1713297981) [ 1633.260475] Lustre: Failing over lustre-MDT0000 [ 1633.330160] Lustre: server umount lustre-MDT0000 complete [ 1636.059330] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1636.092008] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1636.143691] Lustre: *** cfs_fail_loc=711, val=0*** [ 1636.144068] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1636.163983] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1636.174302] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1636.174466] Lustre: lustre-MDT0000: Aborting client recovery [ 1636.174469] LustreError: 27535:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1636.181716] Lustre: 27565:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1641.169115] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 1641.171722] Lustre: Skipped 3 previous similar messages [ 1641.179420] Lustre: 27565:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client lustre-MDT0001-mdtlov_UUID@0@lo [ 1641.182002] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 1641.183658] LustreError: 27565:0:(ldlm_lib.c:1844:abort_lock_replay_queue()) @@@ aborted: req@ffff880088a56300 x1796521470024960/t0(0) o101->lustre-MDT0001-mdtlov_UUID@0@lo:662/0 lens 328/0 e 0 to 0 dl 1713298002 ref 1 fl Complete:/240/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' uid:0 gid:0 [ 1641.189262] Lustre: 27565:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1641.189302] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation ldlm_enqueue to node 0@lo failed: rc = -107 [ 1641.189663] Lustre: lustre-MDT0000: Denying connection for new client lustre-MDT0001-mdtlov_UUID (at 0@lo), waiting for 2 known clients (1 recovered, 0 in progress, and 1 evicted) already passed deadline 27:20 [ 1641.197732] Lustre: lustre-MDT0000-osd: cancel update llog [0x200000400:0x1:0x0] [ 1641.202814] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000401:0x1:0x0] [ 1641.225869] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:6915 to 0x280000401:6977) [ 1641.225988] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:6915 to 0x2c0000401:6977) [ 1641.946813] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1646.176362] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 1652.587327] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 [ 1652.630352] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 1655.604227] Lustre: DEBUG MARKER: == recovery-small test 29b: error adding new clients doesn't cause LBUG (bug 22273) ========================================================== 16:06:44 (1713298004) [ 1656.383737] Lustre: Failing over lustre-OST0000 [ 1656.401928] Lustre: server umount lustre-OST0000 complete [ 1658.489761] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 1658.492463] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 1658.540014] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1658.542886] Lustre: Skipped 3 previous similar messages [ 1658.545238] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1658.549252] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 1658.549361] Lustre: lustre-OST0000: Aborting recovery [ 1658.549365] LustreError: 29841:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery [ 1658.554031] Lustre: Skipped 2 previous similar messages [ 1658.555018] Lustre: 29854:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1658.557027] Lustre: 29854:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 1 previous similar message [ 1658.558875] Lustre: 29854:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-OST0000: disconnect stale client lustre-MDT0000-mdtlov_UUID@ [ 1658.561513] Lustre: lustre-OST0000: disconnecting 3 stale clients [ 1658.564650] LustreError: 29854:0:(ofd_obd.c:1315:ofd_iocontrol()) lustre-OST0000: iocontrol from 'tgt_recover_0' cmd=c00866c1 _IOWR('f', 193, 8) unrecognized: rc = -25 [ 1659.697112] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1659.908562] Lustre: *** cfs_fail_loc=711, val=0*** [ 1659.910388] LustreError: 167-0: lustre-OST0000-osc-MDT0001: This client was evicted by lustre-OST0000; in progress operations using this service will fail. [ 1659.913362] Lustre: lustre-OST0000-osc-MDT0001: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 1659.915445] Lustre: Skipped 3 previous similar messages [ 1663.552416] LustreError: 167-0: lustre-OST0000-osc-MDT0000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. [ 1672.364934] Lustre: DEBUG MARKER: == recovery-small test 50: failover MDS under load ======= 16:07:01 (1713298021) [ 1683.038124] Lustre: Failing over lustre-MDT0000 [ 1683.121441] Lustre: server umount lustre-MDT0000 complete [ 1683.584347] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1683.587967] LustreError: Skipped 13 previous similar messages [ 1695.526352] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1695.564189] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1695.658993] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1695.679951] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1695.683139] Lustre: Skipped 2 previous similar messages [ 1696.273051] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1696.509493] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1700.659828] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 1700.665866] Lustre: Skipped 1 previous similar message [ 1700.679105] Lustre: lustre-MDT0000: Recovery over after 0:05, of 2 clients 2 recovered and 0 were evicted. 
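
Every failover cycle above repeats the same recovery gate: the restarted target knows how many clients it had before the crash, announces "Will be in recovery for at least 1:00, or until 2 clients reconnect", finishes early once all of them are back ("Recovery over after 0:05, of 2 clients 2 recovered and 0 were evicted"), and would evict any straggler still missing at the deadline. A toy model of that completion condition, simplified and with invented names:

    #include <stdio.h>
    #include <stdbool.h>

    /* Toy model of the recovery gate: finish as soon as all known
     * clients reconnect, otherwise evict stragglers at the deadline. */
    struct recovery {
        int expected;    /* clients recorded before the failover */
        int reconnected; /* clients back so far */
        int window;      /* seconds, e.g. the 60s minimum above */
    };

    static bool recovery_tick(struct recovery *r, int now, int joined)
    {
        r->reconnected += joined;
        if (r->reconnected >= r->expected) {
            printf("Recovery over after 0:%02d, of %d clients %d "
                   "recovered and 0 were evicted.\n",
                   now, r->expected, r->reconnected);
            return true;
        }
        if (now >= r->window) {
            printf("Recovery over: %d recovered, %d evicted\n",
                   r->reconnected, r->expected - r->reconnected);
            return true;
        }
        return false; /* keep waiting inside the window */
    }

    int main(void)
    {
        struct recovery r = { .expected = 2, .window = 60 };

        /* both clients reconnect within 5 seconds, as in the log */
        for (int now = 1; now <= r.window; now++)
            if (recovery_tick(&r, now, now == 5 ? 2 : 0))
                break;
        return 0;
    }
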
[ 1700.686728] Lustre: 6933:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a005ca80 x1796521470219264/t25769808914(0) o36->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:5/0 lens 512/2888 e 0 to 0 dl 1713298100 ref 1 fl Interpret:/202/0 rc 0/0 job:'writemany.0' uid:0 gid:0 [ 1700.707335] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:7832 to 0x280000401:7873) [ 1700.707360] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:7833 to 0x2c0000401:7873) [ 1701.266870] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1701.630331] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1763.496865] Lustre: Failing over lustre-MDT0000 [ 1763.620949] Lustre: server umount lustre-MDT0000 complete [ 1765.760341] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1765.768109] Lustre: Skipped 7 previous similar messages [ 1776.392511] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1776.443684] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1776.550308] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1776.564784] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1777.687652] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1781.409349] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1781.552771] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 1781.554559] Lustre: Skipped 3 previous similar messages [ 1781.560118] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 1781.573975] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:12446 to 0x280000401:12481) [ 1781.573979] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:12445 to 0x2c0000401:12481) [ 1782.165302] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1782.578237] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1844.301250] Lustre: Failing over lustre-MDT0000 [ 1844.325692] Lustre: lustre-MDT0000: Not available for connect from 192.168.203.54@tcp (stopping) [ 1844.409181] Lustre: server umount lustre-MDT0000 complete [ 1846.514540] LustreError: 137-5: lustre-MDT0000: not available for connect from 192.168.203.54@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. 
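
The "cleaning up unused objects from A to B" lines after each reconnect are orphan cleanup: the OST precreates objects ahead of the MDT's demand, and after an MDT restart the two ends reconcile the last object id the MDT actually assigned against the last id the OST precreated, destroying the unused tail. A simplified sketch of that reconciliation; the function and its arguments are invented for illustration:

    #include <stdio.h>
    #include <stdint.h>

    /* Sketch: on reconnect the MDT reports the last object id it
     * consumed; the OST removes the precreated-but-unused tail. */
    static void cleanup_unused(uint64_t seq, uint64_t mdt_last_used,
                               uint64_t ost_last_created)
    {
        if (ost_last_created <= mdt_last_used)
            return; /* nothing precreated beyond what was consumed */

        printf("cleaning up unused objects from 0x%llx:%llu "
               "to 0x%llx:%llu\n",
               (unsigned long long)seq,
               (unsigned long long)(mdt_last_used + 1),
               (unsigned long long)seq,
               (unsigned long long)ost_last_created);

        for (uint64_t id = mdt_last_used + 1; id <= ost_last_created; id++)
            ; /* destroy object (seq, id) -- elided in this sketch */
    }

    int main(void)
    {
        /* numbers as in the log: used up to 12445, created to 12481 */
        cleanup_unused(0x2c0000401ULL, 12445, 12481);
        return 0;
    }
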
[ 1846.525239] LustreError: Skipped 25 previous similar messages [ 1846.704475] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1846.704799] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1846.704802] Lustre: Skipped 1 previous similar message [ 1857.256862] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1857.305711] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1857.403609] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1857.426877] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1858.539412] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1861.537408] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1862.417283] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 1862.419641] Lustre: Skipped 3 previous similar messages [ 1862.425289] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 1862.441712] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:17465 to 0x280000401:17505) [ 1862.441717] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:17466 to 0x2c0000401:17505) [ 1863.004670] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1863.414086] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1887.594136] Lustre: DEBUG MARKER: == recovery-small test 51: failover MDS during recovery == 16:10:36 (1713298236) [ 1889.367256] Lustre: Failing over lustre-MDT0000 [ 1889.389037] LustreError: 3493:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8800a0a6f800 x1796521472821504/t0(0) o6->lustre-OST0000-osc-MDT0000@0@lo:28/4 lens 544/432 e 0 to 0 dl 0 ref 1 fl Rpc:QU/200/ffffffff rc 0/-1 job:'osp-syn-0-0.0' uid:0 gid:0 [ 1889.436285] Lustre: server umount lustre-MDT0000 complete [ 1901.413182] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1901.601207] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1902.380222] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1903.190461] Lustre: DEBUG MARKER: test_51: failover in 1 sec [ 1904.668467] Lustre: Failing over lustre-MDT0000 [ 1904.678888] LustreError: 4537:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1904.680753] Lustre: 3955:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1904.682793] Lustre: 3955:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1904.685194] Lustre: lustre-MDT0000-osd: cancel update llog [0x200002b10:0x1:0x0] [ 1904.690678] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000405:0x1:0x0] [ 1904.693630] LustreError: 3955:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff880072dced80 x1796521472886720/t0(0) o700->lustre-MDT0001-osp-MDT0000@0@lo:30/10 lens 264/248 e 0 to 0 dl 0 ref 2 fl Rpc:QU/200/ffffffff rc 0/-1 job:'tgt_recover_0.0' uid:0 gid:0 [ 1904.698718] LustreError: 3955:0:(client.c:1281:ptlrpc_import_delay_req()) Skipped 9 previous similar messages [ 1904.700728] LustreError: 3955:0:(fid_request.c:233:seq_client_alloc_seq()) cli-cli-lustre-MDT0001-osp-MDT0000: Cannot allocate new meta-sequence: rc = -5 [ 1904.704180] LustreError: 3955:0:(fid_request.c:335:seq_client_alloc_fid()) cli-cli-lustre-MDT0001-osp-MDT0000: Can't allocate new sequence: rc = -5 [ 1904.706834] Lustre: lustre-MDT0000: Recovery over after 0:03, of 2 clients 0 recovered and 2 were evicted. [ 1904.720117] Lustre: 3955:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 1904.770866] Lustre: server umount lustre-MDT0000 complete [ 1916.649826] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1917.478309] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1918.274909] Lustre: DEBUG MARKER: test_51: failover in 5 sec [ 1923.959465] Lustre: Failing over lustre-MDT0000 [ 1923.970209] LustreError: 5679:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1923.975061] Lustre: 5105:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1923.979857] Lustre: 5105:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1923.984244] Lustre: lustre-MDT0000-osd: cancel update llog [0x200004a50:0x1:0x0] [ 1923.994303] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000405:0x1:0x0] [ 1924.000702] LustreError: 5105:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff880084825500 x1796521472920576/t0(0) o700->lustre-MDT0001-osp-MDT0000@0@lo:30/10 lens 264/248 e 0 to 0 dl 0 ref 2 fl Rpc:QU/200/ffffffff rc 0/-1 job:'tgt_recover_0.0' uid:0 gid:0 [ 1924.011271] LustreError: 5105:0:(fid_request.c:233:seq_client_alloc_seq()) cli-cli-lustre-MDT0001-osp-MDT0000: Cannot allocate new meta-sequence: rc = -5 [ 1924.017071] LustreError: 5105:0:(fid_request.c:335:seq_client_alloc_fid()) cli-cli-lustre-MDT0001-osp-MDT0000: Can't allocate new sequence: rc = -5 [ 1924.042269] Lustre: 5105:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 1924.125577] Lustre: server umount lustre-MDT0000 complete [ 1936.780840] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1936.837750] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1936.842316] LustreError: Skipped 2 previous similar messages [ 1937.933683] Lustre: 3491:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713298271/real 1713298271] req@ffff8800a02bfb80 x1796521472919936/t0(0) o400->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 224/224 e 0 to 1 dl 1713298287 ref 1 fl Rpc:XQr/2c0/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' uid:0 gid:0 [ 1938.003168] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1939.217738] Lustre: DEBUG MARKER: test_51: failover in 10 sec [ 1949.731782] Lustre: Failing over lustre-MDT0000 [ 1949.739529] LustreError: 6830:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1949.745519] Lustre: 6248:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1949.748774] Lustre: 6248:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1949.752481] Lustre: lustre-MDT0000-osd: cancel update llog [0x200005220:0x1:0x0] [ 1949.760540] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000405:0x1:0x0] [ 1949.765843] LustreError: 6248:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8800a5cf1180 x1796521472930496/t0(0) o700->lustre-MDT0001-osp-MDT0000@0@lo:30/10 lens 264/248 e 0 to 0 dl 0 ref 2 fl Rpc:QU/200/ffffffff rc 0/-1 job:'tgt_recover_0.0' uid:0 gid:0 [ 1949.772432] LustreError: 6248:0:(fid_request.c:233:seq_client_alloc_seq()) cli-cli-lustre-MDT0001-osp-MDT0000: Cannot allocate new meta-sequence: rc = -5 [ 1949.776299] LustreError: 6248:0:(fid_request.c:335:seq_client_alloc_fid()) cli-cli-lustre-MDT0001-osp-MDT0000: 
Can't allocate new sequence: rc = -5 [ 1949.794158] Lustre: 6248:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 1949.848981] Lustre: server umount lustre-MDT0000 complete [ 1961.758944] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1962.744301] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1963.784406] Lustre: DEBUG MARKER: test_51: failover in 20 sec [ 1965.870834] Lustre: 3491:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713298287/real 1713298287] req@ffff88008c410700 x1796521472927168/t0(0) o400->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 224/224 e 1 to 1 dl 1713298315 ref 1 fl Rpc:XQr/2c0/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' uid:0 gid:0 [ 1965.885269] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 1965.890291] Lustre: Skipped 2 previous similar messages [ 1966.734613] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:19351 to 0x280000401:19393) [ 1966.734674] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:19352 to 0x2c0000401:19393) [ 1984.247696] Lustre: Failing over lustre-MDT0000 [ 1984.348788] Lustre: server umount lustre-MDT0000 complete [ 1986.751396] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 1986.755794] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1986.761653] Lustre: Skipped 13 previous similar messages [ 1996.902332] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 1997.020007] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1997.023482] Lustre: Skipped 4 previous similar messages [ 1997.051620] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1997.053468] Lustre: Skipped 4 previous similar messages [ 1998.123894] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 1999.363542] Lustre: DEBUG MARKER: test_51: failover in 25 sec [ 2002.034822] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 2002.038820] Lustre: Skipped 13 previous similar messages [ 2002.051246] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 2002.056354] Lustre: Skipped 3 previous similar messages [ 2002.079683] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:20837 to 0x280000401:20865) [ 2002.079693] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:20838 to 0x2c0000401:20865) [ 2024.834482] Lustre: Failing over lustre-MDT0000 [ 2024.865921] Lustre: lustre-MDT0000: Not available for connect from 192.168.203.54@tcp (stopping) [ 2024.965576] Lustre: server umount lustre-MDT0000 complete [ 2037.716706] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2039.010780] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 2040.307537] Lustre: DEBUG MARKER: test_51: failover in 30 sec [ 2042.911011] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:22482 to 0x2c0000401:22497) [ 2042.911042] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:22481 to 0x280000401:22497) [ 2070.833105] Lustre: Failing over lustre-MDT0000 [ 2070.935934] Lustre: server umount lustre-MDT0000 complete [ 2083.618443] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 2083.667970] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2083.671733] LustreError: Skipped 3 previous similar messages [ 2084.782529] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 2088.800415] Lustre: 16976:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88011e739c50 x1796521485857792/t51539620947(0) o101->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:393/0 lens 672/3488 e 0 to 0 dl 1713298488 ref 1 fl Interpret:/202/0 rc 0/0 job:'writemany.0' uid:0 gid:0 [ 2088.818549] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:24722 to 0x280000401:24737) [ 2088.819227] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:24722 to 0x2c0000401:24737) [ 2109.819124] Lustre: DEBUG MARKER: == recovery-small test 52: failover OST under load ======= 16:14:18 (1713298458) [ 2120.592025] Lustre: Failing over lustre-OST0000 [ 2120.614909] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_create to node 0@lo failed: rc = -107 [ 2120.615212] Lustre: server umount lustre-OST0000 complete [ 2120.618329] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2120.624533] LustreError: Skipped 100 previous similar messages [ 2132.606841] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2132.610359] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 2133.929389] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 2133.933442] Lustre: Skipped 3 previous similar messages [ 2134.178056] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 2134.215481] Lustre: lustre-OST0000: Recovery over after 0:01, of 3 clients 3 recovered and 0 were evicted. 
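
A note on reading this log: the recurring "Skipped N previous similar messages" lines are per-site console rate limiting. After a message site prints, further hits within the suppression window are only counted, and the count of suppressed duplicates is reported the next time the site prints. Roughly this shape, as a userspace sketch rather than the kernel's actual limiter:

    #include <stdio.h>

    /* Sketch of per-site message suppression: print the first
     * message, swallow duplicates inside the window, and report the
     * skipped total the next time the site prints. */
    struct limiter {
        long last_printed; /* time of last printed message */
        long interval;     /* suppression window, seconds */
        int  skipped;      /* duplicates swallowed since then */
    };

    static void log_limited(struct limiter *lim, long now,
                            const char *msg)
    {
        if (now - lim->last_printed < lim->interval) {
            lim->skipped++; /* suppress this duplicate */
            return;
        }
        printf("%s\n", msg);
        if (lim->skipped)
            printf("Skipped %d previous similar message%s\n",
                   lim->skipped, lim->skipped > 1 ? "s" : "");
        lim->skipped = 0;
        lim->last_printed = now;
    }

    int main(void)
    {
        struct limiter lim = { .last_printed = -100, .interval = 60 };

        for (long t = 0; t < 300; t += 16) /* a resend every 16s */
            log_limited(&lim, t,
                        "Request sent has timed out for slow reply");
        return 0;
    }
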
[ 2134.219527] Lustre: Skipped 2 previous similar messages [ 2136.553660] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2136.956607] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2451.515650] Lustre: Failing over lustre-OST0000 [ 2451.527627] Lustre: lustre-OST0000: Not available for connect from 192.168.203.54@tcp (stopping) [ 2451.537304] Lustre: server umount lustre-OST0000 complete [ 2452.059471] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_create to node 0@lo failed: rc = -107 [ 2452.062371] Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2452.065979] Lustre: Skipped 13 previous similar messages [ 2463.578453] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2463.581676] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 2463.638275] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2463.641856] Lustre: Skipped 3 previous similar messages [ 2463.646250] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 2463.647931] Lustre: Skipped 3 previous similar messages [ 2464.818721] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 2464.956001] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 2465.607512] Lustre: lustre-OST0000-osc-MDT0001: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 2465.607600] Lustre: lustre-OST0000: Recovery over after 0:01, of 3 clients 3 recovered and 0 were evicted. [ 2465.612619] Lustre: Skipped 13 previous similar messages [ 2467.359383] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2467.737153] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2782.431457] Lustre: Failing over lustre-OST0000 [ 2782.454846] Lustre: server umount lustre-OST0000 complete [ 2782.474956] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.203.54@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2782.478513] LustreError: Skipped 17 previous similar messages [ 2795.001168] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 2795.003971] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 2795.054735] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 2796.449700] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 2798.781551] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2799.156979] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3017.427100] Lustre: lustre-OST0001-osc-MDT0001: update sequence from 0x2c0000400 to 0x2c0000402 [ 3062.223770] Lustre: lustre-OST0000-osc-MDT0001: update sequence from 0x280000400 to 0x280000bd0 [ 3075.899693] Lustre: DEBUG MARKER: == recovery-small test 53a: touch: drop rep ============== 16:30:24 (1713299424) [ 3076.265719] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3076.267922] Lustre: Skipped 3 previous similar messages [ 3076.270265] LustreError: 8082:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009a081c00 x1796521550747008/t0(0) o101->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:587/0 lens 576/688 e 0 to 0 dl 1713299437 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3076.278360] LustreError: 8082:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 3092.280014] Lustre: lustre-MDT0000: Client cb7cbb66-426d-4359-9c74-d392fe50a966 (at 192.168.203.54@tcp) reconnecting [ 3092.286286] Lustre: Skipped 4 previous similar messages [ 3095.868671] Lustre: DEBUG MARKER: == recovery-small test 53b: touch: drop rep ============== 16:30:44 (1713299444) [ 3096.358366] LustreError: 6931:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012da65880 x1796521550754112/t0(0) o101->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:607/0 lens 576/688 e 0 to 0 dl 1713299457 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3116.655041] Lustre: DEBUG MARKER: == recovery-small test 53c: touch: drop rep ============== 16:31:05 (1713299465) [ 3117.165178] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3117.168323] Lustre: Skipped 1 previous similar message [ 3117.170819] LustreError: 9223:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a5d1c000 x1796521550756032/t55834582104(0) o101->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:628/0 lens 664/664 e 0 to 0 dl 1713299478 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3133.163892] Lustre: 16976:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a5cf2d80 x1796521550756032/t55834582104(0) o101->cb7cbb66-426d-4359-9c74-d392fe50a966@192.168.203.54@tcp:644/0 lens 664/3488 e 0 to 0 dl 1713299494 ref 1 fl Interpret:H/202/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3136.781093] Lustre: DEBUG MARKER: == recovery-small test 54: back in time ================== 16:31:25 (1713299485) [ 3147.573088] Lustre: Failing over lustre-MDT0000 [ 3147.655605] Lustre: server umount lustre-MDT0000 complete [ 3150.495694] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3150.500692] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3150.509974] Lustre: Skipped 3 previous similar messages [ 3160.187901] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3160.246306] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3160.337650] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3160.351762] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3160.354116] Lustre: Skipped 1 previous similar message [ 3161.469830] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3161.906163] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 3 clients reconnect [ 3161.912464] Lustre: Skipped 1 previous similar message [ 3165.348566] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 3165.355009] Lustre: Skipped 3 previous similar messages [ 3165.366498] Lustre: lustre-MDT0000: Recovery over after 0:03, of 3 clients 3 recovered and 0 were evicted. [ 3165.371625] Lustre: Skipped 1 previous similar message [ 3165.395648] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:25942 to 0x280000401:25985) [ 3165.395673] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:25943 to 0x2c0000401:25985) [ 3166.213928] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3166.790487] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3172.699820] Lustre: DEBUG MARKER: == recovery-small test 55: ost_brw_read/write drops timed-out read/write request ========================================================== 16:32:01 (1713299521) [ 3177.759000] Lustre: *** cfs_fail_loc=21d, val=0*** [ 3177.760792] Lustre: Skipped 3 previous similar messages [ 3177.762638] LustreError: 9205:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.203.54@tcp because locking object 0x280000bd0:844 took 0 seconds (limit was 11). [ 3177.769101] Lustre: lustre-OST0000: Bulk IO write error with cb7cbb66-426d-4359-9c74-d392fe50a966 (at 192.168.203.54@tcp), client will retry: rc = -110 [ 3193.954471] Lustre: lustre-OST0000: Client cb7cbb66-426d-4359-9c74-d392fe50a966 (at 192.168.203.54@tcp) reconnecting [ 3193.959556] Lustre: Skipped 2 previous similar messages [ 3193.967642] LustreError: 18988:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.203.54@tcp because locking object 0x280000bd0:844 took 0 seconds (limit was 11). [ 3193.967668] Lustre: lustre-OST0000: Bulk IO write error with cb7cbb66-426d-4359-9c74-d392fe50a966 (at 192.168.203.54@tcp), client will retry: rc = -110 [ 3193.967670] Lustre: Skipped 8 previous similar messages [ 3193.983876] LustreError: 18988:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 16 previous similar messages [ 3209.972241] LustreError: 9205:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.203.54@tcp because locking object 0x280000bd0:844 took 0 seconds (limit was 11). 
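
In test 55, cfs_fail_loc=21d makes the bulk-write path behave as if taking the extent lock had outrun the client's RPC deadline, so the server drops the write instead of acting on a request the client has already given up on; the client keeps its dirty pages and retries (rc = -110) until the fail_loc is cleared. The decision is approximately the following sketch, with the forced flag standing in for the fail_loc:

    #include <stdio.h>

    /* Sketch: drop a bulk write whose server-side lock wait outlived
     * the client's RPC deadline (or is forced to look that way by a
     * fail_loc); the client still holds the pages and resends. */
    static int maybe_drop_write(int forced, long lock_seconds,
                                long limit_seconds, const char *obj)
    {
        if (forced || lock_seconds > limit_seconds) {
            printf("Dropping timed-out write because locking object "
                   "%s took %ld seconds (limit was %ld).\n",
                   obj, lock_seconds, limit_seconds);
            return -110; /* surfaces to the client as rc = -110 */
        }
        return 0; /* proceed with the bulk transfer */
    }

    int main(void)
    {
        /* values as in the log: a forced drop reports 0s against an
         * 11s limit, so the client retries until fail_loc is cleared */
        return maybe_drop_write(1, 0, 11, "0x280000bd0:844") ? 0 : 1;
    }
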
[ 3209.972873] Lustre: lustre-OST0000: Bulk IO write error with cb7cbb66-426d-4359-9c74-d392fe50a966 (at 192.168.203.54@tcp), client will retry: rc = -110 [ 3209.972876] Lustre: Skipped 8 previous similar messages [ 3209.989718] LustreError: 9205:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 8 previous similar messages [ 3225.999390] LustreError: 9206:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.203.54@tcp because locking object 0x280000bd0:843 took 0 seconds (limit was 11). [ 3225.999426] Lustre: lustre-OST0000: Bulk IO write error with cb7cbb66-426d-4359-9c74-d392fe50a966 (at 192.168.203.54@tcp), client will retry: rc = -110 [ 3225.999429] Lustre: Skipped 8 previous similar messages [ 3226.016748] LustreError: 9206:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 8 previous similar messages [ 3242.003207] Lustre: *** cfs_fail_loc=21d, val=0*** [ 3242.003286] LustreError: 9206:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.203.54@tcp because locking object 0x280000bd0:844 took 0 seconds (limit was 11). [ 3242.003290] LustreError: 9206:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 1 previous similar message [ 3242.003318] Lustre: lustre-OST0000: Bulk IO write error with cb7cbb66-426d-4359-9c74-d392fe50a966 (at 192.168.203.54@tcp), client will retry: rc = -110 [ 3242.003320] Lustre: Skipped 9 previous similar messages [ 3242.028465] Lustre: Skipped 45 previous similar messages [ 3266.353279] Lustre: DEBUG MARKER: == recovery-small test 56: do not fail on getattr resend ========================================================== 16:33:35 (1713299615) [ 3266.780235] LustreError: 8082:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout id 136 sleeping for 40000ms [ 3306.785699] LustreError: 8082:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout id 136 awake [ 3311.346389] Lustre: DEBUG MARKER: == recovery-small test 57: read procfs entries causes kernel crash ========================================================== 16:34:20 (1713299660) [ 3313.299143] Lustre: Failing over lustre-MDT0000 [ 3313.383942] Lustre: server umount lustre-MDT0000 complete [ 3316.447061] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3316.503138] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3316.620423] Lustre: lustre-MDT0000: Aborting client recovery [ 3316.622423] LustreError: 23969:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 3316.625792] Lustre: 23999:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 3316.630981] Lustre: 23999:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 3316.635928] Lustre: 23999:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client lustre-MDT0001-mdtlov_UUID@ [ 3316.642846] Lustre: 23999:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 2 previous similar messages [ 3316.648247] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 3316.653733] Lustre: lustre-MDT0000-osd: cancel update llog [0x2000059f0:0x1:0x0] [ 3316.662404] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000405:0x1:0x0] [ 3316.696353] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:25987 to 0x280000401:26017) [ 3316.696491] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:25943 to 0x2c0000401:26017) [ 3317.793643] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3321.620940] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 3331.522890] Lustre: DEBUG MARKER: == recovery-small test 58: Eviction in the middle of open RPC reply processing ========================================================== 16:34:40 (1713299680) [ 3348.679728] Lustre: 6931:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713299682/real 1713299682] req@ffff880129c7a300 x1796521489276992/t0(0) o104->lustre-MDT0000@192.168.203.54@tcp:15/16 lens 328/224 e 0 to 1 dl 1713299698 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 3353.309108] Lustre: DEBUG MARKER: == recovery-small test 59: Read cancel race on client eviction ========================================================== 16:35:02 (1713299702) [ 3363.850325] LustreError: 17212:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.203.54@tcp) returned error from blocking AST (req@ffff8800a09ea680 x1796521489283648 status -107 rc -107), evict it ns: filter-lustre-OST0000_UUID lock: ffff880089ff0240/0x60b500cd9244579c lrc: 4/0,0 mode: PW/PW res: [0x280000401:0x65a2:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 192.168.203.54@tcp remote: 0x8583d892e36f76e9 expref: 5 pid: 9200 timeout: 3463 lvb_type: 0 [ 3363.867899] LustreError: 138-a: lustre-OST0000: A client on nid 192.168.203.54@tcp was evicted due to a lock blocking callback time out: rc -107 [ 3363.872012] LustreError: 6923:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 192.168.203.54@tcp ns: filter-lustre-OST0000_UUID lock: ffff880089ff0240/0x60b500cd9244579c lrc: 3/0,0 mode: PW/PW res: [0x280000401:0x65a2:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 192.168.203.54@tcp remote: 0x8583d892e36f76e9 expref: 6 pid: 9200 
timeout: 0 lvb_type: 0 [ 3363.883483] LustreError: 6923:0:(ldlm_lockd.c:261:expired_lock_main()) Skipped 1 previous similar message [ 3368.439320] Lustre: DEBUG MARKER: == recovery-small test 60: Add Changelog entries during MDS failover ========================================================== 16:35:17 (1713299717) [ 3368.507269] LustreError: 6931:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.203.54@tcp) returned error from blocking AST (req@ffff88012bcac700 x1796521489284800 status -107 rc -107), evict it ns: mdt-lustre-MDT0000_UUID lock: ffff880089ff1440/0x60b500cd924457b8 lrc: 4/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.203.54@tcp remote: 0x8583d892e36f76f7 expref: 6 pid: 21615 timeout: 3467 lvb_type: 0 [ 3368.526892] LustreError: 138-a: lustre-MDT0000: A client on nid 192.168.203.54@tcp was evicted due to a lock blocking callback time out: rc -107 [ 3368.532903] LustreError: 6923:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 192.168.203.54@tcp ns: mdt-lustre-MDT0000_UUID lock: ffff880089ff1440/0x60b500cd924457b8 lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.203.54@tcp remote: 0x8583d892e36f76f7 expref: 7 pid: 21615 timeout: 0 lvb_type: 0 [ 3369.564397] Lustre: lustre-MDD0000: changelog on [ 3370.591111] Lustre: lustre-MDD0001: changelog on [ 3384.962963] Lustre: lustre-MDT0001: haven't heard from client 93c46467-0328-4970-9a20-9877b104fa63 (at 192.168.203.54@tcp) in 32 seconds. I think it's dead, and I am evicting it. exp ffff8800a7421800, cur 1713299735 expire 1713299705 last 1713299703 [ 3398.802844] Lustre: Failing over lustre-MDT0000 [ 3398.822262] Lustre: lustre-MDT0000: Not available for connect from 192.168.203.54@tcp (stopping) [ 3398.906389] Lustre: server umount lustre-MDT0000 complete [ 3401.743534] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3401.750569] LustreError: Skipped 29 previous similar messages [ 3411.721282] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3411.767288] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3411.863443] Lustre: lustre-MDD0000: changelog on [ 3412.778816] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3416.848120] LustreError: 3491:0:(import.c:1314:ptlrpc_connect_interpret()) lustre-MDT0000_UUID: went back in time (transno 60129542151 was previously committed, server now claims 55834582110)! 
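The "went back in time" error above is the client-side sanity check performed when an import reconnects: the client remembers the highest transaction number the server ever reported as committed, and if the restarted server now claims a smaller one, committed updates have been lost, which test 60's MDS failover provokes deliberately. A rough sketch of the comparison, with hypothetical names rather than the actual ptlrpc_connect_interpret() code:

    /* Illustrative sketch of the check behind the error above; not the
     * actual ptlrpc_connect_interpret() implementation. */
    #include <inttypes.h>
    #include <stdio.h>

    /* Returns nonzero if the server "went back in time": it now reports a
     * last-committed transaction number lower than one the client already
     * saw committed, meaning committed updates were lost across restart. */
    static int server_went_back_in_time(uint64_t client_peak_committed,
                                        uint64_t server_last_committed)
    {
        if (server_last_committed < client_peak_committed) {
            fprintf(stderr,
                    "went back in time (transno %" PRIu64
                    " was previously committed, server now claims %" PRIu64 ")!\n",
                    client_peak_committed, server_last_committed);
            return 1;
        }
        return 0;
    }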
[ 3416.855886] LustreError: 3491:0:(import.c:1316:ptlrpc_connect_interpret()) For further information, see http://doc.lustre.org/lustre_manual.xhtml#went_back_in_time [ 3416.894229] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27303 to 0x280000401:27329) [ 3416.894259] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27303 to 0x2c0000401:27329) [ 3447.358896] Lustre: lustre-MDD0000: changelog off [ 3448.285980] Lustre: lustre-MDD0001: changelog off [ 3454.484195] Lustre: DEBUG MARKER: == recovery-small test 61: Verify to not reuse orphan objects - bug 17025 ========================================================== 16:36:43 (1713299803) [ 3457.631920] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3458.770348] Lustre: Failing over lustre-MDT0000 [ 3458.864297] Lustre: server umount lustre-MDT0000 complete [ 3463.438597] LDISKFS-fs (dm-0): recovery complete [ 3463.441090] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3463.592650] Lustre: lustre-MDT0000: Aborting client recovery [ 3463.593723] LustreError: 31325:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 3463.595637] Lustre: 31353:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 3463.600518] Lustre: 31353:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 3463.604697] Lustre: 31353:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client 82b350bc-cff9-4731-b512-74434e3643ca@ [ 3463.610941] Lustre: lustre-MDT0000: disconnecting 2 stale clients [ 3463.615933] Lustre: lustre-MDT0000-osd: cancel update llog [0x2000088d0:0x1:0x0] [ 3463.619093] Lustre: lustre-MDT0000: Denying connection for new client 82b350bc-cff9-4731-b512-74434e3643ca (at 192.168.203.54@tcp), waiting for 2 known clients (0 recovered, 0 in progress, and 2 evicted) already passed deadline 57:43 [ 3463.634120] Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000407:0x1:0x0] [ 3463.666388] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27303 to 0x2c0000401:27361) [ 3463.666403] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27303 to 0x280000401:27361) [ 3464.808206] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3468.596790] LustreError: 167-0: lustre-MDT0000-osp-MDT0001: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
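Nearly every test in this log drives the server through Lustre's fail-point machinery: the cfs_fail_loc=... lines show a numeric fail point being armed by the test script (e.g. via lctl set_param fail_loc=...), and the cfs_fail_timeout "sleeping"/"awake" pairs show checkpoints compiled into the request handlers reacting to it, for instance the 6000ms stalls in ldlm_handle_enqueue() during test 65 below. A simplified userspace stand-in for such a checkpoint, not the libcfs implementation:

    #include <stdio.h>
    #include <unistd.h>

    /* Armed by the test harness; in real Lustre this is the fail_loc
     * parameter set with lctl. */
    static unsigned long fail_loc;

    /* Sketch of a fail-point checkpoint inside a request handler: a no-op
     * unless the matching fail point is armed, in which case the handler
     * stalls, producing the "sleeping for .../awake" pairs in the log. */
    static int fail_timeout(unsigned long id, unsigned int secs)
    {
        if (fail_loc != id)
            return 0;            /* not armed: near-zero cost in the common path */
        printf("cfs_fail_timeout id %lx sleeping for %ums\n", id, secs * 1000u);
        sleep(secs);             /* hold this service thread */
        printf("cfs_fail_timeout id %lx awake\n", id);
        return 1;
    }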
[ 3478.058425] Lustre: DEBUG MARKER: == recovery-small test 65: lock enqueue for destroyed export ========================================================== 16:37:07 (1713299827) [ 3478.404452] LustreError: 18802:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout id 31e sleeping for 6000ms [ 3478.407648] Lustre: *** cfs_fail_loc=31e, val=0*** [ 3478.408739] Lustre: Skipped 2 previous similar messages [ 3480.413956] LustreError: 9201:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout id 31e sleeping for 6000ms [ 3482.744696] Lustre: 32714:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 82b350bc-cff9-4731-b512-74434e3643ca at administrative request [ 3482.751193] LustreError: 9216:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) cfs_fail_timeout id 31e sleeping for 4000ms [ 3484.406684] LustreError: 18802:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout id 31e awake [ 3484.410963] LustreError: 18802:0:(ldlm_lockd.c:1499:ldlm_handle_enqueue()) ### lock on destroyed export ffff8800a63b9800 ns: filter-lustre-OST0000_UUID lock: ffff8800930d33c0/0x60b500cd924abe13 lrc: 3/0,0 mode: --/PW res: [0x280000401:0x6ae3:0x0].0x0 rrc: 4 type: EXT [0->4095] (req 0->4095) gid 0 flags: 0x70000000020020 nid: 192.168.203.54@tcp remote: 0x8583d892e370422f expref: 4 pid: 18802 timeout: 0 lvb_type: 0 [ 3485.118665] LustreError: 9201:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout interrupted [ 3494.140782] Lustre: lustre-OST0000: Client 376074ea-2843-44e8-8fd1-e77ecc424521 (at 192.168.203.54@tcp) reconnecting [ 3494.144790] Lustre: Skipped 6 previous similar messages [ 3498.592830] Lustre: DEBUG MARKER: == recovery-small test 66: lock enqueue re-send vs client eviction ========================================================== 16:37:27 (1713299847) [ 3499.199031] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3499.201680] LustreError: 6932:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88013073c380 x1796521552920512/t0(0) o101->82b350bc-cff9-4731-b512-74434e3643ca@192.168.203.54@tcp:299/0 lens 576/688 e 0 to 0 dl 1713299904 ref 1 fl Interpret:/200/0 rc 0/0 job:'stat.0' uid:0 gid:0 [ 3501.087080] LustreError: 6932:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout id 136 sleeping for 40000ms [ 3503.423611] Lustre: 1204:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 82b350bc-cff9-4731-b512-74434e3643ca at administrative request [ 3503.888613] LustreError: 6932:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout interrupted [ 3503.893014] LustreError: 6932:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) Skipped 1 previous similar message [ 3508.157647] Lustre: DEBUG MARKER: == recovery-small test 67: connect vs import invalidate race ========================================================== 16:37:37 (1713299857) [ 3510.536458] Lustre: 1984:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 82b350bc-cff9-4731-b512-74434e3643ca at administrative request [ 3526.548241] Lustre: DEBUG MARKER: == recovery-small test 100: IR: Make sure normal recovery still works w/o IR ========================================================== 16:37:55 (1713299875) [ 3528.235330] Lustre: Failing over lustre-OST0000 [ 3528.277146] Lustre: server umount lustre-OST0000 complete [ 3528.656423] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3540.961623] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3540.967413]
LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3541.082447] mount.lustre (3523) used greatest stack depth: 10032 bytes left [ 3542.764299] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3546.907516] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3547.483308] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3554.078633] Lustre: DEBUG MARKER: == recovery-small test 101a: IR: Make sure IR works w/o normal recovery ========================================================== 16:38:23 (1713299903) [ 3555.436418] Lustre: Failing over lustre-OST0000 [ 3555.459291] Lustre: server umount lustre-OST0000 complete [ 3568.255945] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3568.262255] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3568.366424] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 3570.149451] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3572.995358] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3573.560297] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3580.050648] Lustre: DEBUG MARKER: == recovery-small test 101b: IR: Make sure IR works w/o normal recovery and proceed EAGAIN ========================================================== 16:38:49 (1713299929) [ 3581.777115] Lustre: Failing over lustre-OST0000 [ 3581.802243] Lustre: server umount lustre-OST0000 complete [ 3594.739853] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3594.747360] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3594.848575] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 3594.858762] LustreError: 8085:0:(ofd_dev.c:651:ofd_prepare()) cfs_fail_timeout id 247 sleeping for 25000ms [ 3619.865693] LustreError: 8085:0:(ofd_dev.c:651:ofd_prepare()) cfs_fail_timeout id 247 awake [ 3621.555207] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3624.400479] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3624.965780] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3630.730844] Lustre: DEBUG MARKER: == recovery-small test 102: IR: New client gets updated nidtbl after MGS restart ========================================================== 16:39:39 (1713299979) [ 3632.001715] Lustre: Failing over lustre-OST0000 [ 3632.032381] Lustre: server umount lustre-OST0000 complete [ 3644.835358] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3644.841394] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3644.948128] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 3646.761401] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3649.597234] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3650.181942] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3653.201382] Lustre: Failing over lustre-MDT0000 [ 3653.286132] Lustre: server umount lustre-MDT0000 complete [ 3653.903566] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 3656.016622] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3656.075487] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3656.080050] LustreError: Skipped 1 previous similar message [ 3657.338518] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3658.750430] Lustre: Failing over lustre-OST0000 [ 3658.773291] Lustre: server umount lustre-OST0000 complete [ 3661.197664] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27363 to 0x2c0000401:27393) [ 3671.592951] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3671.600029] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3672.747798] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27365 to 0x280000401:27393) [ 3673.448083] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3676.050477] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3682.032109] Lustre: DEBUG MARKER: == recovery-small test 103: IR: MDS can start w/o MGS and get updated nidtbl later ========================================================== 16:40:31 (1713300031) [ 3682.929311] Lustre: DEBUG MARKER: SKIP: recovery-small test_103 needs separate mgs and mds [ 3685.705534] Lustre: DEBUG MARKER: == recovery-small test 104: IR: ost can disable IR voluntarily ========================================================== 16:40:34 (1713300034) [ 3687.030786] Lustre: Failing over lustre-OST0000 [ 3687.057342] Lustre: server umount lustre-OST0000 complete [ 3687.775419] LustreError: 11-0: lustre-OST0000-osc-MDT0001: operation ost_statfs to node 0@lo failed: rc = -107 [ 3687.780210] LustreError: Skipped 1 previous similar message [ 3690.087608] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5 [ 3690.093205] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. 
Opts: user_xattr,acl,no_mbcache,nodelalloc [ 3691.828639] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3698.810125] Lustre: DEBUG MARKER: == recovery-small test 105: IR: NON IR clients support === 16:40:47 (1713300047) [ 3699.309447] Lustre: DEBUG MARKER: SKIP: recovery-small test_105 Needs multiple clients [ 3701.862685] Lustre: DEBUG MARKER: == recovery-small test 106: lightweight connection support ========================================================== 16:40:51 (1713300051) [ 3705.071391] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3705.792975] Lustre: Failing over lustre-MDT0000 [ 3705.875613] Lustre: server umount lustre-MDT0000 complete [ 3719.423518] LDISKFS-fs (dm-0): recovery complete [ 3719.424744] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3720.506387] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3724.528513] LustreError: 18875:0:(ldlm_lockd.c:968:ldlm_server_blocking_ast()) ### BUG 6063: lock collide during recovery ns: mdt-lustre-MDT0000_UUID lock: ffff8800930d2ac0/0x60b500cd924acb2c lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x40200000000020 nid: 192.168.203.54@tcp remote: 0x8583d892e3704626 expref: 7 pid: 6932 timeout: 0 lvb_type: 0 [ 3724.581885] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27365 to 0x280000401:27425) [ 3724.581895] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27395 to 0x2c0000401:27425) [ 3729.653736] Lustre: DEBUG MARKER: == recovery-small test 107: drop reint reply, then restart MDT ========================================================== 16:41:18 (1713300078) [ 3730.031215] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 3730.034790] LustreError: 6933:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012bff1180 x1796521552955456/t81604378628(0) o36->c7885d2c-e89a-473e-ac25-d7fe74882afe@192.168.203.54@tcp:530/0 lens 552/448 e 0 to 0 dl 1713300135 ref 1 fl Interpret:/200/0 rc 0/0 job:'mkdir.0' uid:0 gid:0 [ 3731.002076] Lustre: Failing over lustre-MDT0000 [ 3731.081786] Lustre: server umount lustre-MDT0000 complete [ 3743.806917] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 3744.969664] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 3748.988622] Lustre: 16976:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012c7ed500 x1796521552955456/t81604378628(0) o36->c7885d2c-e89a-473e-ac25-d7fe74882afe@192.168.203.54@tcp:549/0 lens 552/2880 e 0 to 0 dl 1713300154 ref 1 fl Interpret:/202/0 rc 0/0 job:'mkdir.0' uid:0 gid:0 [ 3749.006508] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27365 to 0x280000401:27457) [ 3749.006549] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27395 to 0x2c0000401:27457) [ 3749.802843] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3750.350369] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3755.743935] Lustre: DEBUG MARKER: == recovery-small test 108: client eviction doesn't crash == 16:41:44 (1713300104) [ 3756.122816] Lustre: 22034:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting c7885d2c-e89a-473e-ac25-d7fe74882afe at administrative request [ 3756.130219] LustreError: 18988:0:(ldlm_lib.c:3536:target_bulk_io()) @@@ bulk WRITE failed: rc = -107 req@ffff88012cf7d880 x1796521552959872/t0(0) o4->c7885d2c-e89a-473e-ac25-d7fe74882afe@192.168.203.54@tcp:512/0 lens 488/448 e 0 to 0 dl 1713300117 ref 1 fl Interpret:/200/0 rc 0/0 job:'dd.0' uid:0 gid:0 [ 3756.135149] Lustre: lustre-OST0000: Bulk IO write error with c7885d2c-e89a-473e-ac25-d7fe74882afe (at 192.168.203.54@tcp), client will retry: rc = -107 [ 3756.137621] Lustre: Skipped 9 previous similar messages [ 3762.579675] Lustre: DEBUG MARKER: == recovery-small test 110a: create remote directory: drop client req ========================================================== 16:41:51 (1713300111) [ 3763.611920] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 3823.631108] Lustre: lustre-MDT0000: Client c7885d2c-e89a-473e-ac25-d7fe74882afe (at 192.168.203.54@tcp) reconnecting [ 3823.636314] Lustre: Skipped 2 previous similar messages [ 3828.483579] Lustre: DEBUG MARKER: == recovery-small test 110b: create remote directory: drop Master rep ========================================================== 16:42:57 (1713300177) [ 3828.879444] LustreError: 8082:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a0b70e00 x1796521552969152/t4295389115(0) o36->c7885d2c-e89a-473e-ac25-d7fe74882afe@192.168.203.54@tcp:628/0 lens 560/536 e 0 to 0 dl 1713300233 ref 1 fl Interpret:/200/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 3888.861072] Lustre: 6931:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009fcad880 x1796521552969152/t4295389115(0) o36->c7885d2c-e89a-473e-ac25-d7fe74882afe@192.168.203.54@tcp:688/0 lens 560/2880 e 0 to 0 dl 1713300293 ref 1 fl Interpret:/202/0 rc 0/0 job:'lfs.0' uid:0 gid:0 [ 3893.753413] Lustre: DEBUG MARKER: == recovery-small test 110c: create remote directory: drop update rep on slave MDT ========================================================== 16:44:02 (1713300242) [ 3910.185760] Lustre: 8080:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713300244/real 1713300244] req@ffff8800a04e3100 x1796521489780544/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 264/4320 e 0 to 1 dl 1713300260 ref 2 fl
Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 3910.199775] Lustre: lustre-MDT0000-osp-MDT0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3910.206795] Lustre: Skipped 39 previous similar messages [ 3910.210548] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 3910.215624] Lustre: lustre-MDT0000-osp-MDT0001: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 3910.220057] Lustre: Skipped 38 previous similar messages [ 3915.276309] Lustre: DEBUG MARKER: == recovery-small test 110d: remove remote directory: drop client req ========================================================== 16:44:24 (1713300264) [ 3980.728801] Lustre: DEBUG MARKER: == recovery-small test 110e: remove remote directory: drop master rep ========================================================== 16:45:29 (1713300329) [ 3981.239705] LustreError: 6931:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a57cc380 x1796521552985216/t4295389134(0) o36->c7885d2c-e89a-473e-ac25-d7fe74882afe@192.168.203.54@tcp:26/0 lens 496/456 e 0 to 0 dl 1713300386 ref 1 fl Interpret:/200/0 rc 0/0 job:'rm.0' uid:0 gid:0 [ 3981.252214] LustreError: 6931:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 4041.228063] Lustre: 16976:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a02f3480 x1796521552985216/t4295389134(0) o36->c7885d2c-e89a-473e-ac25-d7fe74882afe@192.168.203.54@tcp:86/0 lens 496/2888 e 0 to 0 dl 1713300446 ref 1 fl Interpret:/202/0 rc 0/0 job:'rm.0' uid:0 gid:0 [ 4046.143755] Lustre: DEBUG MARKER: == recovery-small test 110f: remove remote directory: drop slave rep ========================================================== 16:46:35 (1713300395) [ 4046.682398] Lustre: *** cfs_fail_loc=1701, val=2147483648*** [ 4046.685068] Lustre: Skipped 3 previous similar messages [ 4062.680743] Lustre: 8080:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713300396/real 1713300396] req@ffff88008c4bea00 x1796521489827968/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 2336/4320 e 0 to 1 dl 1713300412 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 4062.695834] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4067.390745] Lustre: DEBUG MARKER: == recovery-small test 110g: drop reply during migration ========================================================== 16:46:56 (1713300416) [ 4132.604869] Lustre: DEBUG MARKER: == recovery-small test 110h: drop update reply during cross-MDT file rename ========================================================== 16:48:01 (1713300481) [ 4149.184737] Lustre: 8080:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713300483/real 1713300483] req@ffff88006e02ed80 x1796521489857344/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 1816/4320 e 0 to 1 dl 1713300499 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 4149.203186] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4154.066054] Lustre: DEBUG MARKER: == recovery-small test 110i: drop update reply during cross-MDT dir rename ========================================================== 16:48:23 (1713300503) [ 4170.627621] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep 
former export from same NID [ 4175.484629] Lustre: DEBUG MARKER: == recovery-small test 110j: drop update reply during cross-MDT ln ========================================================== 16:48:44 (1713300524) [ 4191.999699] Lustre: 8080:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713300526/real 1713300526] req@ffff8800a011ce00 x1796521489874688/t0(0) o1000->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 1488/4320 e 0 to 1 dl 1713300542 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp_up0-1.0' uid:0 gid:0 [ 4192.013653] Lustre: 8080:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 4192.019316] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4196.817678] Lustre: DEBUG MARKER: == recovery-small test 110k: FID_QUERY failed during recovery ========================================================== 16:49:05 (1713300545) [ 4197.611543] Lustre: Failing over lustre-MDT0001 [ 4197.727649] Lustre: server umount lustre-MDT0001 complete [ 4199.663514] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_statfs to node 0@lo failed: rc = -107 [ 4199.667988] LustreError: Skipped 1 previous similar message [ 4199.671112] LustreError: 137-5: lustre-MDT0001: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4199.678723] LustreError: Skipped 82 previous similar messages [ 4201.222428] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 4201.374151] Lustre: lustre-MDT0001: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 4201.386340] Lustre: *** cfs_fail_loc=1103, val=0*** [ 4201.388108] Lustre: lustre-MDT0001: in recovery but waiting for the first client to connect [ 4201.388109] Lustre: Skipped 16 previous similar messages [ 4201.388401] Lustre: lustre-MDT0001: Aborting client recovery [ 4201.388407] LustreError: 31701:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0001: Aborting recovery [ 4201.388451] Lustre: 31723:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4201.388452] Lustre: 31723:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 4203.441273] Lustre: 31723:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0001: disconnect stale client lustre-MDT0000-mdtlov_UUID@ [ 4203.447679] Lustre: 31723:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 4203.452119] Lustre: lustre-MDT0001: disconnecting 1 stale clients [ 4203.455533] Lustre: 31723:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4203.462049] Lustre: lustre-MDT0001-osd: cancel update llog [0x240000400:0x1:0x0] [ 4203.471626] Lustre: lustre-MDT0000-osp-MDT0001: cancel update llog [0x200000401:0x1:0x0] [ 4203.504824] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000402:6056 to 0x2c0000402:6145) [ 4203.505088] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000bd0:2062 to 0x280000bd0:2561) [ 4204.636079] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 4206.343620] Lustre: Failing over lustre-MDT0001 [ 4206.383826] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping) [ 4206.387516] 
Lustre: Skipped 2 previous similar messages [ 4206.440628] Lustre: server umount lustre-MDT0001 complete [ 4209.680206] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc [ 4209.827462] Lustre: lustre-MDT0001: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 4209.844379] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000bd0:2062 to 0x280000bd0:2593) [ 4209.844442] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000402:6056 to 0x2c0000402:6177) [ 4210.980261] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 4214.834164] LustreError: 167-0: lustre-MDT0001-osp-MDT0000: This client was evicted by lustre-MDT0001; in progress operations using this service will fail. [ 4223.888518] Lustre: DEBUG MARKER: == recovery-small test 110m: update resent vs original RPC race ========================================================== 16:49:32 (1713300572) [ 4224.696370] LustreError: 8084:0:(out_handler.c:1172:out_handle()) cfs_race id 525 sleeping [ 4228.603059] Lustre: lustre-MDT0000: Received new MDS connection from 0@lo, keep former export from same NID [ 4228.609343] LustreError: 6937:0:(service.c:1855:ptlrpc_server_request_add()) cfs_fail_race id 525 waking [ 4228.613516] LustreError: 8084:0:(out_handler.c:1172:out_handle()) cfs_fail_race id 525 awake: rc=1088 [ 4232.615087] LustreError: 6937:0:(out_handler.c:1172:out_handle()) cfs_fail_race id 525 waking [ 4237.029367] Lustre: DEBUG MARKER: == recovery-small test 111: mdd setup fail should not cause umount oops ========================================================== 16:49:46 (1713300586) [ 4238.017681] Lustre: Failing over lustre-MDT0000 [ 4238.025862] LustreError: 6918:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713300588 with bad export cookie 6968476881348393370 [ 4238.027124] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4238.098107] Lustre: server umount lustre-MDT0000 complete [ 4241.077391] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4241.143285] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4241.146123] LustreError: Skipped 2 previous similar messages [ 4241.232933] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4241.234928] Lustre: Skipped 9 previous similar messages [ 4241.243700] Lustre: *** cfs_fail_loc=151, val=0*** [ 4241.245192] LustreError: 3459:0:(mdd_device.c:687:mdd_changelog_init()) lustre-MDD0000: changelog setup during init failed: rc = -5 [ 4241.248087] LustreError: 3459:0:(mdd_device.c:1402:mdd_prepare()) lustre-MDD0000: failed to initialize changelog: rc = -5 [ 4241.251044] LustreError: 3459:0:(tgt_mount.c:2223:server_fill_super()) Unable to start targets: -5 [ 4241.254764] Lustre: Failing over lustre-MDT0000 [ 4241.256468] LustreError: 3506:0:(llog_osd.c:247:llog_osd_read_header()) lustre-MDT0001-osp-MDT0000: can't read llog [0x24000040b:0x1:0x0] header: rc = -5 [ 4241.259933] Lustre: 3506:0:(llog_cat.c:809:llog_cat_process_common()) lustre-MDT0001-osp-MDT0000: can't find llog handle [0x24000040b:0x1:0x0]: rc = -5 [ 4241.265397] LustreError: 3506:0:(llog.c:805:llog_process_thread()) lustre-MDT0001-osp-MDT0000 retry remote llog process [ 4241.269681] LustreError: 3506:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 0, retries 0, failed: rc = -11 [ 4241.325514] Lustre: server umount lustre-MDT0000 complete [ 4241.328224] LustreError: 3459:0:(super25.c:189:lustre_fill_super()) llite: Unable to mount : rc = -5 [ 4244.008668] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4244.069563] LustreError: 4029:0:(ldlm_resource.c:1128:ldlm_resource_complain()) MGC192.168.203.154@tcp: namespace resource [0x65727473756c:0x0:0x0].0x0 (ffff8800b52c3100) refcount nonzero (1) after lock cleanup; forcing cleanup. [ 4244.074204] LustreError: 6930:0:(mgc_request.c:627:do_requeue()) failed processing log: -5 [ 4245.187435] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 4246.897818] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 4246.902146] Lustre: Skipped 10 previous similar messages [ 4249.178100] Lustre: lustre-MDT0000: Recovery over after 0:02, of 2 clients 2 recovered and 0 were evicted. [ 4249.182603] Lustre: Skipped 10 previous similar messages [ 4249.205019] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27461 to 0x280000401:27489) [ 4249.205029] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27460 to 0x2c0000401:27489) [ 4250.278710] Lustre: DEBUG MARKER: == recovery-small test 112a: bulk resend while original request is in progress ========================================================== 16:49:59 (1713300599) [ 4250.806162] LustreError: 18988:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 sleeping for 20000ms [ 4270.810709] LustreError: 18988:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 awake [ 4275.581097] Lustre: DEBUG MARKER: == recovery-small test 115a: read: late REQ MDunlink and no bulk ========================================================== 16:50:24 (1713300624) [ 4281.359297] Lustre: mdt_out00_002: service thread pid 8084 was inactive for 40.103 seconds.
The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [ 4281.367527] Pid: 8084, comm: mdt_out00_002 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 4281.371321] Call Trace: [ 4281.372770] [<0>] target_bulk_io+0x5c6/0x8a0 [ptlrpc] [ 4281.375141] [<0>] tgt_send_buffer+0xeb/0x210 [ptlrpc] [ 4281.377525] [<0>] out_read+0x7f4/0xb40 [ptlrpc] [ 4281.379706] [<0>] out_handle+0x1969/0x2450 [ptlrpc] [ 4281.382052] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 4281.384568] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 4281.387431] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 4281.389562] [<0>] kthread+0xe4/0xf0 [ 4281.391203] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 4281.393474] [<0>] 0xfffffffffffffffe [ 4284.111035] Lustre: DEBUG MARKER: == recovery-small test 115b: write: late REQ MDunlink and no bulk ========================================================== 16:50:33 (1713300633) [ 4288.217425] Lustre: *** cfs_fail_loc=215, val=2*** [ 4288.219842] Lustre: Skipped 1 previous similar message [ 4292.673794] Lustre: DEBUG MARKER: == recovery-small test 115c: read: late Reply MDunlink and no bulk ========================================================== 16:50:41 (1713300641) [ 4298.441108] Lustre: DEBUG MARKER: == recovery-small test 115d: write: late Reply MDunlink and no bulk ========================================================== 16:50:47 (1713300647) [ 4304.346655] Lustre: DEBUG MARKER: == recovery-small test 115e: read: late Bulk MDunlink and no reply ========================================================== 16:50:53 (1713300653) [ 4310.238678] Lustre: DEBUG MARKER: == recovery-small test 115f: read: late REQ MDunlink and no reply ========================================================== 16:50:59 (1713300659) [ 4312.001896] LustreError: 9201:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a0493100 x1796521553034816/t0(0) o400->ad3dd33e-0ba3-4a5c-880f-d90f147ef83f@192.168.203.54@tcp:313/0 lens 224/224 e 0 to 0 dl 1713300673 ref 1 fl Interpret:H/200/0 rc 0/0 job:'kworker.0' uid:0 gid:0 [ 4312.013935] LustreError: 9201:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 5 previous similar messages [ 4318.733397] Lustre: DEBUG MARKER: == recovery-small test 115g: read: late REQ MDunlink and Reply MDunlink ========================================================== 16:51:07 (1713300667) [ 4341.258690] LustreError: 8084:0:(ldlm_lib.c:3576:target_bulk_io()) @@@ timeout on bulk WRITE after 100+1713296351s req@ffff8800776c7100 x1796521489904256/t0(0) o1000->lustre-MDT0000-mdtlov_UUID@0@lo:242/0 lens 336/33016 e 0 to 0 dl 1713300602 ref 1 fl Interpret:/200/0 rc 0/0 job:'lod0000_rec0001.0' uid:0 gid:0 [ 4341.271300] Lustre: 8084:0:(service.c:2359:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (11/89s); client may timeout req@ffff8800776c7100 x1796521489904256/t0(0) o1000->lustre-MDT0000-mdtlov_UUID@0@lo:242/0 lens 336/33016 e 0 to 0 dl 1713300602 ref 1 fl Complete:/200/0 rc -110/-110 job:'lod0000_rec0001.0' uid:0 gid:0 [ 4382.432641] Lustre: DEBUG MARKER: == recovery-small test 120: flock race: completion vs. 
evict ========================================================== 16:52:11 (1713300731) [ 4384.844572] Lustre: 11077:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting ad3dd33e-0ba3-4a5c-880f-d90f147ef83f at administrative request [ 4390.899030] Lustre: 11146:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting ad3dd33e-0ba3-4a5c-880f-d90f147ef83f at administrative request [ 4398.932019] Lustre: 11216:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting ad3dd33e-0ba3-4a5c-880f-d90f147ef83f at administrative request [ 4402.971230] Lustre: 11283:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting ad3dd33e-0ba3-4a5c-880f-d90f147ef83f at administrative request [ 4411.688482] Lustre: 11357:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting ad3dd33e-0ba3-4a5c-880f-d90f147ef83f at administrative request [ 4419.715856] Lustre: 11429:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting ad3dd33e-0ba3-4a5c-880f-d90f147ef83f at administrative request [ 4438.511887] Lustre: 11636:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting ad3dd33e-0ba3-4a5c-880f-d90f147ef83f at administrative request [ 4438.518261] Lustre: 11636:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 2 previous similar messages [ 4446.674009] Lustre: DEBUG MARKER: == recovery-small test 113: ldlm enqueue dropped reply should not cause deadlocks ========================================================== 16:53:15 (1713300795) [ 4507.136172] Lustre: lustre-MDT0000: Client ad3dd33e-0ba3-4a5c-880f-d90f147ef83f (at 192.168.203.54@tcp) reconnecting [ 4507.141273] Lustre: Skipped 5 previous similar messages [ 4516.099481] Lustre: DEBUG MARKER: == recovery-small test 130a: enqueue resend on not existing file ========================================================== 16:54:25 (1713300865) [ 4516.782532] LustreError: 8082:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 sleeping for 10000ms [ 4526.786715] LustreError: 8082:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4581.104059] Lustre: DEBUG MARKER: == recovery-small test 130b: enqueue resend on a stale inode ========================================================== 16:55:30 (1713300930) [ 4581.766066] LustreError: 16976:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 sleeping for 10000ms [ 4591.770738] LustreError: 16976:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4591.775309] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 4591.777809] Lustre: Skipped 7 previous similar messages [ 4641.781021] Lustre: *** cfs_fail_loc=217, val=0*** [ 4646.321219] Lustre: DEBUG MARKER: == recovery-small test 130c: layout intent resend on a stale inode ========================================================== 16:56:35 (1713300995) [ 4648.949824] LustreError: 16976:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 sleeping for 10000ms [ 4658.953703] LustreError: 16976:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4674.544958] Lustre: DEBUG MARKER: == recovery-small test 132: long punch =================== 16:57:03 (1713301023) [ 4675.175309] LustreError: 18988:0:(ofd_dev.c:2089:ofd_punch_hdl()) cfs_fail_timeout id 236 sleeping for 120000ms [ 4747.278672] Lustre: ll_ost_io00_003: service thread pid 18988 was inactive for 72.103 seconds. The thread might be hung, or it might only be slow and will resume later.
Dumping the stack trace for debugging purposes: [ 4747.287466] Pid: 18988, comm: ll_ost_io00_003 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 4747.291449] Call Trace: [ 4747.292780] [<0>] __cfs_fail_timeout_set+0xe9/0x210 [libcfs] [ 4747.295710] [<0>] ofd_punch_hdl+0xa8c/0xb40 [ofd] [ 4747.298125] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 4747.300816] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 4747.303707] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 4747.306210] [<0>] kthread+0xe4/0xf0 [ 4747.307949] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 4747.310516] [<0>] 0xfffffffffffffffe [ 4795.186686] LustreError: 18988:0:(ofd_dev.c:2089:ofd_punch_hdl()) cfs_fail_timeout id 236 awake [ 4800.158079] Lustre: DEBUG MARKER: == recovery-small test 131: IO vs evict results in IO under stale lock ========================================================== 16:59:09 (1713301149) [ 4802.151978] Lustre: 16643:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting ad3dd33e-0ba3-4a5c-880f-d90f147ef83f at administrative request [ 4802.158394] LustreError: 6922:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) cfs_fail_timeout id 31e sleeping for 4000ms [ 4804.963682] LustreError: 6922:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) cfs_fail_timeout interrupted [ 4808.994978] Lustre: DEBUG MARKER: == recovery-small test 133: don't fail on flock resend === 16:59:17 (1713301157) [ 4853.083612] Lustre: DEBUG MARKER: == recovery-small test 134: race between failover and search for reply data free slot ========================================================== 17:00:02 (1713301202) [ 4853.641289] Lustre: DEBUG MARKER: SKIP: recovery-small test_134 Need 2+ clients, have 1 [ 4856.470339] Lustre: DEBUG MARKER: == recovery-small test 135: DOM: open/create resend to return size ========================================================== 17:00:05 (1713301205) [ 4857.075084] LustreError: 9223:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88012cf7ea00 x1796521553119616/t12884901906(0) o101->ad3dd33e-0ba3-4a5c-880f-d90f147ef83f@192.168.203.54@tcp:110/0 lens 648/720 e 0 to 0 dl 1713301225 ref 1 fl Interpret:/200/0 rc 301/0 job:'openfile.0' uid:0 gid:0 [ 4857.087792] LustreError: 9223:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 4 previous similar messages [ 4880.078216] Lustre: 6931:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800843d5180 x1796521553119616/t12884901906(0) o101->ad3dd33e-0ba3-4a5c-880f-d90f147ef83f@192.168.203.54@tcp:133/0 lens 648/3488 e 0 to 0 dl 1713301248 ref 1 fl Interpret:/202/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 4880.090527] Lustre: 6931:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 4883.021446] Lustre: DEBUG MARKER: SKIP: recovery-small test_136 skipping excluded test 136 [ 4885.007011] Lustre: DEBUG MARKER: == recovery-small test 137: late resend must be skipped if already applied ========================================================== 17:00:33 (1713301233) [ 4886.504925] LustreError: 6931:0:(mdt_reint.c:855:mdt_reint_setattr()) cfs_race id 525 sleeping [ 4891.508684] LustreError: 6931:0:(mdt_reint.c:855:mdt_reint_setattr()) cfs_fail_race id 525 awake: rc=0 [ 4891.534527] LustreError: 6931:0:(mdt_reint.c:855:mdt_reint_setattr()) cfs_fail_race id 525 waking [ 4911.319441] Lustre: DEBUG MARKER: == recovery-small test 138: Umount MDT during recovery === 17:01:00 (1713301260) [ 4912.658804] Lustre: Failing over lustre-MDT0000 [ 4912.670484] LustreError:
20300:0:(lod_dev.c:1129:lod_process_config()) cfs_fail_timeout id 724 sleeping for 10000ms [ 4915.232422] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4915.233363] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4915.247326] Lustre: Skipped 14 previous similar messages [ 4915.248530] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107 [ 4915.248532] LustreError: Skipped 1 previous similar message [ 4920.240104] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4920.243807] Lustre: Skipped 6 previous similar messages [ 4922.674696] LustreError: 20300:0:(lod_dev.c:1129:lod_process_config()) cfs_fail_timeout id 724 awake [ 4922.775151] Lustre: server umount lustre-MDT0000 complete [ 4925.248136] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4925.255053] LustreError: Skipped 8 previous similar messages [ 4935.738346] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 4935.795179] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4935.800079] LustreError: Skipped 1 previous similar message [ 4935.897674] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4935.901603] Lustre: Skipped 1 previous similar message [ 4935.922080] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 4935.924130] Lustre: Skipped 3 previous similar messages [ 4937.041012] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 4940.914733] LustreError: 20874:0:(lod_dev.c:475:lod_sub_recovery_thread()) cfs_fail_timeout id 724 awake [ 4940.915836] Lustre: lustre-MDT0000-lwp-MDT0001: Connection restored to 192.168.203.154@tcp (at 0@lo) [ 4940.915839] Lustre: Skipped 12 previous similar messages [ 4976.525689] LustreError: 20874:0:(lod_dev.c:475:lod_sub_recovery_thread()) cfs_fail_timeout id 724 awake [ 4976.527414] LustreError: 20874:0:(lod_dev.c:475:lod_sub_recovery_thread()) Skipped 6 previous similar messages [ 4993.334853] Lustre: Failing over lustre-MDT0000 [ 4995.924708] Lustre: 20875:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4995.929970] Lustre: 20875:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 1 previous similar message [ 4996.000427] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 4996.004372] Lustre: Skipped 3 previous similar messages [ 4996.828700] LustreError: 20874:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 61, retries 11, failed: rc = -5 [ 4996.835918] Lustre: 20875:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 4996.862557] Lustre: 20875:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 5001.008383] Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5001.011831] Lustre: Skipped 3 previous similar messages [ 5003.540927] Lustre: server umount lustre-MDT0000 complete [ 5006.886545] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5008.148175] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 5008.961177] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 5008.965737] Lustre: lustre-MDT0000: Denying connection for new client 2b89361a-a71a-4393-b6d2-97176ad1d853 (at 192.168.203.54@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59 [ 5012.074642] Lustre: lustre-MDT0000: Recovery over after 0:03, of 1 clients 1 recovered and 0 were evicted. [ 5012.100375] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27502 to 0x2c0000401:27521) [ 5012.100946] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27504 to 0x280000401:27521) [ 5018.310059] Lustre: DEBUG MARKER: == recovery-small test 139: corrupted catid won't cause crash ========================================================== 17:02:47 (1713301367) [ 5018.904822] Lustre: Failing over lustre-MDT0000 [ 5018.990512] Lustre: server umount lustre-MDT0000 complete [ 5022.304531] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc [ 5022.444141] Lustre: *** cfs_fail_loc=2106, val=104*** [ 5022.445481] LustreError: 23958:0:(osp_sync.c:1415:osp_sync_llog_init()) lustre-OST0000-osc-MDT0000: the catid [0x0:0x68:0x0] for init llog 0 is bad [ 5023.590708] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 5027.525329] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27502 to 0x2c0000401:27553) [ 5027.525353] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27504 to 0x280000401:27553) [ 5028.374484] Lustre: DEBUG MARKER: == recovery-small test 140a: local mount is flagged properly ========================================================== 17:02:57 (1713301377) [ 5029.720447] Lustre: lustre-MDT0000: local client d8cfe16c-8f71-49b2-8c1d-5c035f8b2c6a w/o recovery [ 5029.726293] Lustre: Skipped 1 previous similar message [ 5029.739383] Lustre: Mounted lustre-client [ 5030.489017] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 5031.950716] Lustre: Unmounted lustre-client [ 5033.288317] Lustre: Mounted lustre-client [ 5034.059933] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 5035.565751] Lustre: Unmounted lustre-client [ 5040.717894] Lustre: DEBUG MARKER: == recovery-small test 140b: local mount is excluded from recovery ========================================================== 17:03:09 (1713301389) [ 5042.071950] Lustre: lustre-MDT0000: local client 92fa1cd1-a38a-400e-b513-1f81aba141ef w/o recovery [ 5042.075904] Lustre: Skipped 1 previous similar message [ 5042.084787] Lustre: Mounted lustre-client [ 5042.868263] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all [ 5045.681894] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5046.832638] Lustre: Unmounted lustre-client [ 5047.875409] Lustre: Failing over lustre-MDT0000 [ 5047.969624] Lustre: server umount lustre-MDT0000 complete [ 5062.242154] LDISKFS-fs (dm-0): recovery complete [ 5062.244921] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. 
[ 5063.523328] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5067.454059] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27504 to 0x280000401:27585)
[ 5067.454104] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27502 to 0x2c0000401:27585)
[ 5068.266436] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 5068.821728] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 5075.272172] Lustre: DEBUG MARKER: == recovery-small test 141: do not lose locks on MGS restart ========================================================== 17:03:44 (1713301424)
[ 5076.126969] Lustre: DEBUG MARKER: SKIP: recovery-small test_141 cannot run in local mode or from build tree
[ 5078.911177] Lustre: DEBUG MARKER: == recovery-small test 142: orphan name stub can be cleaned up in startup ========================================================== 17:03:47 (1713301427)
[ 5079.279784] Lustre: *** cfs_fail_loc=165, val=0***
[ 5079.950058] Lustre: Failing over lustre-MDT0000
[ 5080.038679] Lustre: server umount lustre-MDT0000 complete
[ 5083.124023] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5084.434043] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5088.336622] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27502 to 0x2c0000401:27617)
[ 5088.336635] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27587 to 0x280000401:27617)
[ 5088.344122] LustreError: 323:0:(osd_handler.c:297:osd_idc_find_or_init()) can't lookup: rc = -2
[ 5090.012922] Lustre: DEBUG MARKER: == recovery-small test 143: orphan cleanup thread shouldn't be blocked even delete failed ========================================================== 17:03:58 (1713301438)
[ 5090.712540] Lustre: Failing over lustre-MDT0000
[ 5090.802054] Lustre: server umount lustre-MDT0000 complete
[ 5093.141496] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
[ 5096.323072] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
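Each "Failing over ... / server umount ... complete / LDISKFS-fs ... mounted filesystem" triplet in this run is a scripted failover, not a crash: the target is cleanly unmounted and immediately remounted so that clients and peer targets must go through the recovery window that follows. Roughly the manual equivalent on the server, with hypothetical device and mount-point paths (real runs go through the test framework's facet helpers):

  # fail over MDT0000 by hand: drop the target, then bring it straight back
  umount /mnt/lustre-mds1                   # logs "server umount ... complete"
  mount -t lustre /dev/mapper/mds1_flakey /mnt/lustre-mds1
  # the remount logs the LDISKFS mount line, then the recovery window opens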
[ 5097.646761] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5099.175920] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 5101.554236] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:27587 to 0x280000401:27649)
[ 5101.554288] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:27502 to 0x2c0000401:27649)
[ 5108.754412] Lustre: DEBUG MARKER: == recovery-small test 144a: MDT failover should stop precreation threads ========================================================== 17:04:17 (1713301457)
[ 5110.760349] Lustre: Failing over lustre-OST0000
[ 5110.799544] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_create to node 0@lo failed: rc = -107
[ 5110.801639] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping)
[ 5110.821251] Lustre: server umount lustre-OST0000 complete
[ 5123.477776] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5123.483628] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5125.284919] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5127.967043] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 5128.565528] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 5190.601568] Lustre: Failing over lustre-MDT0000
[ 5190.953062] Lustre: server umount lustre-MDT0000 complete
[ 5191.679525] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 5203.902489] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5205.188128] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5209.088644] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:52650 to 0x2c0000401:52673)
[ 5209.088653] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:52650 to 0x280000401:52673)
[ 5209.945507] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 5210.507414] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 5212.539431] Lustre: Failing over lustre-MDT0000
[ 5212.626946] Lustre: server umount lustre-MDT0000 complete
[ 5225.457135] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5226.750034] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5230.656531] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:52650 to 0x2c0000401:52705)
[ 5230.656629] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:52650 to 0x280000401:52705)
[ 5231.523351] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 5232.113970] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 5250.587765] Lustre: DEBUG MARKER: == recovery-small test 144b: orphan cleanup shouldn't be blocked for no objects+failover situation ========================================================== 17:06:39 (1713301599)
[ 5252.935490] Lustre: Failing over lustre-OST0000
[ 5252.937678] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_create to node 0@lo failed: rc = -19
[ 5253.099400] Lustre: lustre-OST0000: Not available for connect from 192.168.203.54@tcp (stopping)
[ 5253.101071] Lustre: Skipped 1 previous similar message
[ 5253.270212] Lustre: server umount lustre-OST0000 complete
[ 5266.099775] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5266.103405] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5267.802564] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5270.595945] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 5271.220073] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 5272.278402] LustreError: 21615:0:(lod_qos.c:1401:lod_ost_alloc_specific()) can't lstripe objid [0x20000d6f1:0x50:0x0]: have 662 want 1000
[ 5272.796064] LustreError: 8082:0:(lod_qos.c:1401:lod_ost_alloc_specific()) can't lstripe objid [0x20000d6f1:0x54:0x0]: have 662 want 1000
[ 5272.798639] LustreError: 8082:0:(lod_qos.c:1401:lod_ost_alloc_specific()) Skipped 3 previous similar messages
[ 5273.873109] LustreError: 6931:0:(lod_qos.c:1401:lod_ost_alloc_specific()) can't lstripe objid [0x20000d6f1:0x5f:0x0]: have 662 want 1000
[ 5273.875354] LustreError: 6931:0:(lod_qos.c:1401:lod_ost_alloc_specific()) Skipped 10 previous similar messages
[ 5343.355720] Lustre: DEBUG MARKER: == recovery-small test 144c: reconnection during orphan cleanup shouldn't lose LAST_ID synchronization ========================================================== 17:08:12 (1713301692)
[ 5345.791632] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x280000401 to 0x2800013a0
[ 5379.161527] Lustre: Failing over lustre-MDT0000
[ 5379.572407] Lustre: lustre-MDT0000: Not available for connect from 192.168.203.54@tcp (stopping)
[ 5380.959570] LustreError: 11-0: lustre-MDT0000-osp-MDT0001: operation mds_statfs to node 0@lo failed: rc = -107
[ 5381.701159] Lustre: server umount lustre-MDT0000 complete
[ 5385.201118] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
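The wait_import_state_mount markers are the client side of every failover: the helper polls the import until it reports FULL (or IDLE) again, which is why each cycle ends with an "in FULL state after N sec" marker. A simplified loop in the spirit of that helper, assuming the stock "<uuid> <state>" output of the ost_server_uuid parameter:

  # wait until the OST0000 import has reconnected and is usable again
  until lctl get_param -n osc.lustre-OST0000-osc-*.ost_server_uuid |
        grep -qE 'FULL|IDLE'; do
      sleep 1
  done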
[ 5386.487910] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5388.022611] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 5390.414080] LustreError: 4720:0:(ofd_dev.c:1523:ofd_create_hdl()) cfs_fail_timeout id 254 sleeping for 5000ms
[ 5390.417194] LustreError: 4720:0:(ofd_dev.c:1523:ofd_create_hdl()) Skipped 14 previous similar messages
[ 5394.390773] Lustre: lustre-OST0000: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting
[ 5394.394508] Lustre: Skipped 5 previous similar messages
[ 5394.813683] LustreError: 18803:0:(ofd_dev.c:1523:ofd_create_hdl()) cfs_fail_timeout interrupted
[ 5394.817606] LustreError: 18803:0:(ofd_dev.c:1528:ofd_create_hdl()) lustre-OST0000: dropping old orphan cleanup request
[ 5394.819683] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000401:65190 to 0x2c0000401:65536)
[ 5394.828796] LustreError: 11269:0:(osp_precreate.c:992:osp_precreate_cleanup_orphans()) lustre-OST0000-osc-MDT0000: cannot cleanup orphans: rc = -116
[ 5394.922666] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x2c0000401 to 0x2c0000403
[ 5395.835268] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8687 to 0x2800013a0:8769)
[ 5411.664371] Lustre: DEBUG MARKER: == recovery-small test 145: connect mdtlovs and process update logs after recovery expire ========================================================== 17:09:20 (1713301760)
[ 5412.227013] Lustre: DEBUG MARKER: SKIP: recovery-small test_145 needs >= 3 MDTs
[ 5415.012760] Lustre: DEBUG MARKER: == recovery-small test 146: test eviction is counted properly ========================================================== 17:09:24 (1713301764)
[ 5415.680523] Lustre: 13347:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 2b89361a-a71a-4393-b6d2-97176ad1d853 at administrative request
[ 5420.657473] Lustre: DEBUG MARKER: == recovery-small test 147: Check client reconnect ======= 17:09:29 (1713301769)
[ 5421.427731] Lustre: *** cfs_fail_loc=225, val=0***
[ 5511.459478] Lustre: *** cfs_fail_loc=225, val=0***
[ 5511.461432] Lustre: Skipped 3 previous similar messages
[ 5573.552667] Lustre: lustre-MDT0000: haven't heard from client lustre-MDT0000-lwp-OST0001_UUID (at 0@lo) in 48 seconds. I think it's dead, and I am evicting it. exp ffff8800a0412800, cur 1713301923 expire 1713301893 last 1713301875
[ 5573.565310] Lustre: Skipped 1 previous similar message
[ 5575.840296] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 5575.847609] Lustre: Skipped 36 previous similar messages
[ 5575.852760] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.203.154@tcp (at 0@lo)
[ 5575.858970] Lustre: Skipped 39 previous similar messages
[ 5589.367273] Lustre: DEBUG MARKER: == recovery-small test 148: data corruption through resend ========================================================== 17:12:18 (1713301938)
[ 5618.631723] LustreError: 18988:0:(tgt_handler.c:2880:tgt_brw_write()) cfs_fail_timeout id 227 awake
[ 5618.636105] LustreError: 18988:0:(tgt_handler.c:2880:tgt_brw_write()) Skipped 5 previous similar messages
[ 5626.142703] Lustre: DEBUG MARKER: == recovery-small test 149: skip orphan removal at umount ========================================================== 17:12:55 (1713301975)
[ 5630.735684] LustreError: 11-0: lustre-MDT0001-osp-MDT0000: operation mds_statfs to node 0@lo failed: rc = -107
[ 5630.740807] Lustre: lustre-MDT0001: Not available for connect from 0@lo (stopping)
[ 5630.745331] Lustre: Skipped 4 previous similar messages
[ 5633.340630] Lustre: server umount lustre-MDT0001 complete
[ 5635.935878] LustreError: 137-5: lustre-MDT0001: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 5635.942837] LustreError: Skipped 79 previous similar messages
[ 5637.415913] mdt_io00_002 (6946) used greatest stack depth: 10008 bytes left
[ 5637.434139] Lustre: server umount lustre-MDT0000 complete
[ 5640.431055] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5640.488847] LustreError: 166-1: MGC192.168.203.154@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 5640.493969] LustreError: Skipped 8 previous similar messages
[ 5640.627042] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 5640.632414] Lustre: Skipped 10 previous similar messages
[ 5640.648278] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8773 to 0x2800013a0:8801)
[ 5640.653082] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:33)
[ 5641.689445] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5644.702670] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
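Two different eviction paths appear above: test 146 drives obd_export_evict_by_uuid() administratively, while the "haven't heard from client ... I think it's dead, and I am evicting it" line at 5573.552667 is the ping evictor expiring an export on its own timeout. The administrative path is reachable from userspace through a per-target evict_client parameter that takes the client UUID; the parameter prefix below (mdt.) varies across Lustre versions, so treat the exact path as an assumption:

  # list exports as the MDT sees them, then evict one client by UUID
  lctl get_param mdt.lustre-MDT0000.exports.*.uuid
  lctl set_param mdt.lustre-MDT0000.evict_client=2b89361a-a71a-4393-b6d2-97176ad1d853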
[ 5644.880991] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000bd0:2062 to 0x280000bd0:2625)
[ 5644.881005] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000402:6056 to 0x2c0000402:6209)
[ 5645.925213] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5657.414796] Lustre: DEBUG MARKER: == recovery-small test 150: statfs when MDT0 offline with lazystatfs option ========================================================== 17:13:26 (1713302006)
[ 5658.137923] Lustre: Failing over lustre-MDT0000
[ 5658.225057] Lustre: server umount lustre-MDT0000 complete
[ 5662.981919] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5663.166258] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 5663.168825] Lustre: Skipped 12 previous similar messages
[ 5664.292955] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5665.804247] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 5666.610025] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 2 clients reconnect
[ 5666.616435] Lustre: Skipped 9 previous similar messages
[ 5668.184942] Lustre: lustre-MDT0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted.
[ 5668.191168] Lustre: Skipped 9 previous similar messages
[ 5668.215459] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:65)
[ 5668.215474] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8773 to 0x2800013a0:8833)
[ 5674.901554] Lustre: DEBUG MARKER: == recovery-small test 152: QoS object allocation could be awakened in case of OST failover ========================================================== 17:13:43 (1713302023)
[ 5676.113387] Lustre: DEBUG MARKER: SKIP: recovery-small test_152 MDS Linux kernel does not support killable semaphore
[ 5678.790320] Lustre: DEBUG MARKER: == recovery-small test 153: evict vs reconnect race ====== 17:13:47 (1713302027)
[ 5681.634159] Lustre: *** cfs_fail_loc=174, val=0***
[ 5681.636855] Lustre: Skipped 5 previous similar messages
[ 5699.174746] Lustre: 3495:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713302033/real 1713302033] req@ffff880073430380 x1796521495258048/t0(0) o400->lustre-MDT0000-osp-MDT0001@0@lo:24/4 lens 224/224 e 0 to 1 dl 1713302049 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 5699.242855] Lustre: lustre-MDT0000: Received new LWP connection from 0@lo, keep former export from same NID
[ 5699.247111] Lustre: *** cfs_fail_loc=174, val=0***
[ 5699.249177] Lustre: Skipped 2 previous similar messages
[ 5702.786313] Lustre: Failing over lustre-MDT0000
[ 5702.883480] Lustre: server umount lustre-MDT0000 complete
[ 5706.255028] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5707.514441] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5709.024257] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
[ 5709.247724] Lustre: 3494:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713302043/real 1713302043] req@ffff88009fcaf800 x1796521495260544/t0(0) o400->lustre-MDT0000-lwp-MDT0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713302059 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 5709.260377] Lustre: 3494:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 4 previous similar messages
[ 5711.472171] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:97)
[ 5711.472208] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8773 to 0x2800013a0:8865)
[ 5718.579602] Lustre: DEBUG MARKER: == recovery-small test 154a: corruption update llog can be skipped ========================================================== 17:14:27 (1713302067)
[ 5719.287388] Lustre: Failing over lustre-MDT0001
[ 5719.378950] Lustre: server umount lustre-MDT0001 complete
[ 5721.706603] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)
[ 5725.114015] LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5726.339827] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5727.694298] Lustre: Failing over lustre-MDT0000
[ 5727.769876] Lustre: server umount lustre-MDT0000 complete
[ 5730.881874] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5732.187712] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5733.648699] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 20
[ 5736.061606] LustreError: 26273:0:(llog_osd.c:268:llog_osd_read_header()) lustre-MDT0001-osp-MDT0000: bad log [0x240000409:0x1:0x0] header magic: 0x3a0d2312 (expected 0x10645539)
[ 5736.068936] Lustre: 26273:0:(lod_sub_object.c:981:lod_sub_prep_llog()) lustre-MDT0000-mdtlov: renew invalid update log [0x240000409:0x1:0x0]: rc = -22
[ 5736.075935] Lustre: lustre-OST0001: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x2c0000402:6056 to 0x2c0000402:6241)
[ 5736.076804] Lustre: lustre-OST0000: new connection from lustre-MDT0001-mdtlov (cleaning up unused objects from 0x280000bd0:2062 to 0x280000bd0:2657)
[ 5736.113992] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8773 to 0x2800013a0:8897)
[ 5736.114533] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:129)
[ 5743.619813] Lustre: DEBUG MARKER: == recovery-small test 154b: restore update llog after failed recovery ========================================================== 17:14:52 (1713302092)
[ 5744.318248] Lustre: Failing over lustre-MDT0000
[ 5744.413225] Lustre: server umount lustre-MDT0000 complete
[ 5747.833327] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
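The _wait_recovery_complete markers poll the target's recovery_status parameter, and test 154a leans on it: the bad header magic 0x3a0d2312 (expected 0x10645539) must invalidate only that one update llog while recovery still runs to completion. Checking the same thing by hand, using the wildcard form the marker itself shows (the field names in the comment are from memory and may differ by version):

  # report recovery progress for MDT0000; status should reach COMPLETE
  lctl get_param *.lustre-MDT0000.recovery_status
  # fields include status, recovery_start, completed_clients, evicted_clients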
[ 5748.009964] LustreError: 28364:0:(lod_dev.c:475:lod_sub_recovery_thread()) cfs_fail_timeout id 724 sleeping for 5000ms
[ 5748.015786] LustreError: 28364:0:(lod_dev.c:475:lod_sub_recovery_thread()) Skipped 1 previous similar message
[ 5748.020787] Lustre: lustre-MDT0000: Aborting client recovery
[ 5748.023099] LustreError: 28335:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery
[ 5748.027119] Lustre: 28365:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 5748.033051] Lustre: 28365:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 1 previous similar message
[ 5753.019678] LustreError: 28364:0:(lod_dev.c:475:lod_sub_recovery_thread()) cfs_fail_timeout id 724 awake
[ 5753.023831] LustreError: 28364:0:(lod_dev.c:525:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 5, retries 0, failed: rc = -5
[ 5753.029816] Lustre: 28365:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client 2b89361a-a71a-4393-b6d2-97176ad1d853@
[ 5753.036793] Lustre: lustre-MDT0000: disconnecting 2 stale clients
[ 5753.040985] Lustre: 28365:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 5753.050746] Lustre: lustre-MDT0000-osd: cancel update llog [0x200009870:0x1:0x0]
[ 5753.088879] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8773 to 0x2800013a0:8929)
[ 5753.088882] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:3 to 0x2c0000403:161)
[ 5754.252483] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5755.815821] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 30
[ 5763.516558] Lustre: DEBUG MARKER: == recovery-small test 155: failover after client remount ========================================================== 17:15:12 (1713302112)
[ 5766.914794] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000
[ 5767.642854] Lustre: Failing over lustre-MDT0000
[ 5767.735599] Lustre: server umount lustre-MDT0000 complete
[ 5782.014800] LDISKFS-fs (dm-0): recovery complete
[ 5782.016865] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 5783.349116] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5784.578025] Lustre: lustre-MDT0000: Denying connection for new client 81d849c0-283b-4580-a2ee-5a01b818a6d8 (at 192.168.203.54@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:59
[ 5785.969169] Lustre: lustre-MDT0000: Denying connection for new client 81d849c0-283b-4580-a2ee-5a01b818a6d8 (at 192.168.203.54@tcp), waiting for 1 known clients (0 recovered, 0 in progress, and 0 evicted) to recover in 0:58
[ 5787.213745] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2800013a0:8773 to 0x2800013a0:8961)
[ 5787.213772] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2c0000403:163 to 0x2c0000403:193)
[ 5793.772041] Lustre: DEBUG MARKER: == recovery-small test 156: tot_granted miscount after client eviction ========================================================== 17:15:42 (1713302142)
[ 5794.445375] Lustre: Setting parameter general.timeout in log params
[ 5797.583115] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000
[ 5798.506697] Lustre: Failing over lustre-OST0000
[ 5798.702686] Lustre: server umount lustre-OST0000 complete
[ 5812.745424] LDISKFS-fs (dm-2): file extents enabled, maximum tree depth=5
[ 5812.907019] LDISKFS-fs (dm-2): recovery complete
[ 5812.908169] LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. Opts: user_xattr,acl,no_mbcache,nodelalloc
[ 5814.694449] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing set_default_debug -1 all
[ 5853.559767] Lustre: lustre-OST0000: recovery is timed out, evict stale exports
[ 5853.563034] Lustre: 1455:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-OST0000: disconnect stale client 81d849c0-283b-4580-a2ee-5a01b818a6d8@192.168.203.54@tcp
[ 5853.569542] Lustre: 1455:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message
[ 5853.574060] Lustre: lustre-OST0000: disconnecting 1 stale clients
[ 5853.577466] Lustre: 1455:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-OST0000: extended recovery timer reached hard limit: 45, extend: 1
[ 5853.585801] Lustre: 1455:0:(ldlm_lib.c:2874:target_recovery_thread()) too long recovery - read logs
[ 5853.590403] LustreError: dumping log to /tmp/lustre-log.1713302203.1455
[ 5859.491239] Lustre: DEBUG MARKER: oleg354-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 5860.072961] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 5864.065137] Lustre: Modifying parameter general.timeout in log params
[ 5866.902945] Lustre: DEBUG MARKER: == recovery-small test 157: eviction during mmaped i/o === 17:16:55 (1713302215)
[ 5868.364701] Lustre: 3036:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 81d849c0-283b-4580-a2ee-5a01b818a6d8 at administrative request
[ 5868.371790] Lustre: 3036:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 1 previous similar message
[ 5873.126972] Lustre: DEBUG MARKER: == recovery-small test complete, duration 5773 sec ======= 17:17:02 (1713302222)
[ 5960.498367] Lustre: server umount lustre-MDT0000 complete
[ 5963.561081] LustreError: 6920:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713302313 with bad export cookie 6968476881349272906
[ 5963.567431] LustreError: 6920:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 4 previous similar messages
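Test 156 is about grant accounting: tot_granted is the write space the OST has promised to clients, and an eviction that fails to return the victim's grant leaves the counter permanently high. The counter is exported per target and can be sampled before and after the eviction; obdfilter is the usual parameter prefix on ldiskfs OSTs, but treat the exact path as an assumption:

  # grant promised by OST0000; compare before and after evicting the client
  lctl get_param obdfilter.lustre-OST0000.tot_granted
  # related counters for cross-checking the accounting
  lctl get_param obdfilter.lustre-OST0000.tot_dirty obdfilter.lustre-OST0000.tot_pending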
[ 5963.725129] Lustre: server umount lustre-MDT0001 complete
[ 5976.898014] Lustre: server umount lustre-OST0000 complete
[ 5990.003189] Lustre: server umount lustre-OST0001 complete
[ 5992.300519] device-mapper: core: cleaned up
[ 5995.266109] Lustre: DEBUG MARKER: oleg354-server.virtnet: executing unload_modules_local
[ 5996.041926] Key type lgssc unregistered
[ 5996.131328] LNet: 6324:0:(lib-ptl.c:966:lnet_clear_lazy_portal()) Active lazy portal 0 on exit
[ 5996.136560] LNet: Removed LNI 192.168.203.154@tcp