[ 0.000000] Initializing cgroup subsys cpuset [ 0.000000] Initializing cgroup subsys cpu [ 0.000000] Initializing cgroup subsys cpuacct [ 0.000000] Linux version 3.10.0-7.9-debug (green@centos7-base) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) ) #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 0.000000] Command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0 [ 0.000000] e820: BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffcdfff] usable [ 0.000000] BIOS-e820: [mem 0x00000000bffce000-0x00000000bfffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000013edfffff] usable [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] SMBIOS 2.8 present. [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-1.fc38 04/01/2014 [ 0.000000] Hypervisor detected: KVM [ 0.000000] e820: last_pfn = 0x13ee00 max_arch_pfn = 0x400000000 [ 0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC [ 0.000000] e820: last_pfn = 0xbffce max_arch_pfn = 0x400000000 [ 0.000000] found SMP MP-table at [mem 0x000f5b30-0x000f5b3f] mapped at [ffffffffff200b30] [ 0.000000] Using GB pages for direct mapping [ 0.000000] RAMDISK: [mem 0xbc2e2000-0xbffbffff] [ 0.000000] Early table checksum verification disabled [ 0.000000] ACPI: RSDP 00000000000f5950 00014 (v00 BOCHS ) [ 0.000000] ACPI: RSDT 00000000bffe1bb7 00034 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: FACP 00000000bffe1a53 00074 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: DSDT 00000000bffe0040 01A13 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: FACS 00000000bffe0000 00040 [ 0.000000] ACPI: APIC 00000000bffe1ac7 00090 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: HPET 00000000bffe1b57 00038 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] ACPI: WAET 00000000bffe1b8f 00028 (v01 BOCHS BXPC 00000001 BXPC 00000001) [ 0.000000] No NUMA configuration found [ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000013edfffff] [ 0.000000] NODE_DATA(0) allocated [mem 0x13e5e3000-0x13e609fff] [ 0.000000] Reserving 128MB of memory at 768MB for crashkernel (System RAM: 4077MB) [ 0.000000] kvm-clock: cpu 0, msr 1:3e592001, primary cpu clock [ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00 [ 0.000000] kvm-clock: using sched offset of 340659026 cycles [ 0.000000] Zone ranges: [ 0.000000] DMA [mem 0x00001000-0x00ffffff] [ 0.000000] DMA32 [mem 0x01000000-0xffffffff] [ 0.000000] Normal [mem 0x100000000-0x13edfffff] [ 0.000000] Movable zone start for each node [ 0.000000] Early memory node ranges [ 0.000000] node 0: [mem 0x00001000-0x0009efff] [ 0.000000] node 0: [mem 0x00100000-0xbffcdfff] [ 0.000000] node 0: [mem 0x100000000-0x13edfffff] [ 0.000000] Initmem setup node 0 [mem 0x00001000-0x13edfffff] [ 0.000000] ACPI: PM-Timer IO Port: 0x608 [ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled) [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled) [ 
0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled) [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0]) [ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) [ 0.000000] Using ACPI (MADT) for SMP configuration information [ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs [ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff] [ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff] [ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff] [ 0.000000] PM: Registered nosave memory: [mem 0xbffce000-0xbfffffff] [ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff] [ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff] [ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff] [ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff] [ 0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices [ 0.000000] Booting paravirtualized kernel on KVM [ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 [ 0.000000] percpu: Embedded 38 pages/cpu s115176 r8192 d32280 u524288 [ 0.000000] KVM setup async PF for cpu 0 [ 0.000000] kvm-stealtime: cpu 0, msr 13e2135c0 [ 0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes) [ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1027487 [ 0.000000] Policy zone: Normal [ 0.000000] Kernel command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0 [ 0.000000] audit: disabled (until reboot) [ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes) [ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100 [ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form [ 0.000000] Memory: 3820268k/5224448k available (8172k kernel code, 1049168k absent, 355012k reserved, 5773k data, 2532k init) [ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 [ 0.000000] Hierarchical RCU implementation. [ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=4. [ 0.000000] Offload RCU callbacks from all CPUs [ 0.000000] Offload RCU callbacks from CPUs: 0-3. [ 0.000000] NR_IRQS:327936 nr_irqs:456 0 [ 0.000000] Console: colour *CGA 80x25 [ 0.000000] console [ttyS1] enabled [ 0.000000] allocated 25165824 bytes of page_cgroup [ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups [ 0.000000] kmemleak: Kernel memory leak detector disabled [ 0.000000] tsc: Detected 2399.998 MHz processor [ 0.430847] Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998) [ 0.433559] pid_max: default: 32768 minimum: 301 [ 0.435056] Security Framework initialized [ 0.436436] SELinux: Initializing. 
[ 0.439506] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes) [ 0.443696] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes) [ 0.446578] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes) [ 0.448777] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes) [ 0.451957] Initializing cgroup subsys memory [ 0.453015] Initializing cgroup subsys devices [ 0.454517] Initializing cgroup subsys freezer [ 0.455783] Initializing cgroup subsys net_cls [ 0.456916] Initializing cgroup subsys blkio [ 0.458182] Initializing cgroup subsys perf_event [ 0.459879] Initializing cgroup subsys hugetlb [ 0.461232] Initializing cgroup subsys pids [ 0.462621] Initializing cgroup subsys net_prio [ 0.464278] x86/cpu: User Mode Instruction Prevention (UMIP) activated [ 0.467255] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 [ 0.468876] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0 [ 0.470596] tlb_flushall_shift: 6 [ 0.471685] FEATURE SPEC_CTRL Present [ 0.472806] FEATURE IBPB_SUPPORT Present [ 0.474235] Spectre V2 : Enabling Indirect Branch Prediction Barrier [ 0.475966] Spectre V2 : Vulnerable [ 0.477259] Speculative Store Bypass: Vulnerable [ 0.479480] debug: unmapping init [mem 0xffffffff82019000-0xffffffff8201ffff] [ 0.487590] ACPI: Core revision 20130517 [ 0.490681] ACPI: All ACPI Tables successfully acquired [ 0.492647] ftrace: allocating 30294 entries in 119 pages [ 0.550945] Enabling x2apic [ 0.552284] Enabled x2apic [ 0.553378] Switched APIC routing to physical x2apic. [ 0.557252] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 [ 0.559563] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (fam: 06, model: 3e, stepping: 04) [ 0.562866] Performance Events: IvyBridge events, full-width counters, Intel PMU driver. [ 0.565421] ... version: 2 [ 0.566569] ... bit width: 48 [ 0.567799] ... generic registers: 4 [ 0.569138] ... value mask: 0000ffffffffffff [ 0.570903] ... max period: 00007fffffffffff [ 0.572468] ... fixed-purpose events: 3 [ 0.573881] ... 
event mask: 000000070000000f [ 0.575646] KVM setup paravirtual spinlock [ 0.579193] smpboot: Booting Node 0, Processors #1[ 0.581049] kvm-clock: cpu 1, msr 1:3e592041, secondary cpu clock [ 0.585534] KVM setup async PF for cpu 1 [ 0.586948] kvm-stealtime: cpu 1, msr 13e2935c0 #2[ 0.591711] kvm-clock: cpu 2, msr 1:3e592081, secondary cpu clock [ 0.594620] KVM setup async PF for cpu 2 [ 0.595235] kvm-clock: cpu 3, msr 1:3e5920c1, secondary cpu clock #3 OK [ 0.599395] kvm-stealtime: cpu 2, msr 13e3135c0 [ 0.602324] KVM setup async PF for cpu 3 [ 0.603295] Brought up 4 CPUs [ 0.603297] smpboot: Max logical packages: 1 [ 0.603299] smpboot: Total of 4 processors activated (19199.98 BogoMIPS) [ 0.609115] kvm-stealtime: cpu 3, msr 13e3935c0 [ 0.612537] devtmpfs: initialized [ 0.614091] x86/mm: Memory block size: 128MB [ 0.618984] EVM: security.selinux [ 0.620154] EVM: security.ima [ 0.621189] EVM: security.capability [ 0.625032] atomic64 test passed for x86-64 platform with CX8 and with SSE [ 0.627751] NET: Registered protocol family 16 [ 0.629706] cpuidle: using governor haltpoll [ 0.632036] ACPI: bus type PCI registered [ 0.633385] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 [ 0.635638] PCI: Using configuration type 1 for base access [ 0.637675] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on [ 0.647984] ACPI: Added _OSI(Module Device) [ 0.650253] ACPI: Added _OSI(Processor Device) [ 0.652780] ACPI: Added _OSI(3.0 _SCP Extensions) [ 0.655054] ACPI: Added _OSI(Processor Aggregator Device) [ 0.656855] ACPI: Added _OSI(Linux-Dell-Video) [ 0.662281] ACPI: Interpreter enabled [ 0.664106] ACPI: (supports S0 S3 S4 S5) [ 0.665214] ACPI: Using IOAPIC for interrupt routing [ 0.667250] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 0.670782] ACPI: Enabled 2 GPEs in block 00 to 0F [ 0.679437] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) [ 0.682296] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI] [ 0.685255] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM [ 0.687935] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
[ 0.692965] acpiphp: Slot [2] registered [ 0.694728] acpiphp: Slot [3] registered [ 0.701100] acpiphp: Slot [4] registered [ 0.704863] acpiphp: Slot [5] registered [ 0.707704] acpiphp: Slot [6] registered [ 0.709594] acpiphp: Slot [7] registered [ 0.713755] acpiphp: Slot [8] registered [ 0.717681] acpiphp: Slot [9] registered [ 0.719376] acpiphp: Slot [10] registered [ 0.721898] acpiphp: Slot [11] registered [ 0.723217] acpiphp: Slot [12] registered [ 0.725811] acpiphp: Slot [13] registered [ 0.728236] acpiphp: Slot [14] registered [ 0.730157] acpiphp: Slot [15] registered [ 0.731425] acpiphp: Slot [16] registered [ 0.734217] acpiphp: Slot [17] registered [ 0.736098] acpiphp: Slot [18] registered [ 0.737372] acpiphp: Slot [19] registered [ 0.739493] acpiphp: Slot [20] registered [ 0.741487] acpiphp: Slot [21] registered [ 0.742960] acpiphp: Slot [22] registered [ 0.744370] acpiphp: Slot [23] registered [ 0.745854] acpiphp: Slot [24] registered [ 0.747260] acpiphp: Slot [25] registered [ 0.749896] acpiphp: Slot [26] registered [ 0.752552] acpiphp: Slot [27] registered [ 0.754174] acpiphp: Slot [28] registered [ 0.756041] acpiphp: Slot [29] registered [ 0.758241] acpiphp: Slot [30] registered [ 0.760007] acpiphp: Slot [31] registered [ 0.761975] PCI host bridge to bus 0000:00 [ 0.764189] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] [ 0.767324] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] [ 0.769786] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] [ 0.772065] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] [ 0.777995] pci_bus 0000:00: root bus resource [mem 0x140000000-0x1bfffffff window] [ 0.781136] pci_bus 0000:00: root bus resource [bus 00-ff] [ 0.805336] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] [ 0.809256] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] [ 0.813414] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] [ 0.816475] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] [ 0.822062] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI [ 0.824393] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB [ 1.047069] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11) [ 1.049294] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11) [ 1.052669] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11) [ 1.055604] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11) [ 1.058084] ACPI: PCI Interrupt Link [LNKS] (IRQs *9) [ 1.061737] vgaarb: loaded [ 1.062996] SCSI subsystem initialized [ 1.064960] ACPI: bus type USB registered [ 1.067627] usbcore: registered new interface driver usbfs [ 1.071357] usbcore: registered new interface driver hub [ 1.074147] usbcore: registered new device driver usb [ 1.079960] PCI: Using ACPI for IRQ routing [ 1.082193] NetLabel: Initializing [ 1.083517] NetLabel: domain hash size = 128 [ 1.085108] NetLabel: protocols = UNLABELED CIPSOv4 [ 1.086697] NetLabel: unlabeled traffic allowed by default [ 1.089927] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 [ 1.091534] hpet0: 3 comparators, 64-bit 100.000000 MHz counter [ 1.096975] amd_nb: Cannot enumerate AMD northbridges [ 1.099564] Switched to clocksource kvm-clock [ 1.126195] pnp: PnP ACPI init [ 1.127499] ACPI: bus type PNP registered [ 1.130241] pnp: PnP ACPI: found 6 devices [ 1.131868] ACPI: bus type PNP unregistered [ 1.148801] NET: Registered protocol family 2 [ 1.151277] TCP established hash table entries: 32768 (order: 6, 262144 bytes) [ 1.154696] TCP bind hash table entries: 32768 
(order: 8, 1048576 bytes) [ 1.158015] TCP: Hash tables configured (established 32768 bind 32768) [ 1.161048] TCP: reno registered [ 1.162498] UDP hash table entries: 2048 (order: 5, 196608 bytes) [ 1.164852] UDP-Lite hash table entries: 2048 (order: 5, 196608 bytes) [ 1.168138] NET: Registered protocol family 1 [ 1.171715] RPC: Registered named UNIX socket transport module. [ 1.173773] RPC: Registered udp transport module. [ 1.175202] RPC: Registered tcp transport module. [ 1.176997] RPC: Registered tcp NFSv4.1 backchannel transport module. [ 1.179357] pci 0000:00:00.0: Limiting direct PCI/PCI transfers [ 1.181510] pci 0000:00:01.0: PIIX3: Enabling Passive Release [ 1.183524] pci 0000:00:01.0: Activating ISA DMA hang workarounds [ 1.187565] Unpacking initramfs... [ 3.788050] debug: unmapping init [mem 0xffff8800bc2e2000-0xffff8800bffbffff] [ 3.797334] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 3.801985] software IO TLB [mem 0xb82e2000-0xbc2e2000] (64MB) mapped at [ffff8800b82e2000-ffff8800bc2e1fff] [ 3.807420] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer [ 3.813791] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules [ 3.818131] RAPL PMU: hw unit of domain package 2^-0 Joules [ 3.821615] RAPL PMU: hw unit of domain dram 2^-0 Joules [ 3.831336] cryptomgr_test (52) used greatest stack depth: 14480 bytes left [ 3.832665] futex hash table entries: 1024 (order: 4, 65536 bytes) [ 3.832714] Initialise system trusted keyring [ 3.886743] HugeTLB registered 1 GB page size, pre-allocated 0 pages [ 3.893905] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 3.908399] zpool: loaded [ 3.909632] zbud: loaded [ 3.911286] VFS: Disk quotas dquot_6.6.0 [ 3.912712] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 3.919550] NFS: Registering the id_resolver key type [ 3.923227] Key type id_resolver registered [ 3.927947] Key type id_legacy registered [ 3.929370] nfs4filelayout_init: NFSv4 File Layout Driver Registering... [ 3.933535] Key type big_key registered [ 3.943999] cryptomgr_test (58) used greatest stack depth: 14048 bytes left [ 3.954216] cryptomgr_test (60) used greatest stack depth: 13664 bytes left [ 3.959678] NET: Registered protocol family 38 [ 3.964512] Key type asymmetric registered [ 3.969528] Asymmetric key parser 'x509' registered [ 3.973983] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) [ 3.982639] io scheduler noop registered [ 3.985875] io scheduler deadline registered (default) [ 3.989996] io scheduler cfq registered [ 3.991633] io scheduler mq-deadline registered [ 3.993874] io scheduler kyber registered [ 4.001250] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 4.003111] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 4.008195] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [ 4.015501] ACPI: Power Button [PWRF] [ 4.019954] GHES: HEST is not enabled! 
[ 4.128953] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10 [ 4.219235] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 11 [ 4.481669] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11 [ 4.640267] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10 [ 4.799036] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 4.827411] 00:03: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 4.829538] tsc: Refined TSC clocksource calibration: 2399.954 MHz [ 4.859517] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 4.864047] Non-volatile memory driver v1.3 [ 4.865419] Linux agpgart interface v0.103 [ 4.867185] crash memory driver: version 1.1 [ 4.869170] nbd: registered device at major 43 [ 4.884038] virtio_blk virtio1: [vda] 67344 512-byte logical blocks (34.4 MB/32.8 MiB) [ 4.910743] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB) [ 4.926062] virtio_blk virtio3: [vdc] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB) [ 4.942939] virtio_blk virtio4: [vdd] 5120000 512-byte logical blocks (2.62 GB/2.44 GiB) [ 4.963444] virtio_blk virtio5: [vde] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB) [ 4.993828] virtio_blk virtio6: [vdf] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB) [ 5.004758] rdac: device handler registered [ 5.007265] hp_sw: device handler registered [ 5.009406] emc: device handler registered [ 5.011200] libphy: Fixed MDIO Bus: probed [ 5.020798] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 5.022926] ehci-pci: EHCI PCI platform driver [ 5.024735] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 5.026683] ohci-pci: OHCI PCI platform driver [ 5.028227] uhci_hcd: USB Universal Host Controller Interface driver [ 5.030755] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 [ 5.034604] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 5.036208] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 5.039279] mousedev: PS/2 mouse device common for all mice [ 5.043800] rtc_cmos 00:05: RTC can wake from S4 [ 5.046057] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 [ 5.046140] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0 [ 5.046470] rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs [ 5.058722] hidraw: raw HID events driver (C) Jiri Kosina [ 5.059005] usbcore: registered new interface driver usbhid [ 5.059006] usbhid: USB HID core driver [ 5.059075] drop_monitor: Initializing network drop monitor service [ 5.059207] Netfilter messages via NETLINK v0.30. [ 5.059287] TCP: cubic registered [ 5.059294] Initializing XFRM netlink socket [ 5.059707] NET: Registered protocol family 10 [ 5.060766] NET: Registered protocol family 17 [ 5.060808] Key type dns_resolver registered [ 5.062840] mce: Using 10 MCE banks [ 5.063543] Loading compiled-in X.509 certificates [ 5.064631] Loaded X.509 cert 'Magrathea: Glacier signing key: e34d0e1b7fcf5b414cce75d36d8482945c781ed6' [ 5.064668] registered taskstats version 1 [ 5.067364] modprobe (72) used greatest stack depth: 13456 bytes left [ 5.074289] Key type trusted registered [ 5.081694] Key type encrypted registered [ 5.081750] IMA: No TPM chip found, activating TPM-bypass! (rc=-19) [ 5.088309] BERT: Boot Error Record Table support is disabled. Enable it by using bert_enable as kernel parameter. 
[ 5.089872] rtc_cmos 00:05: setting system clock to 2024-04-18 08:38:47 UTC (1713429527) [ 5.103923] debug: unmapping init [mem 0xffffffff81da0000-0xffffffff82018fff] [ 5.108615] Write protecting the kernel read-only data: 12288k [ 5.111310] debug: unmapping init [mem 0xffff8800017fe000-0xffff8800017fffff] [ 5.113924] debug: unmapping init [mem 0xffff880001b9b000-0xffff880001bfffff] [ 5.123065] random: systemd: uninitialized urandom read (16 bytes read) [ 5.128149] random: systemd: uninitialized urandom read (16 bytes read) [ 5.130490] random: systemd: uninitialized urandom read (16 bytes read) [ 5.135109] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) [ 5.141842] systemd[1]: Detected virtualization kvm. [ 5.143645] systemd[1]: Detected architecture x86-64. [ 5.145358] systemd[1]: Running in initial RAM disk. Welcome to CentOS Linux 7 (Core) dracut-033-572.el7 (Initramfs)! [ 5.151190] systemd[1]: No hostname configured. [ 5.152554] systemd[1]: Set hostname to . [ 5.154566] random: systemd: uninitialized urandom read (16 bytes read) [ 5.156947] systemd[1]: Initializing machine ID from random generator. [ 5.243312] dracut-rootfs-g (86) used greatest stack depth: 13264 bytes left [ 5.250030] random: systemd: uninitialized urandom read (16 bytes read) [ 5.253414] random: systemd: uninitialized urandom read (16 bytes read) [ 5.255428] random: systemd: uninitialized urandom read (16 bytes read) [ 5.257430] random: systemd: uninitialized urandom read (16 bytes read) [ 5.262120] random: systemd: uninitialized urandom read (16 bytes read) [ 5.265312] random: systemd: uninitialized urandom read (16 bytes read) [ 5.278382] systemd[1]: Reached target Timers. [ OK ] Reached target Timers. [ 5.288004] systemd[1]: Reached target Swap. [ OK ] Reached target Swap. [ 5.296775] systemd[1]: Created slice Root Slice. [ OK ] Created slice Root Slice. [ 5.302836] systemd[1]: Listening on udev Control Socket. [ OK ] Listening on udev Control Socket. [ 5.313109] systemd[1]: Listening on udev Kernel Socket. [ OK ] Listening on udev Kernel Socket. [ 5.319409] systemd[1]: Listening on Journal Socket. [ OK ] Listening on Journal Socket. [ 5.324669] systemd[1]: Reached target Local File Systems. [ OK ] Reached target Local File Systems. [ 5.335173] systemd[1]: Reached target Sockets. [ OK ] Reached target Sockets. [ 5.339042] systemd[1]: Created slice System Slice. [ OK ] Created slice System Slice. [ 5.358192] systemd[1]: Starting Setup Virtual Console... Starting Setup Virtual Console... [ 5.370935] systemd[1]: Starting Journal Service... Starting Journal Service... [ 5.375963] systemd[1]: Starting Create list of required static device nodes for the current kernel... Starting Create list of required st... nodes for the current kernel... [ 5.383205] systemd[1]: Reached target Slices. [ OK ] Reached target Slices. [ 5.390351] systemd[1]: Starting dracut cmdline hook... Starting dracut cmdline hook... [ 5.402867] systemd[1]: Starting Load Kernel Modules... Starting Load Kernel Modules... [ 5.413834] systemd[1]: Started Setup Virtual Console. [ OK ] Started Setup Virtual Console. [ 5.433254] systemd[1]: Started Create list of required static device nodes for the current kernel. [ OK ] Started Create list of required sta...ce nodes for the current kernel. [ 5.447029] systemd[1]: Started Load Kernel Modules. [ OK ] Started Load Kernel Modules. 
[ 5.451928] systemd[1]: Started Journal Service. [ OK ] Started Journal Service. Starting Apply Kernel Variables... Starting Create Static Device Nodes in /dev... [ OK ] Started Apply Kernel Variables. [ OK ] Started Create Static Device Nodes in /dev. [ 5.779516] dracut-cmdline (105) used greatest stack depth: 13200 bytes left [ OK ] Started dracut cmdline hook. Starting dracut pre-udev hook... [ 6.046419] random: fast init done [ 6.058721] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 [ OK ] Started dracut pre-udev hook. Starting udev Kernel Device Manager... [ OK ] Started udev Kernel Device Manager. Starting dracut pre-trigger hook... [ OK ] Started dracut pre-trigger hook. Starting udev Coldplug all Devices... Mounting Configuration File System... [ OK ] Started udev Coldplug all Devices. [ OK ] Mounted Configuration File System. Starting dracut initqueue hook... Starting Show Plymouth Boot Screen... [ OK ] Reached target System Initialization. [ 6.592282] scsi host0: ata_piix [ 6.593824] scsi host1: ata_piix [ 6.610576] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc320 irq 14 [ 6.615915] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc328 irq 15 [ OK ] Started Show Plymouth Boot Screen. [ OK ] Reached target Paths. [ OK ] Started Forward Password Requests to Plymouth Directory Watch. [ OK ] Reached target Basic System. [ 6.900175] ip (323) used greatest stack depth: 13080 bytes left [ 7.024109] ip (346) used greatest stack depth: 12336 bytes left [ 8.685992] dracut-initqueue[278]: RTNETLINK answers: File exists [ 9.618868] dracut-initqueue[278]: bs=4096, sz=32212254720 bytes [ OK ] Started dracut initqueue hook. [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. Mounting /sysroot... [ OK ] Reached target Initrd Root File System. Starting Reload Configuration from the Real Root... [ 13.524775] EXT4-fs (nbd0): mounted filesystem with ordered data mode. Opts: (null) [ OK ] Mounted /sysroot. [ OK ] Started Reload Configuration from the Real Root. [ OK ] Reached target Initrd File Systems. [ OK ] Reached target Initrd Default Target. Starting dracut pre-pivot and cleanup hook... [ OK ] Started dracut pre-pivot and cleanup hook. Starting Cleaning Up and Shutting Down Daemons... Starting Plymouth switch root service... [ OK ] Stopped target Timers. [ OK ] Stopped dracut pre-pivot and cleanup hook. [ OK ] Stopped target Initrd Default Target. [ OK ] Stopped target Basic System. [ OK ] Stopped target Sockets. [ OK ] Stopped target Paths. [ OK ] Stopped target Slices. [ OK ] Stopped target Remote File Systems. [ OK ] Stopped target System Initialization. [ OK ] Stopped Apply Kernel Variables. [ OK ] Stopped target Local File Systems. [ OK ] Stopped target Swap. [ OK ] Stopped Load Kernel Modules. [ OK ] Stopped target Remote File Systems (Pre). [ OK ] Stopped dracut initqueue hook. [ OK ] Stopped udev Coldplug all Devices. [ OK ] Stopped dracut pre-trigger hook. Stopping udev Kernel Device Manager... [ OK ] Stopped udev Kernel Device Manager. [ OK ] Started Cleaning Up and Shutting Down Daemons. [ OK ] Stopped dracut pre-udev hook. [ OK ] Stopped dracut cmdline hook. [ OK ] Stopped Create Static Device Nodes in /dev. [ OK ] Stopped Create list of required sta...ce nodes for the current kernel. [ OK ] Closed udev Control Socket. [ OK ] Closed udev Kernel Socket. Starting Cleanup udevd DB... [ OK ] Started Cleanup udevd DB. [ OK ] Reached target Switch Root. [ OK ] Started Plymouth switch root service. 
Starting Switch Root... [ 14.343608] systemd-journald[102]: Received SIGTERM from PID 1 (systemd). [ 14.872973] SELinux: Disabled at runtime. [ 15.017567] ip_tables: (C) 2000-2006 Netfilter Core Team [ 15.023003] systemd[1]: Inserted module 'ip_tables' Welcome to CentOS Linux 7 (Core)! [ OK ] Stopped Switch Root. [ OK ] Stopped Journal Service. Starting Journal Service... Starting Create list of required st... nodes for the current kernel... Mounting Debug File System... [ OK ] Created slice User and Session Slice. Mounting POSIX Message Queue File System... [ OK ] Reached target rpc_pipefs.target. [ OK ] Reached target Slices. [ OK ] Stopped target Switch Root. [ OK ] Stopped target Initrd File Systems. [ OK ] Listening on udev Kernel Socket. [ OK ] Set up automount Arbitrary Executab...ats File System Automount Point. [ OK ] Reached target Local Encrypted Volumes. [ OK ] Created slice system-selinux\x2dpol...grate\x2dlocal\x2dchanges.slice. [ OK ] Stopped target Initrd Root File System. [ OK ] Listening on Delayed Shutdown Socket. [ OK ] Created slice system-getty.slice. [ OK ] Listening on udev Control Socket. Starting udev Coldplug all Devices... [ OK ] Created slice system-serial\x2dgetty.slice. [ OK ] Listening on /dev/initctl Compatibility Named Pipe. Mounting Huge Pages File System... [ OK ] Started Forward Password Requests to Wall Directory Watch. Starting Read and set NIS domainname from /etc/sysconfig/network... Starting Remount Root and Kernel File Systems... Starting Set Up Additional Binary Formats... Starting Load Kernel Modules... [ OK ] Started Create list of required sta...ce nodes for the current kernel. Mounting Arbitrary Executable File Formats File System... Starting Create Static Device Nodes in /dev... [ OK ] Mounted Huge Pages File System. [ OK ] Mounted POSIX Message Queue File System. [ OK ] Mounted Debug File System. [ OK ] Started Journal Service. [ OK ] Started Load Kernel Modules. [ OK ] Started udev Coldplug all Devices. Starting Apply Kernel Variables... [ OK ] Started Read and set NIS domainname from /etc/sysconfig/network. [ OK ] Mounted Arbitrary Executable File Formats File System. [ OK ] Started Apply Kernel Variables. [ OK ] Started Create Static Device Nodes in /dev. Starting udev Kernel Device Manager... [ OK ] Started Set Up Additional Binary Formats. [FAILED] Failed to start Remount Root and Kernel File Systems. See 'systemctl status systemd-remount-fs.service' for details. Starting Flush Journal to Persistent Storage... [ OK ] Reached target Local File Systems (Pre). Mounting /mnt... Starting Configure read-only root support... [ OK ] Mounted /mnt. [ 16.325887] systemd-journald[570]: Received request to flush runtime journal from PID 1 [ OK ] Started Flush Journal to Persistent Storage. [ OK ] Started udev Kernel Device Manager. [ 16.710637] input: PC Speaker as /devices/platform/pcspkr/input/input3 [ 16.791671] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 [ OK ] Found device /dev/ttyS1. [ OK ] Found device /dev/ttyS0. [ 16.904044] cryptd: max_cpu_qlen set to 1000 [ OK ] Found device /dev/vda. Mounting /home/green/git/lustre-release... [ OK ] Found device /dev/disk/by-label/SWAP. Activating swap /dev/disk/by-label/SWAP... [ 17.061392] AVX version of gcm_enc/dec engaged. [ 17.065191] AES CTR mode by8 optimization enabled [ 17.094235] Adding 1048572k swap on /dev/vdb. Priority:-2 extents:1 across:1048572k FS [ OK ] Activated swap /dev/disk/by-label/SWAP. [ OK ] Reached target Swap. 
[ 17.141123] squashfs: version 4.0 (2009/01/31) Phillip Lougher [ OK ] Mounted /home/green/git/lustre-release. [ 17.185382] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni) [ 17.195203] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni) [ 17.500780] EDAC MC: Ver: 3.0.0 [ 17.530617] EDAC sbridge: Ver: 1.1.2 [ 21.169538] mount.nfs (775) used greatest stack depth: 10416 bytes left [ OK ] Started Configure read-only root support. [ OK ] Reached target Local File Systems. Starting Tell Plymouth To Write Out Runtime Data... Starting Preprocess NFS configuration... Starting Mark the need to relabel after reboot... Starting Rebuild Journal Catalog... Starting Load/Save Random Seed... Starting Create Volatile Files and Directories... [ OK ] Started Mark the need to relabel after reboot. [FAILED] Failed to start Create Volatile Files and Directories. See 'systemctl status systemd-tmpfiles-setup.service' for details. Starting Update UTMP about System Boot/Shutdown... [ OK ] Started Tell Plymouth To Write Out Runtime Data. [ OK ] Started Load/Save Random Seed. [ OK ] Started Preprocess NFS configuration. [FAILED] Failed to start Rebuild Journal Catalog. See 'systemctl status systemd-journal-catalog-update.service' for details. [ OK ] Started Update UTMP about System Boot/Shutdown. Starting Update is Completed... [ OK ] Started Update is Completed. [ OK ] Reached target System Initialization. [ OK ] Listening on D-Bus System Message Bus Socket. [ OK ] Listening on RPCbind Server Activation Socket. [ OK ] Reached target Sockets. [ OK ] Started Flexible branding. [ OK ] Reached target Paths. [ OK ] Started Daily Cleanup of Temporary Directories. [ OK ] Reached target Timers. [ OK ] Reached target Basic System. [ OK ] Started D-Bus System Message Bus. Starting GSSAPI Proxy Daemon... Starting Dump dmesg to /var/log/dmesg... Starting Network Manager... Starting Login Service... [ OK ] Started Dump dmesg to /var/log/dmesg. [ OK ] Started Login Service. [ OK ] Started GSSAPI Proxy Daemon. [ OK ] Reached target NFS client services. [ OK ] Reached target Remote File Systems (Pre). [ OK ] Reached target Remote File Systems. Starting Permit User Sessions... [ OK ] Started Permit User Sessions. [ OK ] Started Network Manager. Starting Network Manager Wait Online... [ OK ] Reached target Network. Starting OpenSSH server daemon... Starting /etc/rc.d/rc.local Compatibility... Starting Hostname Service... [ OK ] Started OpenSSH server daemon. [ OK ] Started /etc/rc.d/rc.local Compatibility. [ OK ] Started Hostname Service. Starting Network Manager Script Dispatcher Service... Starting Wait for Plymouth Boot Screen to Quit... Starting Terminate Plymouth Boot Screen... [ OK ] Started Network Manager Script Dispatcher Service. CentOS Linux 7 (Core) Kernel 3.10.0-7.9-debug on an x86_64 oleg404-server login: [ 34.692795] libcfs: loading out-of-tree module taints kernel. 
[ 34.695216] libcfs: module verification failed: signature and/or required key missing - tainting kernel [ 34.731035] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_hostid [ 39.898770] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing load_modules_local [ 40.077492] alg: No test for adler32 (adler32-zlib) [ 40.828571] libcfs: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 1 [ 40.961727] Lustre: Lustre: Build Version: 2.15.62_22_gf2868d1 [ 41.135158] LNet: Added LNI 192.168.204.104@tcp [8/256/0/180] [ 41.136930] LNet: Accept secure, port 988 [ 42.685652] Key type lgssc registered [ 43.022987] Lustre: Echo OBD driver; http://www.lustre.org/ [ 43.508853] icp: module license 'CDDL' taints kernel. [ 43.510475] Disabling lock debugging due to kernel taint [ 46.189059] ZFS: Loaded module v0.8.6-1, ZFS pool version 5000, ZFS filesystem version 5 [ 48.876767] vdc: vdc1 vdc9 [ 53.024327] vde: vde1 vde9 [ 57.131531] vdf: vdf1 vdf9 [ 63.465578] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing load_modules_local [ 66.073760] Lustre: lustre-MDT0000: mounting server target with '-t lustre' deprecated, use '-t lustre_tgt' [ 67.226436] Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000 [ 67.319470] Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space. [ 67.359331] Lustre: lustre-MDT0000: new disk, initializing [ 67.524758] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 67.552937] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400]:0:mdt [ 67.646378] mount.lustre (6618) used greatest stack depth: 10144 bytes left [ 68.735054] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 72.734891] random: crng init done [ 73.104111] Lustre: lustre-OST0000: new disk, initializing [ 73.107532] Lustre: srv-lustre-OST0000: No data found on store. Initialize space. [ 73.111069] Lustre: Skipped 1 previous similar message [ 73.143634] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 74.500411] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000240000400-0x0000000280000400]:0:ost [ 74.503046] Lustre: cli-lustre-OST0000-super: Allocated super-sequence [0x0000000240000400-0x0000000280000400]:0:ost] [ 74.550014] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x100000000 to 0x240000400 [ 74.897972] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 78.758061] Lustre: lustre-OST0001: new disk, initializing [ 78.760634] Lustre: srv-lustre-OST0001: No data found on store. Initialize space. 
[ 78.785829] Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180 [ 80.433063] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 82.869322] Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000280000400-0x00000002c0000400]:1:ost [ 82.871672] Lustre: cli-lustre-OST0001-super: Allocated super-sequence [0x0000000280000400-0x00000002c0000400]:1:ost] [ 82.908571] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x100010000 to 0x280000400 [ 85.839159] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 89.174129] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 94.998015] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing check_logdir /tmp/testlogs/ [ 96.119391] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing yml_node [ 97.487278] Lustre: DEBUG MARKER: Client: 2.15.62.22 [ 98.365996] Lustre: DEBUG MARKER: MDS: 2.15.62.22 [ 99.909350] Lustre: DEBUG MARKER: OSS: 2.15.62.22 [ 101.163802] Lustre: DEBUG MARKER: -----============= acceptance-small: recovery-small ============----- Thu Apr 18 04:40:23 EDT 2024 [ 104.264243] Lustre: DEBUG MARKER: excepting tests: 136 [ 104.908527] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing check_config_client /mnt/lustre [ 108.817567] Lustre: DEBUG MARKER: Using TIMEOUT=20 [ 109.672642] Lustre: Modifying parameter general.lod.*.mdt_hash in log params [ 110.296393] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 111.803883] Lustre: DEBUG MARKER: == recovery-small test 1: create, chmod, stat: drop req, drop rep ========================================================== 04:40:34 (1713429634) [ 112.067972] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 128.086309] Lustre: lustre-MDT0000: Client 7fb078a9-662f-4254-8dd6-83cc835d237f (at 192.168.204.4@tcp) reconnecting [ 128.730988] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 128.734618] LustreError: 6701:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88008e8d2680 x1796661112214656/t4294967298(0) o36->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:197/0 lens 520/448 e 0 to 0 dl 1713429662 ref 1 fl Interpret:/200/0 rc 0/0 job:'mcreate.0' uid:0 gid:0 [ 144.743409] Lustre: lustre-MDT0000: Client 7fb078a9-662f-4254-8dd6-83cc835d237f (at 192.168.204.4@tcp) reconnecting [ 144.754285] Lustre: 6701:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88008ea3ed80 x1796661112214656/t4294967298(0) o36->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:213/0 lens 520/2880 e 0 to 0 dl 1713429678 ref 1 fl Interpret:/202/0 rc 0/0 job:'mcreate.0' uid:0 gid:0 [ 145.344777] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 161.360919] Lustre: lustre-MDT0000: Client 7fb078a9-662f-4254-8dd6-83cc835d237f (at 192.168.204.4@tcp) reconnecting [ 162.008547] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 162.013702] LustreError: 9816:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88008eb66680 x1796661112216384/t4294967300(0) o36->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:230/0 lens 488/456 e 0 to 0 dl 1713429695 ref 1 fl Interpret:/200/0 rc 0/0 job:'tchmod.0' uid:0 gid:0 [ 178.028063] Lustre: lustre-MDT0000: Client 7fb078a9-662f-4254-8dd6-83cc835d237f (at 192.168.204.4@tcp) reconnecting [ 178.041775] Lustre: 6702:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88008e544a80 x1796661112216384/t4294967300(0) 
o36->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:246/0 lens 488/3152 e 0 to 0 dl 1713429711 ref 1 fl Interpret:/202/0 rc 0/0 job:'tchmod.0' uid:0 gid:0 [ 178.658910] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 194.672148] Lustre: lustre-MDT0000: Client 7fb078a9-662f-4254-8dd6-83cc835d237f (at 192.168.204.4@tcp) reconnecting [ 195.175089] Lustre: *** cfs_fail_loc=122, val=2147483648*** [ 195.176692] LustreError: 9816:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88008eb2e680 x1796661112217664/t0(0) o34->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:263/0 lens 472/464 e 0 to 0 dl 1713429728 ref 1 fl Interpret:/200/0 rc 0/0 job:'statone.0' uid:0 gid:0 [ 211.187586] Lustre: lustre-MDT0000: Client 7fb078a9-662f-4254-8dd6-83cc835d237f (at 192.168.204.4@tcp) reconnecting [ 214.697565] Lustre: DEBUG MARKER: == recovery-small test 4: open: drop req, drop rep ======= 04:42:17 (1713429737) [ 215.009292] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 231.025939] Lustre: lustre-MDT0000: Client 7fb078a9-662f-4254-8dd6-83cc835d237f (at 192.168.204.4@tcp) reconnecting [ 231.580071] Lustre: *** cfs_fail_loc=122, val=2147483648*** [ 231.582274] LustreError: 6705:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88008e1ec380 x1796661112220160/t4294967306(0) o35->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:299/0 lens 392/456 e 0 to 0 dl 1713429764 ref 1 fl Interpret:/200/0 rc 0/0 job:'cat.0' uid:0 gid:0 [ 247.583922] Lustre: 6705:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88008e1ef480 x1796661112220160/t4294967306(0) o35->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:315/0 lens 392/456 e 0 to 0 dl 1713429780 ref 1 fl Interpret:/202/0 rc 0/0 job:'cat.0' uid:0 gid:0 [ 251.038152] Lustre: DEBUG MARKER: == recovery-small test 5: rename: drop req, drop rep ===== 04:42:53 (1713429773) [ 251.355475] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 267.376310] Lustre: lustre-MDT0000: Client 7fb078a9-662f-4254-8dd6-83cc835d237f (at 192.168.204.4@tcp) reconnecting [ 267.380599] Lustre: Skipped 1 previous similar message [ 267.886841] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 267.888352] LustreError: 6716:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880130943480 x1796661112223104/t4294967310(0) o36->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:336/0 lens 552/456 e 0 to 0 dl 1713429801 ref 1 fl Interpret:/200/0 rc 0/0 job:'mv.0' uid:0 gid:0 [ 283.888497] Lustre: 6716:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88008e253800 x1796661112223104/t4294967310(0) o36->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:352/0 lens 552/2888 e 0 to 0 dl 1713429817 ref 1 fl Interpret:/202/0 rc 0/0 job:'mv.0' uid:0 gid:0 [ 287.653232] Lustre: DEBUG MARKER: == recovery-small test 6: link, unlink: drop req, drop rep ========================================================== 04:43:30 (1713429810) [ 287.953083] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 304.465808] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 304.467956] LustreError: 8221:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88008dcf1c00 x1796661112226496/t4294967315(0) o36->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:372/0 lens 512/440 e 0 to 0 dl 1713429837 ref 1 fl Interpret:/200/0 rc 0/0 job:'link.0' uid:0 gid:0 [ 320.466797] Lustre: 8221:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88008dcbed80 
x1796661112226496/t4294967315(0) o36->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:388/0 lens 512/440 e 0 to 0 dl 1713429853 ref 1 fl Interpret:/202/0 rc 0/0 job:'link.0' uid:0 gid:0 [ 320.978361] Lustre: *** cfs_fail_loc=123, val=2147483648*** [ 336.998574] Lustre: lustre-MDT0000: Client 7fb078a9-662f-4254-8dd6-83cc835d237f (at 192.168.204.4@tcp) reconnecting [ 337.003189] Lustre: Skipped 3 previous similar messages [ 337.558305] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 337.559757] LustreError: 6703:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8801391d0700 x1796661112228800/t4294967317(0) o36->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:405/0 lens 504/456 e 0 to 0 dl 1713429870 ref 1 fl Interpret:/200/0 rc 0/0 job:'unlink.0' uid:0 gid:0 [ 353.559841] Lustre: 6702:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880139bf0700 x1796661112228800/t4294967317(0) o36->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:421/0 lens 504/2888 e 0 to 0 dl 1713429886 ref 1 fl Interpret:/202/0 rc 0/0 job:'unlink.0' uid:0 gid:0 [ 357.463605] Lustre: DEBUG MARKER: == recovery-small test 8: touch: drop rep (bug 1423) ===== 04:44:39 (1713429879) [ 373.790503] Lustre: 9816:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8801325baa00 x1796661112230208/t4294967320(0) o36->7fb078a9-662f-4254-8dd6-83cc835d237f@192.168.204.4@tcp:442/0 lens 488/3152 e 0 to 0 dl 1713429907 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 377.061483] Lustre: DEBUG MARKER: == recovery-small test 9: pause bulk on OST (bug 1420) === 04:44:59 (1713429899) [ 377.575363] LustreError: 20739:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 sleeping for 5000ms [ 382.578585] LustreError: 20739:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 awake [ 385.979620] Lustre: DEBUG MARKER: == recovery-small test 10a: finish request on server after client eviction (bug 1521) ========================================================== 04:45:08 (1713429908) [ 402.054778] Lustre: 6702:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713429908/real 1713429908] req@ffff88008e2b2d80 x1796661117498560/t0(0) o104->lustre-MDT0000@192.168.204.4@tcp:15/16 lens 328/224 e 0 to 1 dl 1713429924 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 403.910602] Lustre: 8171:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713429910/real 1713429910] req@ffff880129e93800 x1796661117498816/t0(0) o104->lustre-OST0001@192.168.204.4@tcp:15/16 lens 328/224 e 0 to 1 dl 1713429926 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 403.920277] Lustre: 8171:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 418.064588] Lustre: 6702:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713429924/real 1713429924] req@ffff88008e2b2d80 x1796661117498560/t0(0) o104->lustre-MDT0000@192.168.204.4@tcp:15/16 lens 328/224 e 0 to 1 dl 1713429940 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 418.073608] Lustre: 6702:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 420.813606] Lustre: 21664:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713429927/real 1713429927] req@ffff880130941180 x1796661117499264/t0(0) 
o104->lustre-OST0000@192.168.204.4@tcp:15/16 lens 328/224 e 0 to 1 dl 1713429943 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 420.823729] Lustre: 21664:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 2 previous similar messages [ 426.075578] Lustre: mdt00_001: service thread pid 6702 was inactive for 40.020 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [ 426.081194] Pid: 6702, comm: mdt00_001 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 426.083213] Call Trace: [ 426.084085] [<0>] ptlrpc_set_wait+0x7cf/0x850 [ptlrpc] [ 426.085882] [<0>] ldlm_run_ast_work+0xe3/0x400 [ptlrpc] [ 426.088078] [<0>] ldlm_handle_conflict_lock+0x70/0x300 [ptlrpc] [ 426.090371] [<0>] ldlm_lock_enqueue+0x5c2/0xbb0 [ptlrpc] [ 426.092440] [<0>] ldlm_cli_enqueue_local+0x1ec/0x880 [ptlrpc] [ 426.094693] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [ 426.096889] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [ 426.098832] [<0>] mdt_object_stripes_lock+0x126/0x660 [mdt] [ 426.101189] [<0>] mdt_reint_setattr+0x73b/0x15f0 [mdt] [ 426.103043] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [ 426.104631] [<0>] mdt_reint_internal+0x74c/0xbc0 [mdt] [ 426.106354] [<0>] mdt_reint+0x67/0x150 [mdt] [ 426.107784] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 426.109355] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 426.111161] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 426.112793] [<0>] kthread+0xe4/0xf0 [ 426.113923] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 426.115307] [<0>] 0xfffffffffffffffe [ 427.995564] Lustre: ll_ost00_002: service thread pid 8171 was inactive for 40.085 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [ 427.995628] Pid: 8854, comm: ll_ost00_003 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 427.995629] Call Trace: [ 427.995705] [<0>] ptlrpc_set_wait+0x7cf/0x850 [ptlrpc] [ 427.995745] [<0>] ldlm_run_ast_work+0xe3/0x400 [ptlrpc] [ 427.995775] [<0>] ldlm_handle_conflict_lock+0x70/0x300 [ptlrpc] [ 427.995806] [<0>] ldlm_lock_enqueue+0x5c2/0xbb0 [ptlrpc] [ 427.995849] [<0>] ldlm_cli_enqueue_local+0x377/0x880 [ptlrpc] [ 427.995864] [<0>] ofd_destroy_by_fid+0x1d1/0x520 [ofd] [ 427.995869] [<0>] ofd_destroy_hdl+0x20c/0xae0 [ofd] [ 427.995919] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 427.995954] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 427.995988] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 427.995994] [<0>] kthread+0xe4/0xf0 [ 427.995997] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 427.996009] [<0>] 0xfffffffffffffffe [ 428.022313] Lustre: Skipped 1 previous similar message [ 428.023851] Pid: 8171, comm: ll_ost00_002 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 428.026974] Call Trace: [ 428.027834] [<0>] ptlrpc_set_wait+0x7cf/0x850 [ptlrpc] [ 428.029806] [<0>] ldlm_run_ast_work+0xe3/0x400 [ptlrpc] [ 428.031366] [<0>] ldlm_handle_conflict_lock+0x70/0x300 [ptlrpc] [ 428.033153] [<0>] ldlm_lock_enqueue+0x5c2/0xbb0 [ptlrpc] [ 428.034611] [<0>] ldlm_cli_enqueue_local+0x377/0x880 [ptlrpc] [ 428.036484] [<0>] ofd_destroy_by_fid+0x1d1/0x520 [ofd] [ 428.037827] [<0>] ofd_destroy_hdl+0x20c/0xae0 [ofd] [ 428.039255] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 428.040880] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 428.043093] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 428.044857] [<0>] kthread+0xe4/0xf0 [ 428.046469] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 
428.048229] [<0>] 0xfffffffffffffffe [ 428.891570] Lustre: ll_ost00_004: service thread pid 21664 was inactive for 40.078 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. [ 434.076576] Lustre: 6702:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713429940/real 1713429940] req@ffff88008e2b2d80 x1796661117498560/t0(0) o104->lustre-MDT0000@192.168.204.4@tcp:15/16 lens 328/224 e 0 to 1 dl 1713429956 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 450.085599] Lustre: 6702:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713429956/real 1713429956] req@ffff88008e2b2d80 x1796661117498560/t0(0) o104->lustre-MDT0000@192.168.204.4@tcp:15/16 lens 328/224 e 0 to 1 dl 1713429972 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 450.095410] Lustre: 6702:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 466.098600] Lustre: 6702:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713429972/real 1713429972] req@ffff88008e2b2d80 x1796661117498560/t0(0) o104->lustre-MDT0000@192.168.204.4@tcp:15/16 lens 328/224 e 0 to 1 dl 1713429988 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 466.108544] Lustre: 6702:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 498.111740] Lustre: 6702:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713430004/real 1713430004] req@ffff88008e2b2d80 x1796661117498560/t0(0) o104->lustre-MDT0000@192.168.204.4@tcp:15/16 lens 328/224 e 0 to 1 dl 1713430020 ref 1 fl Rpc:XQr/2/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 498.121442] Lustre: 6702:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [ 498.124750] LustreError: 6702:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.204.4@tcp) failed to reply to blocking AST (req@ffff88008e2b2d80 x1796661117498560 status 0 rc -110), evict it ns: mdt-lustre-MDT0000_UUID lock: ffff880129c321c0/0x5f9af70c76a16118 lrc: 4/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e344cd1a expref: 9 pid: 6702 timeout: 581 lvb_type: 0 [ 498.136387] LustreError: 138-a: lustre-MDT0000: A client on nid 192.168.204.4@tcp was evicted due to a lock blocking callback time out: rc -110 [ 498.139949] LustreError: 6691:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 16s: evicting client at 192.168.204.4@tcp ns: mdt-lustre-MDT0000_UUID lock: ffff880129c321c0/0x5f9af70c76a16118 lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e344cd1a expref: 10 pid: 6702 timeout: 0 lvb_type: 0 [ 499.910824] LustreError: 8854:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.204.4@tcp) failed to reply to blocking AST (req@ffff88008e2b1c00 x1796661117498880 status 0 rc -110), evict it ns: filter-lustre-OST0000_UUID lock: ffff880129c30240/0x5f9af70c76a1608c lrc: 4/0,0 mode: PW/PW res: [0x240000400:0x5:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4194303) gid 0 flags: 0x60000400030020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e344ccfe expref: 7 pid: 8854 timeout: 583 lvb_type: 0 [ 499.928235] 
LustreError: 138-a: lustre-OST0000: A client on nid 192.168.204.4@tcp was evicted due to a lock blocking callback time out: rc -110 [ 499.928705] LustreError: 6691:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 16s: evicting client at 192.168.204.4@tcp ns: filter-lustre-OST0001_UUID lock: ffff8800a6b6ef40/0x5f9af70c76a16038 lrc: 3/0,0 mode: PW/PW res: [0x280000400:0x4:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400030020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e344ccdb expref: 7 pid: 8169 timeout: 0 lvb_type: 0 [ 499.947689] LustreError: Skipped 1 previous similar message [ 501.720301] Lustre: DEBUG MARKER: == recovery-small test 10b: re-send BL AST =============== 04:47:04 (1713430024) [ 521.297672] Lustre: DEBUG MARKER: == recovery-small test 10c: re-send BL AST vs reconnect race (LU-5569) ========================================================== 04:47:23 (1713430043) [ 522.375633] Lustre: lustre-MDT0000: Client 7fb078a9-662f-4254-8dd6-83cc835d237f (at 192.168.204.4@tcp) reconnecting [ 522.379194] Lustre: Skipped 2 previous similar messages [ 526.130107] Lustre: DEBUG MARKER: == recovery-small test 10d: test failed blocking ast ===== 04:47:28 (1713430048) [ 527.794249] LustreError: 21670:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.204.4@tcp) returned error from blocking AST (req@ffff88008d382d80 x1796661117516032 status -71 rc -71), evict it ns: filter-lustre-OST0000_UUID lock: ffff8800a6b6fa80/0x5f9af70c76a164bb lrc: 4/0,0 mode: PW/PW res: [0x240000400:0x7:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000480000020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e344ceef expref: 5 pid: 21670 timeout: 627 lvb_type: 0 [ 527.808110] LustreError: 21670:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) Skipped 1 previous similar message [ 527.811461] LustreError: 138-a: lustre-OST0000: A client on nid 192.168.204.4@tcp was evicted due to a lock blocking callback time out: rc -71 [ 527.815775] LustreError: 6691:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 192.168.204.4@tcp ns: filter-lustre-OST0000_UUID lock: ffff8800a6b6fa80/0x5f9af70c76a164bb lrc: 3/0,0 mode: PW/PW res: [0x240000400:0x7:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000480000020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e344ceef expref: 6 pid: 21670 timeout: 0 lvb_type: 0 [ 527.828614] LustreError: 6691:0:(ldlm_lockd.c:261:expired_lock_main()) Skipped 1 previous similar message [ 531.803334] Lustre: DEBUG MARKER: == recovery-small test 10e: re-send BL AST vs reconnect race 2 ========================================================== 04:47:34 (1713430054) [ 532.198350] Lustre: DEBUG MARKER: SKIP: recovery-small test_10e need two clients [ 534.374091] Lustre: DEBUG MARKER: == recovery-small test 11: wake up a thread waiting for completion after eviction (b=2460) ========================================================== 04:47:36 (1713430056) [ 555.095941] Lustre: DEBUG MARKER: == recovery-small test 12: recover from timed out resend in ptlrpcd (b=2494) ========================================================== 04:47:57 (1713430077) [ 555.374200] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 597.722537] Lustre: DEBUG MARKER: == recovery-small test 13: mdc_readpage restart test (bug 1138) ========================================================== 04:48:40 (1713430120) [ 
617.428758] Lustre: DEBUG MARKER: == recovery-small test 14: mdc_readpage resend test (bug 1138) ========================================================== 04:48:59 (1713430139) [ 617.721427] Lustre: *** cfs_fail_loc=106, val=0*** [ 617.723205] Lustre: Skipped 1 previous similar message [ 620.893988] Lustre: DEBUG MARKER: == recovery-small test 15: failed open (-ENOMEM) ========= 04:49:03 (1713430143) [ 621.130751] Lustre: *** cfs_fail_loc=128, val=0*** [ 624.045530] Lustre: DEBUG MARKER: == recovery-small test 16: timeout bulk put, don't evict client (2732) ========================================================== 04:49:06 (1713430146) [ 624.424383] Lustre: *** cfs_fail_loc=504, val=0*** [ 624.426483] LustreError: 20739:0:(ldlm_lib.c:3601:target_bulk_io()) @@@ truncated bulk READ 0(102400) req@ffff88008f3ea300 x1796661112271104/t0(0) o3->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:692/0 lens 488/440 e 0 to 0 dl 1713430157 ref 1 fl Interpret:/200/0 rc 0/0 job:'cmp.0' uid:0 gid:0 [ 624.438164] Lustre: lustre-OST0001: Bulk IO read error with 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp), client will retry: rc -110 [ 664.407842] Lustre: DEBUG MARKER: == recovery-small test 17a: timeout bulk get, don't evict client (2732) ========================================================== 04:49:46 (1713430186) [ 709.031885] Lustre: DEBUG MARKER: == recovery-small test 17b: timeout bulk get, dont evict client (3582) ========================================================== 04:50:31 (1713430231) [ 709.373256] Lustre: DEBUG MARKER: SKIP: recovery-small test_17b Needs multiple clients [ 711.253333] Lustre: DEBUG MARKER: == recovery-small test 18a: manual ost invalidate clears page cache immediately ========================================================== 04:50:33 (1713430233) [ 714.322954] Lustre: DEBUG MARKER: == recovery-small test 18b: eviction and reconnect clears page cache (2766) ========================================================== 04:50:36 (1713430236) [ 714.703542] Lustre: 32614:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 94c133cb-2b77-4504-8a26-3bda0ab0280b at adminstrative request [ 739.621436] Lustre: DEBUG MARKER: == recovery-small test 18c: Dropped connect reply after eviction handing (14755) ========================================================== 04:51:02 (1713430262) [ 740.037117] Lustre: 1041:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 94c133cb-2b77-4504-8a26-3bda0ab0280b at adminstrative request [ 741.292426] Lustre: *** cfs_fail_loc=225, val=0*** [ 741.293724] Lustre: Skipped 1 previous similar message [ 756.262392] Lustre: DEBUG MARKER: == recovery-small test 19a: test expired_lock_main on mds (2867) ========================================================== 04:51:18 (1713430278) [ 756.725777] Lustre: *** cfs_fail_loc=304, val=0*** [ 772.382368] Lustre: *** cfs_fail_loc=304, val=0*** [ 788.380610] Lustre: lustre-MDT0000: Client 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp) reconnecting [ 788.383948] Lustre: Skipped 6 previous similar messages [ 788.387766] Lustre: *** cfs_fail_loc=304, val=0*** [ 796.763693] ptlrpc_watchdog_fire: 1 callbacks suppressed [ 796.765609] Lustre: mdt00_005: service thread pid 21669 was inactive for 40.039 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [ 796.771685] Pid: 21669, comm: mdt00_005 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 796.773730] Call Trace: [ 796.775106] [<0>] ldlm_completion_ast+0x963/0xd00 [ptlrpc] [ 796.777339] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [ 796.779440] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [ 796.781624] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [ 796.783882] [<0>] mdt_object_stripes_lock+0x126/0x660 [mdt] [ 796.786064] [<0>] mdt_reint_setattr+0x73b/0x15f0 [mdt] [ 796.787730] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [ 796.789087] [<0>] mdt_reint_internal+0x74c/0xbc0 [mdt] [ 796.790559] [<0>] mdt_reint+0x67/0x150 [mdt] [ 796.792517] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 796.794403] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 796.796273] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 796.797964] [<0>] kthread+0xe4/0xf0 [ 796.799401] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 796.801441] [<0>] 0xfffffffffffffffe [ 804.396900] Lustre: *** cfs_fail_loc=304, val=0*** [ 820.405707] Lustre: *** cfs_fail_loc=304, val=0*** [ 836.441916] Lustre: *** cfs_fail_loc=304, val=0*** [ 852.444866] Lustre: *** cfs_fail_loc=304, val=0*** [ 856.923612] LustreError: 6691:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.204.4@tcp ns: mdt-lustre-MDT0000_UUID lock: ffff8800a6f446c0/0x5f9af70c76a16d66 lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e344d1c7 expref: 16 pid: 21669 timeout: 856 lvb_type: 0 [ 860.433055] Lustre: DEBUG MARKER: == recovery-small test 19b: test expired_lock_main on ost (2867) ========================================================== 04:53:02 (1713430382) [ 892.535761] Lustre: *** cfs_fail_loc=304, val=0*** [ 892.537516] Lustre: Skipped 2 previous similar messages [ 956.565748] Lustre: *** cfs_fail_loc=304, val=0*** [ 956.567473] Lustre: Skipped 3 previous similar messages [ 961.115590] LustreError: 6691:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.204.4@tcp ns: filter-lustre-OST0000_UUID lock: ffff880129c33cc0/0x5f9af70c76a17037 lrc: 3/0,0 mode: PW/PW res: [0x240000400:0xe:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e344d34f expref: 6 pid: 8171 timeout: 960 lvb_type: 0 [ 964.545112] Lustre: DEBUG MARKER: == recovery-small test 19c: check reconnect and lock resend do not trigger expired_lock_main ========================================================== 04:54:46 (1713430486) [ 974.901595] Lustre: DEBUG MARKER: == recovery-small test 20a: ldlm_handle_enqueue error (should return error) ========================================================== 04:54:57 (1713430497) [ 978.128548] Lustre: DEBUG MARKER: == recovery-small test 20b: ldlm_handle_enqueue error (should return error) ========================================================== 04:55:00 (1713430500) [ 981.403712] Lustre: DEBUG MARKER: == recovery-small test 21a: drop close request while close and open are both in flight ========================================================== 04:55:03 (1713430503) [ 981.675007] LustreError: 21669:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout id 129 sleeping for 5000ms [ 982.977550] LustreError: 21669:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout interrupted [ 983.102802] Lustre: 
*** cfs_fail_loc=115, val=2147483648*** [ 1002.179166] Lustre: DEBUG MARKER: == recovery-small test 21b: drop open request while close and open are both in flight ========================================================== 04:55:24 (1713430524) [ 1146.609111] Lustre: DEBUG MARKER: == recovery-small test 21c: drop both request while close and open are both in flight ========================================================== 04:57:49 (1713430669) [ 1169.926340] Lustre: DEBUG MARKER: == recovery-small test 21d: drop close reply while close and open are both in flight ========================================================== 04:58:12 (1713430692) [ 1170.266200] LustreError: 6701:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout id 129 sleeping for 5000ms [ 1171.569518] LustreError: 6701:0:(mdt_open.c:1392:mdt_reint_open()) cfs_fail_timeout interrupted [ 1171.772779] Lustre: *** cfs_fail_loc=122, val=2147483648*** [ 1171.774159] LustreError: 23915:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800918cca80 x1796661112338816/t4294967554(0) o35->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:485/0 lens 392/456 e 0 to 0 dl 1713430705 ref 1 fl Interpret:/200/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 1171.782206] LustreError: 23915:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 1187.774468] Lustre: 23915:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88012c074850 x1796661112338816/t4294967554(0) o35->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:501/0 lens 392/456 e 0 to 0 dl 1713430721 ref 1 fl Interpret:/202/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 1191.233476] Lustre: DEBUG MARKER: == recovery-small test 21e: drop open reply while close and open are both in flight ========================================================== 04:58:33 (1713430713) [ 1191.534132] LustreError: 6701:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88008e1eed80 x1796661112343552/t4294967571(0) o36->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:629/0 lens 488/456 e 0 to 0 dl 1713430849 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1332.540862] Lustre: lustre-MDT0000: Client 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp) reconnecting [ 1332.544012] Lustre: Skipped 14 previous similar messages [ 1332.553026] Lustre: 6702:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88008d3c2d80 x1796661112343552/t4294967571(0) o36->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:15/0 lens 488/3152 e 0 to 0 dl 1713430990 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1334.527052] Lustre: DEBUG MARKER: == recovery-small test 21f: drop both reply while close and open are both in flight ========================================================== 05:00:56 (1713430856) [ 1334.839428] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 1334.841289] Lustre: Skipped 1 previous similar message [ 1334.843111] LustreError: 6702:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88008deee300 x1796661112353920/t4294967590(0) o36->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:18/0 lens 488/456 e 0 to 0 dl 1713430993 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1352.337069] Lustre: 6702:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88008deefb80 x1796661112353920/t4294967590(0) o36->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:35/0 lens 488/3152 e 0 to 0 dl 1713431010 ref 1 fl 
Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1352.345287] Lustre: 6702:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 1355.733868] Lustre: DEBUG MARKER: == recovery-small test 21g: drop open reply and close request while close and open are both in flight ========================================================== 05:01:18 (1713430878) [ 1356.040485] LustreError: 6702:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88008f3e9c00 x1796661112359040/t4294967609(0) o36->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:39/0 lens 488/456 e 0 to 0 dl 1713431014 ref 1 fl Interpret:/200/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1356.048406] LustreError: 6702:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 1357.493743] Lustre: *** cfs_fail_loc=115, val=2147483648*** [ 1357.495070] Lustre: Skipped 3 previous similar messages [ 1373.496160] Lustre: 8221:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88008d039c00 x1796661112359040/t4294967609(0) o36->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:56/0 lens 488/3152 e 0 to 0 dl 1713431031 ref 1 fl Interpret:/202/0 rc 0/0 job:'touch.0' uid:0 gid:0 [ 1376.724854] Lustre: DEBUG MARKER: == recovery-small test 21h: drop open request and close reply while close and open are both in flight ========================================================== 05:01:39 (1713430899) [ 1397.882557] Lustre: DEBUG MARKER: == recovery-small test 22: drop close request and do mknod ========================================================== 05:02:00 (1713430920) [ 1417.123594] Lustre: DEBUG MARKER: == recovery-small test 23: client hang when close a file after mds crash ========================================================== 05:02:19 (1713430939) [ 1423.091855] Lustre: Failing over lustre-MDT0000 [ 1423.205539] Lustre: server umount lustre-MDT0000 complete [ 1434.777427] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1434.880596] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1434.885640] Lustre: Skipped 1 previous similar message [ 1434.910813] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1434.931516] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1435.647041] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1437.363334] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 1437.365545] Lustre: 16954:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 180, extend: 0 [ 1437.380228] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 1437.396448] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:26 to 0x240000400:65) [ 1437.396546] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:21 to 0x280000400:65) [ 1437.913643] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1438.287338] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1439.917350] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 1442.545767] Lustre: DEBUG MARKER: == recovery-small test 24a: fsync error (should return error) ========================================================== 05:02:44 (1713430964) [ 1442.912420] Lustre: 18461:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 94c133cb-2b77-4504-8a26-3bda0ab0280b at adminstrative request [ 1446.032729] Lustre: DEBUG MARKER: == recovery-small test 24b: test dirty page discard due to client eviction ========================================================== 05:02:48 (1713430968) [ 1446.414179] Lustre: 19246:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 94c133cb-2b77-4504-8a26-3bda0ab0280b at adminstrative request [ 1449.519232] Lustre: DEBUG MARKER: == recovery-small test 26a: evict dead exports =========== 05:02:51 (1713430971) [ 1449.990612] Lustre: DEBUG MARKER: SKIP: recovery-small test_26a msg and ost1 are at the same node [ 1451.859783] Lustre: DEBUG MARKER: == recovery-small test 26b: evict dead exports =========== 05:02:54 (1713430974) [ 1452.264148] Lustre: DEBUG MARKER: SKIP: recovery-small test_26b msg and ost1 are at the same node [ 1454.132816] Lustre: DEBUG MARKER: == recovery-small test 27: fail LOV while using OSC's ==== 05:02:56 (1713430976) [ 1455.658850] Lustre: Failing over lustre-MDT0000 [ 1455.730750] mdt00_002 (16885) used greatest stack depth: 10072 bytes left [ 1455.777306] Lustre: server umount lustre-MDT0000 complete [ 1467.394318] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1467.498313] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1467.504645] Lustre: Skipped 1 previous similar message [ 1467.556106] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1467.618996] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1467.632372] mount.lustre (21973) used greatest stack depth: 9912 bytes left [ 1468.368392] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1472.525148] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 1472.531704] Lustre: Skipped 1 previous similar message [ 1474.236915] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 1474.240488] Lustre: 22837:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 180, extend: 0 [ 1474.272375] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 1474.279482] Lustre: 22837:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a1eb8000 x1796661112485888/t8589935291(0) o36->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:156/0 lens 504/2888 e 0 to 0 dl 1713431131 ref 1 fl Interpret:/202/0 rc 0/0 job:'writemany.0' uid:0 gid:0 [ 1474.287213] Lustre: 22837:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 1474.289797] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:182 to 0x240000400:225) [ 1474.289861] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:179 to 0x280000400:225) [ 1560.747931] Lustre: Failing over lustre-MDT0000 [ 1560.843882] mdt00_003 (22837) used greatest stack depth: 9816 bytes left [ 1560.893826] Lustre: server umount lustre-MDT0000 complete [ 1565.667596] Lustre: 3024:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713430947/real 1713430947] req@ffff88008ea3d180 x1796661117614272/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713431088 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 1565.673804] Lustre: 3024:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 4 previous similar messages [ 1572.649995] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1572.684257] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1572.685046] LustreError: 137-5: lustre-MDT0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1572.693347] Lustre: Skipped 1 previous similar message [ 1572.867235] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1572.899403] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1573.709469] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1577.805246] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 1577.807713] Lustre: Skipped 1 previous similar message [ 1579.468964] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 1579.471336] Lustre: 31584:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 180, extend: 0 [ 1579.503580] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 1579.508166] Lustre: 31584:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009c33fb80 x1796661118354048/t12884936756(0) o36->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:261/0 lens 512/2888 e 0 to 0 dl 1713431236 ref 1 fl Interpret:/202/0 rc 0/0 job:'writemany.0' uid:0 gid:0 [ 1579.523358] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:6007 to 0x280000400:6049) [ 1579.527708] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:6008 to 0x240000400:6049) [ 1582.580258] Lustre: DEBUG MARKER: == recovery-small test 28: handle error adding new clients (bug 6086) ========================================================== 05:05:04 (1713431104) [ 1598.665654] Lustre: 30968:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713431105/real 1713431105] req@ffff88008ac53100 x1796661118944256/t0(0) o104->lustre-MDT0000@192.168.204.4@tcp:15/16 lens 328/224 e 0 to 1 dl 1713431121 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 [ 1598.674212] Lustre: 30968:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 1600.176280] Lustre: *** cfs_fail_loc=12f, val=0*** [ 1600.178054] LustreError: 21668:0:(tgt_lastrcvd.c:1071:tgt_client_new()) lustre-OST0000: no room for 1 clients - fix LR_MAX_CLIENTS [ 1605.700871] Lustre: Failing over lustre-MDT0000 [ 1605.826514] Lustre: server umount lustre-MDT0000 complete [ 1617.462984] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1617.575520] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1617.606620] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1617.626899] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1618.391967] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1620.140264] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 1620.142678] Lustre: 1279:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-MDT0000: extended recovery timer reached hard limit: 180, extend: 0 [ 1620.156494] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 1620.172968] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:6007 to 0x280000400:6081) [ 1620.172970] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:6008 to 0x240000400:6081) [ 1620.738140] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1621.123386] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1622.605394] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 1622.608236] Lustre: Skipped 1 previous similar message [ 1625.291796] Lustre: DEBUG MARKER: == recovery-small test 29a: error adding new clients doesn't cause LBUG (bug 22273) ========================================================== 05:05:47 (1713431147) [ 1626.034940] Lustre: Failing over lustre-MDT0000 [ 1626.161625] Lustre: server umount lustre-MDT0000 complete [ 1628.078710] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1628.189362] Lustre: *** cfs_fail_loc=711, val=0*** [ 1628.189488] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1628.189489] Lustre: Skipped 1 previous similar message [ 1628.216513] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1628.235958] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1628.236073] Lustre: lustre-MDT0000: Aborting client recovery [ 1628.236076] LustreError: 3640:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1628.243667] Lustre: 3795:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1628.247074] Lustre: 3795:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client 94c133cb-2b77-4504-8a26-3bda0ab0280b@ [ 1628.251703] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 1628.261351] Lustre: lustre-MDT0000-osd: cancel update llog [0x200000400:0x1:0x0] [ 1628.295527] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:6007 to 0x280000400:6113) [ 1629.059896] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1633.213427] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 1633.215366] Lustre: Skipped 1 previous similar message [ 1633.217763] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:6008 to 0x240000400:6113) [ 1633.952158] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50 [ 1633.996678] Lustre: DEBUG MARKER: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec [ 1636.999583] Lustre: DEBUG MARKER: == recovery-small test 29b: error adding new clients doesn't cause LBUG (bug 22273) ========================================================== 05:05:59 (1713431159) [ 1637.731931] Lustre: Failing over lustre-OST0000 [ 1637.763911] Lustre: server umount lustre-OST0000 complete [ 1638.220181] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service 
will wait for recovery to complete [ 1638.224907] Lustre: Skipped 1 previous similar message [ 1638.226356] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 1638.231087] LustreError: Skipped 1 previous similar message [ 1639.677820] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 1639.683681] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 1639.683805] Lustre: lustre-OST0000: Aborting recovery [ 1639.683810] LustreError: 6230:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-OST0000: Aborting recovery [ 1639.688980] Lustre: Skipped 2 previous similar messages [ 1639.690000] Lustre: 6260:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1639.693197] Lustre: 6260:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1639.696887] Lustre: 6260:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-OST0000: disconnect stale client 94c133cb-2b77-4504-8a26-3bda0ab0280b@ [ 1639.700159] Lustre: lustre-OST0000: disconnecting 2 stale clients [ 1639.718677] LustreError: 6260:0:(ofd_obd.c:1315:ofd_iocontrol()) lustre-OST0000: iocontrol from 'tgt_recover_0' cmd=c00866c1 _IOWR('f', 193, 8) unrecognized: rc = -25 [ 1640.252698] Lustre: *** cfs_fail_loc=711, val=0*** [ 1640.946150] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1641.617179] LustreError: 167-0: lustre-OST0000-osc-MDT0000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. [ 1641.621078] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 1641.624697] Lustre: Skipped 1 previous similar message [ 1648.157440] Lustre: DEBUG MARKER: == recovery-small test 50: failover MDS under load ======= 05:06:10 (1713431170) [ 1658.694169] Lustre: Failing over lustre-MDT0000 [ 1658.831110] Lustre: server umount lustre-MDT0000 complete [ 1670.405335] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1670.637992] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1670.700538] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1670.702462] Lustre: Skipped 2 previous similar messages [ 1671.464716] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1675.283538] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 1675.335868] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 1675.354990] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:6813 to 0x240000400:6849) [ 1675.354996] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:6814 to 0x280000400:6849) [ 1675.548366] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1675.556051] Lustre: Skipped 1 previous similar message [ 1675.558848] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 1675.926418] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1676.322525] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1676.531511] Lustre: 3022:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713431182/real 1713431182] req@ffff88009cd8aa00 x1796661119051392/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713431198 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 1676.539293] Lustre: 3022:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 4 previous similar messages [ 1738.095829] Lustre: Failing over lustre-MDT0000 [ 1738.216102] Lustre: server umount lustre-MDT0000 complete [ 1747.659655] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713431130/real 1713431130] req@ffff8800a1479f80 x1796661118947776/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713431270 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 1747.668012] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [ 1750.064125] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1750.168670] Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1750.173211] Lustre: Skipped 1 previous similar message [ 1750.253852] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1750.309389] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1751.076282] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1755.213511] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 1755.216874] Lustre: Skipped 1 previous similar message [ 1755.483958] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 1755.528406] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 1755.544976] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:11206 to 0x280000400:11233) [ 1755.544983] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:11206 to 0x240000400:11233) [ 1756.135227] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1756.569927] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1818.338519] Lustre: Failing over lustre-MDT0000 [ 1818.474897] Lustre: server umount lustre-MDT0000 complete [ 1830.067485] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1830.179820] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1830.186761] Lustre: Skipped 1 previous similar message [ 1830.290374] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1830.318368] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1831.155552] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1835.229094] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 1835.231672] Lustre: Skipped 1 previous similar message [ 1835.677025] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 1835.720942] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. [ 1835.725141] Lustre: 17542:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880129c2e680 x1796661128102848/t34359765818(0) o36->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:394/0 lens 512/2888 e 0 to 0 dl 1713431369 ref 1 fl Interpret:/202/0 rc 0/0 job:'writemany.0' uid:0 gid:0 [ 1835.734719] Lustre: 17542:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 1 previous similar message [ 1835.735876] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:15785 to 0x240000400:15809) [ 1835.735948] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:15785 to 0x280000400:15809) [ 1836.227557] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713431342/real 1713431342] req@ffff88012ca27480 x1796661121031424/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713431358 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 1836.238157] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 6 previous similar messages [ 1836.342979] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1836.799521] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1861.131068] Lustre: DEBUG MARKER: == recovery-small test 51: failover MDS during recovery == 05:09:43 (1713431383) [ 1862.922715] Lustre: Failing over lustre-MDT0000 [ 1863.037127] Lustre: server umount lustre-MDT0000 complete [ 1875.837406] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1876.851749] Lustre: DEBUG MARKER: test_51: failover in 1 sec [ 
1878.511960] Lustre: Failing over lustre-MDT0000 [ 1878.525781] LustreError: 21292:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1878.534503] Lustre: 20580:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1878.539755] Lustre: 20580:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1878.544690] Lustre: lustre-MDT0000-osd: cancel update llog [0x200002b10:0x1:0x0] [ 1878.603308] Lustre: 20580:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 1878.746692] Lustre: server umount lustre-MDT0000 complete [ 1891.416142] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1892.266409] Lustre: DEBUG MARKER: test_51: failover in 5 sec [ 1897.746057] Lustre: Failing over lustre-MDT0000 [ 1897.751192] LustreError: 22898:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1897.754240] Lustre: 22226:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1897.758137] Lustre: 22226:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1897.761684] Lustre: lustre-MDT0000-osd: cancel update llog [0x200004a50:0x1:0x0] [ 1897.791801] Lustre: 22226:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 1897.905969] Lustre: server umount lustre-MDT0000 complete [ 1909.510308] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 1909.513675] LustreError: Skipped 2 previous similar messages [ 1910.439290] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1911.300726] Lustre: DEBUG MARKER: test_51: failover in 10 sec [ 1921.795256] Lustre: Failing over lustre-MDT0000 [ 1921.799609] LustreError: 24373:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1921.801661] Lustre: 23710:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1921.805338] Lustre: 23710:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1921.807847] Lustre: lustre-MDT0000-osd: cancel update llog [0x200005220:0x1:0x0] [ 1921.839127] Lustre: 23710:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 1921.970808] Lustre: server umount lustre-MDT0000 complete [ 1934.483627] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1935.319410] Lustre: DEBUG MARKER: test_51: failover in 20 sec [ 1955.805366] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 1955.808336] LustreError: 25086:0:(obd_class.h:888:obd_reconnect()) Device 5 not setup [ 1955.809974] Lustre: Failing over lustre-MDT0000 [ 1955.814155] LustreError: 25865:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 1955.817131] Lustre: 25198:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 1955.819522] Lustre: 25198:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 1955.821544] Lustre: lustre-MDT0000-osd: cancel update llog [0x2000059f0:0x1:0x0] [ 1955.838092] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 0 recovered and 1 was evicted. 
[ 1955.850390] Lustre: 25198:0:(mdt_handler.c:7951:mdt_postrecov()) lustre-MDT0000: auto trigger paused LFSCK failed: rc = -6 [ 1955.956137] Lustre: server umount lustre-MDT0000 complete [ 1967.665800] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 1967.669937] Lustre: Skipped 2 previous similar messages [ 1967.700005] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 1967.703285] Lustre: Skipped 4 previous similar messages [ 1967.724317] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 1967.727564] Lustre: Skipped 10 previous similar messages [ 1968.430378] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 1969.274928] Lustre: DEBUG MARKER: test_51: failover in 25 sec [ 1972.701697] Lustre: lustre-MDT0000-lwp-OST0000: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 1972.704804] Lustre: Skipped 3 previous similar messages [ 1975.699616] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713431482/real 1713431482] req@ffff8800a21efb80 x1796661121546240/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713431498 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 1975.706175] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 7 previous similar messages [ 1985.872109] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:17480 to 0x240000400:17505) [ 1985.872648] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:17480 to 0x280000400:17505) [ 1994.788514] Lustre: Failing over lustre-MDT0000 [ 1994.927645] Lustre: server umount lustre-MDT0000 complete [ 2007.738310] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 2008.578558] Lustre: DEBUG MARKER: test_51: failover in 30 sec [ 2010.925994] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:18145 to 0x240000400:18177) [ 2010.926002] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:18146 to 0x280000400:18177) [ 2039.110252] Lustre: Failing over lustre-MDT0000 [ 2039.224818] Lustre: server umount lustre-MDT0000 complete [ 2050.987082] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 2050.990466] LustreError: Skipped 3 previous similar messages [ 2052.037746] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 2055.907737] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 2055.911314] Lustre: Skipped 2 previous similar messages [ 2055.954119] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
[ 2055.956333] Lustre: Skipped 2 previous similar messages [ 2055.970097] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:20132 to 0x280000400:20161) [ 2055.970112] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:20132 to 0x240000400:20161) [ 2075.777052] Lustre: DEBUG MARKER: == recovery-small test 52: failover OST under load ======= 05:13:18 (1713431598) [ 2086.575954] Lustre: Failing over lustre-OST0000 [ 2086.765906] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_create to node 0@lo failed: rc = -107 [ 2086.769484] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 2088.616140] Lustre: server umount lustre-OST0000 complete [ 2091.028677] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.204.4@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2096.042938] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.204.4@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2096.048035] LustreError: Skipped 1 previous similar message [ 2101.532641] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 2103.930318] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2104.352371] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2417.224558] Lustre: Failing over lustre-OST0000 [ 2417.259042] Lustre: server umount lustre-OST0000 complete [ 2417.438058] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_create to node 0@lo failed: rc = -107 [ 2417.441293] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 2417.446444] Lustre: Skipped 6 previous similar messages [ 2417.448390] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2417.455722] LustreError: Skipped 1 previous similar message [ 2425.835973] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2425.839692] LustreError: Skipped 2 previous similar messages [ 2429.034103] Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180 [ 2429.037927] Lustre: Skipped 3 previous similar messages [ 2429.042761] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 2429.045403] Lustre: Skipped 3 previous similar messages [ 2430.135097] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 2430.137855] Lustre: Skipped 1 previous similar message [ 2430.300756] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 2431.002428] Lustre: lustre-OST0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. 
[ 2431.006283] Lustre: Skipped 1 previous similar message [ 2431.008628] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 2431.012236] Lustre: Skipped 6 previous similar messages [ 2432.681179] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2433.113211] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2702.271407] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x280000400 to 0x280000401 [ 2748.295656] Lustre: Failing over lustre-OST0000 [ 2748.339477] Lustre: server umount lustre-OST0000 complete [ 2748.350959] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.204.4@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 2748.354825] LustreError: Skipped 1 previous similar message [ 2760.323399] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 2761.723195] Lustre: lustre-OST0000: Will be in recovery for at least 1:00, or until 2 clients reconnect [ 2761.991844] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 2762.102266] Lustre: lustre-OST0000: Recovery over after 0:01, of 2 clients 2 recovered and 0 were evicted. [ 2764.375065] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2764.786220] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2782.200819] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x240000400 to 0x240000bd0 [ 3041.809299] Lustre: DEBUG MARKER: == recovery-small test 53a: touch: drop rep ============== 05:29:24 (1713432564) [ 3042.339490] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3042.342187] Lustre: Skipped 3 previous similar messages [ 3042.344437] LustreError: 30790:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8800a027e680 x1796661198734784/t0(0) o101->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:90/0 lens 576/688 e 0 to 0 dl 1713432575 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3042.354526] LustreError: 30790:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 1 previous similar message [ 3058.349362] Lustre: lustre-MDT0000: Client 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp) reconnecting [ 3058.351704] Lustre: Skipped 4 previous similar messages [ 3061.647026] Lustre: DEBUG MARKER: == recovery-small test 53b: touch: drop rep ============== 05:29:43 (1713432583) [ 3062.145410] LustreError: 31430:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009faff800 x1796661198741056/t0(0) o101->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:110/0 lens 576/688 e 0 to 0 dl 1713432595 ref 1 fl Interpret:/200/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3081.534812] Lustre: DEBUG MARKER: == recovery-small test 53c: touch: drop rep ============== 05:30:03 (1713432603) [ 3082.002408] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3082.004918] Lustre: Skipped 1 previous similar message [ 3082.007137] LustreError: 30789:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009fb1df80 x1796661198742528/t51540004447(0) o101->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:130/0 lens 664/664 e 0 to 0 dl 1713432615 ref 1 fl Interpret:/200/0 rc 0/0 
job:'openfile.0' uid:0 gid:0 [ 3098.003327] Lustre: 30834:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88009ea78a80 x1796661198742528/t51540004447(0) o101->94c133cb-2b77-4504-8a26-3bda0ab0280b@192.168.204.4@tcp:146/0 lens 664/3488 e 0 to 0 dl 1713432631 ref 1 fl Interpret:H/202/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 3098.009681] Lustre: 30834:0:(mdt_recovery.c:148:mdt_req_from_lrd()) Skipped 2 previous similar messages [ 3101.221573] Lustre: DEBUG MARKER: == recovery-small test 54: back in time ================== 05:30:23 (1713432623) [ 3111.838020] Lustre: Failing over lustre-MDT0000 [ 3111.984481] Lustre: server umount lustre-MDT0000 complete [ 3123.568273] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3123.673946] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3123.679780] Lustre: Skipped 2 previous similar messages [ 3123.705549] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3123.726209] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 3123.728430] Lustre: Skipped 1 previous similar message [ 3124.440810] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3126.339276] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:22327 to 0x280000401:22401) [ 3126.339334] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:18635 to 0x240000bd0:18945) [ 3126.841094] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3127.190379] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3128.701223] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 3128.703862] Lustre: Skipped 1 previous similar message [ 3131.450271] Lustre: DEBUG MARKER: == recovery-small test 55: ost_brw_read/write drops timed-out read/write request ========================================================== 05:30:53 (1713432653) [ 3131.699520] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713432638/real 1713432638] req@ffff880133e0d880 x1796661136092416/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713432654 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 3131.705350] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 11 previous similar messages [ 3135.240645] Lustre: *** cfs_fail_loc=21d, val=0*** [ 3135.242024] Lustre: Skipped 3 previous similar messages [ 3135.243197] LustreError: 8174:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.204.4@tcp because locking object 0x240000bd0:18947 took 0 seconds (limit was 11). 
[ 3135.247226] Lustre: lustre-OST0000: Bulk IO write error with 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp), client will retry: rc = -110 [ 3150.347782] Lustre: lustre-OST0000: Client 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp) reconnecting [ 3150.350343] Lustre: Skipped 2 previous similar messages [ 3150.353961] LustreError: 23235:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.204.4@tcp because locking object 0x240000bd0:18947 took 0 seconds (limit was 11). [ 3150.354212] Lustre: lustre-OST0000: Bulk IO write error with 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp), client will retry: rc = -110 [ 3150.354213] Lustre: Skipped 9 previous similar messages [ 3150.364588] LustreError: 23235:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 16 previous similar messages [ 3166.373523] LustreError: 23235:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.204.4@tcp because locking object 0x240000bd0:18947 took 0 seconds (limit was 11). [ 3166.373635] Lustre: lustre-OST0000: Bulk IO write error with 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp), client will retry: rc = -110 [ 3166.373636] Lustre: Skipped 7 previous similar messages [ 3166.382826] LustreError: 23235:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 8 previous similar messages [ 3182.377220] LustreError: 21107:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.204.4@tcp because locking object 0x240000bd0:18947 took 0 seconds (limit was 11). [ 3182.377238] Lustre: lustre-OST0000: Bulk IO write error with 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp), client will retry: rc = -110 [ 3182.377240] Lustre: Skipped 8 previous similar messages [ 3182.385618] LustreError: 21107:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 8 previous similar messages [ 3198.409112] LustreError: 23235:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.204.4@tcp because locking object 0x240000bd0:18947 took 0 seconds (limit was 11). [ 3198.409124] Lustre: lustre-OST0000: Bulk IO write error with 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp), client will retry: rc = -110 [ 3198.409125] Lustre: Skipped 20 previous similar messages [ 3198.426569] LustreError: 23235:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 32 previous similar messages [ 3214.414204] Lustre: *** cfs_fail_loc=21d, val=0*** [ 3214.414832] LustreError: 21107:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.204.4@tcp because locking object 0x240000bd0:18947 took 0 seconds (limit was 11). [ 3214.414851] Lustre: lustre-OST0000: Bulk IO write error with 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp), client will retry: rc = -110 [ 3214.414853] Lustre: Skipped 20 previous similar messages [ 3214.437335] Lustre: Skipped 88 previous similar messages [ 3230.427925] LustreError: 17990:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.204.4@tcp because locking object 0x240000bd0:18947 took 0 seconds (limit was 11). 
[ 3230.428666] Lustre: lustre-OST0000: Bulk IO write error with 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp), client will retry: rc = -110 [ 3230.428668] Lustre: Skipped 20 previous similar messages [ 3230.450999] LustreError: 17990:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 40 previous similar messages [ 3262.437169] LustreError: 21107:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.204.4@tcp because locking object 0x240000bd0:18947 took 0 seconds (limit was 11). [ 3262.437225] Lustre: lustre-OST0000: Bulk IO write error with 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp), client will retry: rc = -110 [ 3262.437228] Lustre: Skipped 41 previous similar messages [ 3262.456057] LustreError: 21107:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 40 previous similar messages [ 3310.443007] Lustre: lustre-OST0000: Client 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp) reconnecting [ 3310.449668] Lustre: Skipped 9 previous similar messages [ 3326.458276] LustreError: 17990:0:(tgt_handler.c:2780:tgt_brw_write()) lustre-OST0000: Dropping timed-out write from 12345-192.168.204.4@tcp because locking object 0x240000bd0:18947 took 0 seconds (limit was 11). [ 3326.458322] Lustre: lustre-OST0000: Bulk IO write error with 94c133cb-2b77-4504-8a26-3bda0ab0280b (at 192.168.204.4@tcp), client will retry: rc = -110 [ 3326.458324] Lustre: Skipped 82 previous similar messages [ 3326.465889] LustreError: 17990:0:(tgt_handler.c:2780:tgt_brw_write()) Skipped 83 previous similar messages [ 3342.466907] Lustre: *** cfs_fail_loc=21d, val=0*** [ 3342.469517] Lustre: Skipped 165 previous similar messages [ 3413.625591] Lustre: DEBUG MARKER: == recovery-small test 56: do not fail on getattr resend ========================================================== 05:35:36 (1713432936) [ 3413.877113] LustreError: 20559:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout id 136 sleeping for 40000ms [ 3453.879605] LustreError: 20559:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout id 136 awake [ 3457.688528] Lustre: DEBUG MARKER: == recovery-small test 57: read procfs entries causes kernel crash ========================================================== 05:36:20 (1713432980) [ 3459.453000] Lustre: Failing over lustre-MDT0000 [ 3459.587240] Lustre: server umount lustre-MDT0000 complete [ 3461.599800] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3461.812999] Lustre: lustre-MDT0000: Aborting client recovery [ 3461.813272] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:18948 to 0x240000bd0:18977) [ 3461.813280] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:22403 to 0x280000401:22433) [ 3462.863272] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3469.958728] Lustre: DEBUG MARKER: == recovery-small test 58: Eviction in the middle of open RPC reply processing ========================================================== 05:36:32 (1713432992) [ 3487.089687] Lustre: 24518:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713432993/real 1713432993] req@ffff88009fb1df80 x1796661136131584/t0(0) o104->lustre-MDT0000@192.168.204.4@tcp:15/16 lens 328/224 e 0 to 1 dl 1713433009 ref 1 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'' 
uid:4294967295 gid:4294967295 [ 3487.106431] Lustre: 24518:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 3494.670966] Lustre: DEBUG MARKER: == recovery-small test 59: Read cancel race on client eviction ========================================================== 05:36:56 (1713433016) [ 3504.979904] LustreError: 21708:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.204.4@tcp) returned error from blocking AST (req@ffff8800a6500a80 x1796661136134720 status -107 rc -107), evict it ns: filter-lustre-OST0001_UUID lock: ffff88009dad1440/0x5f9af70c776444cd lrc: 4/0,0 mode: PW/PW res: [0x280000401:0x57a2:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e362b282 expref: 5 pid: 8171 timeout: 3604 lvb_type: 0 [ 3504.996009] LustreError: 138-a: lustre-OST0001: A client on nid 192.168.204.4@tcp was evicted due to a lock blocking callback time out: rc -107 [ 3505.000629] LustreError: 6691:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 192.168.204.4@tcp ns: filter-lustre-OST0001_UUID lock: ffff88009dad1440/0x5f9af70c776444cd lrc: 3/0,0 mode: PW/PW res: [0x280000401:0x57a2:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->4095) gid 0 flags: 0x60000400000020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e362b282 expref: 6 pid: 8171 timeout: 0 lvb_type: 0 [ 3509.041845] Lustre: DEBUG MARKER: == recovery-small test 60: Add Changelog entries during MDS failover ========================================================== 05:37:11 (1713433031) [ 3509.085244] LustreError: 24518:0:(ldlm_lockd.c:780:ldlm_handle_ast_error()) ### client (nid 192.168.204.4@tcp) returned error from blocking AST (req@ffff8800a6501880 x1796661136135232 status -107 rc -107), evict it ns: mdt-lustre-MDT0000_UUID lock: ffff8801342b7840/0x5f9af70c776444e9 lrc: 4/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e362b290 expref: 6 pid: 25930 timeout: 3608 lvb_type: 0 [ 3509.116088] LustreError: 138-a: lustre-MDT0000: A client on nid 192.168.204.4@tcp was evicted due to a lock blocking callback time out: rc -107 [ 3509.121383] LustreError: 6691:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 0s: evicting client at 192.168.204.4@tcp ns: mdt-lustre-MDT0000_UUID lock: ffff8801342b7840/0x5f9af70c776444e9 lrc: 3/0,0 mode: PR/PR res: [0x200000007:0x1:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e362b290 expref: 7 pid: 25930 timeout: 0 lvb_type: 0 [ 3510.143303] Lustre: lustre-MDD0000: changelog on [ 3526.772976] Lustre: lustre-OST0000: haven't heard from client 2175d288-d19d-4dc4-a286-d3c7ce1aabc4 (at 192.168.204.4@tcp) in 32 seconds. I think it's dead, and I am evicting it. 
exp ffff8800a0c86000, cur 1713433049 expire 1713433019 last 1713433017 [ 3536.882999] Lustre: Failing over lustre-MDT0000 [ 3537.128291] Lustre: server umount lustre-MDT0000 complete [ 3549.677142] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3549.958464] Lustre: lustre-MDD0000: changelog on [ 3551.051731] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3554.849323] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 3554.854889] Lustre: Skipped 1 previous similar message [ 3554.937999] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. [ 3554.941961] Lustre: Skipped 1 previous similar message [ 3554.960172] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:24935 to 0x280000401:24961) [ 3554.960189] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:21479 to 0x240000bd0:21505) [ 3588.640795] Lustre: lustre-MDD0000: changelog off [ 3594.001268] Lustre: DEBUG MARKER: == recovery-small test 61: Verify to not reuse orphan objects - bug 17025 ========================================================== 05:38:36 (1713433116) [ 3595.366790] LustreError: 1108:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 3595.672177] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3596.830529] Lustre: Failing over lustre-MDT0000 [ 3596.985777] Lustre: server umount lustre-MDT0000 complete [ 3599.820619] Lustre: lustre-MDT0000: Aborting client recovery [ 3599.822089] LustreError: 2081:0:(ldlm_lib.c:2927:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery [ 3599.824484] Lustre: 2234:0:(ldlm_lib.c:2310:target_recovery_overseer()) recovery is aborted, evict exports in recovery [ 3599.827687] Lustre: 2234:0:(ldlm_lib.c:2310:target_recovery_overseer()) Skipped 2 previous similar messages [ 3599.830611] Lustre: 2234:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-MDT0000: disconnect stale client 8c7f8608-334c-497a-ac9a-ad95bd13a625@ [ 3599.835232] Lustre: 2234:0:(genops.c:1516:class_disconnect_stale_exports()) Skipped 1 previous similar message [ 3599.837546] Lustre: lustre-MDT0000: disconnecting 1 stale clients [ 3599.849155] Lustre: lustre-MDT0000-osd: cancel update llog [0x2000061c0:0x1:0x0] [ 3599.878615] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:21479 to 0x240000bd0:21537) [ 3599.878621] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:24935 to 0x280000401:24993) [ 3600.733204] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3608.493576] Lustre: DEBUG MARKER: == recovery-small test 65: lock enqueue for destroyed export ========================================================== 05:38:50 (1713433130) [ 3609.063512] LustreError: 21668:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout id 31e sleeping for 6000ms [ 3609.082327] Lustre: *** cfs_fail_loc=31e, val=0*** [ 3609.084384] Lustre: Skipped 2 previous similar messages [ 3611.068352] LustreError: 8854:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout id 31e sleeping for 6000ms [ 3613.407895] Lustre: 3786:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 
8c7f8608-334c-497a-ac9a-ad95bd13a625 at adminstrative request [ 3613.414416] LustreError: 11171:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) cfs_fail_timeout id 31e sleeping for 4000ms [ 3615.068502] LustreError: 21668:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout id 31e awake [ 3615.073202] LustreError: 21668:0:(ldlm_lockd.c:1499:ldlm_handle_enqueue()) ### lock on destroyed export ffff8800b4127000 ns: filter-lustre-OST0000_UUID lock: ffff88013019f3c0/0x5f9af70c776b9dce lrc: 3/0,0 mode: --/PW res: [0x240000bd0:0x5423:0x0].0x0 rrc: 4 type: EXT [0->4095] (req 0->4095) gid 0 flags: 0x70000000020020 nid: 192.168.204.4@tcp remote: 0x34fdcba6e363ba98 expref: 3 pid: 21668 timeout: 0 lvb_type: 0 [ 3615.773514] LustreError: 8854:0:(ldlm_lockd.c:1477:ldlm_handle_enqueue()) cfs_fail_timeout interrupted [ 3624.656844] Lustre: lustre-OST0000: Client 1a4706d2-a9c6-4a73-9e49-cd1e9c362656 (at 192.168.204.4@tcp) reconnecting [ 3624.659384] Lustre: Skipped 8 previous similar messages [ 3629.170970] Lustre: DEBUG MARKER: == recovery-small test 66: lock enqueue re-send vs client eviction ========================================================== 05:39:11 (1713433151) [ 3629.731327] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 3629.735500] LustreError: 2176:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88008aef5180 x1796661201092224/t0(0) o101->8c7f8608-334c-497a-ac9a-ad95bd13a625@192.168.204.4@tcp:678/0 lens 576/688 e 0 to 0 dl 1713433163 ref 1 fl Interpret:/200/0 rc 0/0 job:'stat.0' uid:0 gid:0 [ 3631.656859] LustreError: 2176:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout id 136 sleeping for 40000ms [ 3634.001507] Lustre: 4885:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 8c7f8608-334c-497a-ac9a-ad95bd13a625 at adminstrative request [ 3634.362502] LustreError: 2176:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) cfs_fail_timeout interrupted [ 3634.367225] LustreError: 2176:0:(mdt_handler.c:2320:mdt_getattr_name_lock()) Skipped 1 previous similar message [ 3638.890550] Lustre: DEBUG MARKER: == recovery-small test 67: connect vs import invalidate race ========================================================== 05:39:21 (1713433161) [ 3641.263646] Lustre: 5737:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 8c7f8608-334c-497a-ac9a-ad95bd13a625 at adminstrative request [ 3651.860606] Lustre: 3024:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713433119/real 1713433119] req@ffff88008fb05180 x1796661136477056/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713433174 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 3657.436096] Lustre: DEBUG MARKER: == recovery-small test 100: IR: Make sure normal recovery still works w/o IR ========================================================== 05:39:39 (1713433179) [ 3659.212416] Lustre: Failing over lustre-OST0000 [ 3659.232741] Lustre: server umount lustre-OST0000 complete [ 3659.868497] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3659.874290] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3659.882278] LustreError: Skipped 5 previous similar messages [ 3664.892940] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). 
If you are running an HA pair check that the target is mounted on the other server. [ 3664.900428] LustreError: Skipped 1 previous similar message [ 3673.501833] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3677.558895] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3678.198189] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3684.607583] Lustre: DEBUG MARKER: == recovery-small test 101a: IR: Make sure IR works w/o normal recovery ========================================================== 05:40:06 (1713433206) [ 3685.877610] Lustre: Failing over lustre-OST0000 [ 3685.899636] Lustre: server umount lustre-OST0000 complete [ 3686.749416] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. [ 3686.756055] LustreError: Skipped 2 previous similar messages [ 3698.343205] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 3700.160801] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3702.987919] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3703.576866] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3710.031656] Lustre: DEBUG MARKER: == recovery-small test 101b: IR: Make sure IR works w/o normal recovery and proceed EAGAIN ========================================================== 05:40:32 (1713433232) [ 3711.758340] Lustre: Failing over lustre-OST0000 [ 3711.777164] Lustre: server umount lustre-OST0000 complete [ 3711.806502] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.204.4@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. 
[ 3711.815029] LustreError: Skipped 3 previous similar messages [ 3724.227627] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 3724.239089] Lustre: lustre-OST0000: in recovery but waiting for the first client to connect [ 3724.239185] LustreError: 12544:0:(ofd_dev.c:651:ofd_prepare()) cfs_fail_timeout id 247 sleeping for 25000ms [ 3724.250659] Lustre: Skipped 6 previous similar messages [ 3749.338677] LustreError: 12544:0:(ofd_dev.c:651:ofd_prepare()) cfs_fail_timeout id 247 awake [ 3751.167196] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3754.289579] Lustre: lustre-OST0000-osc-MDT0000: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 3754.294547] Lustre: Skipped 9 previous similar messages [ 3755.159512] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3755.745261] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3761.592137] Lustre: DEBUG MARKER: == recovery-small test 102: IR: New client gets updated nidtbl after MGS restart ========================================================== 05:41:23 (1713433283) [ 3763.013295] Lustre: Failing over lustre-OST0000 [ 3763.032137] Lustre: server umount lustre-OST0000 complete [ 3764.285154] Lustre: lustre-OST0000-osc-MDT0000: Connection to lustre-OST0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 3764.292617] Lustre: Skipped 9 previous similar messages [ 3764.295863] LustreError: 137-5: lustre-OST0000: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server. 
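Tests 101a/101b/102 above exercise Imperative Recovery: after each "Failing over lustre-OST0000" the target is remounted and the harness polls until the client-side import reports FULL (the "wait_import_state_mount ... ost_server_uuid" markers). A short sketch of that kind of polling; the OST recovery_status path is assumed by analogy with the *.lustre-MDT0000.recovery_status path polled elsewhere in this log, while the ost_server_uuid path appears verbatim in the DEBUG MARKER lines:

    # server side: recovery progress of the restarted target
    lctl get_param *.lustre-OST0000.recovery_status

    # MDS/client side: import state of the OST, as polled by the
    # wait_import_state_mount helper seen in the DEBUG MARKER lines
    lctl get_param osc.lustre-OST0000-osc-*.ost_server_uuid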
[ 3764.303395] LustreError: Skipped 3 previous similar messages [ 3775.573486] Lustre: lustre-OST0000: Imperative Recovery enabled, recovery window shrunk from 60-180 down to 60-180 [ 3777.334884] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3780.265371] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3780.855106] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3783.970527] Lustre: Failing over lustre-MDT0000 [ 3784.174279] Lustre: server umount lustre-MDT0000 complete [ 3786.673554] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 3786.680354] LustreError: Skipped 1 previous similar message [ 3786.872949] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 3786.874736] Lustre: Skipped 4 previous similar messages [ 3786.906106] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:24995 to 0x280000401:25025) [ 3786.906565] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:21541 to 0x240000bd0:21569) [ 3787.845972] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3789.313855] Lustre: Failing over lustre-OST0000 [ 3789.363048] Lustre: server umount lustre-OST0000 complete [ 3791.852853] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3800.875593] Lustre: 3024:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713433307/real 1713433307] req@ffff880134021500 x1796661136507392/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713433323 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 3800.890559] Lustre: 3024:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 3803.676972] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3806.551773] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3812.552667] Lustre: DEBUG MARKER: == recovery-small test 103: IR: MDS can start w/o MGS and get updated nidtbl later ========================================================== 05:42:14 (1713433334) [ 3813.456306] Lustre: DEBUG MARKER: SKIP: recovery-small test_103 needs separate mgs and mds [ 3816.265610] Lustre: DEBUG MARKER: == recovery-small test 104: IR: ost can disable IR voluntarily ========================================================== 05:42:18 (1713433338) [ 3817.568144] Lustre: Failing over lustre-OST0000 [ 3817.586909] Lustre: server umount lustre-OST0000 complete [ 3818.076433] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_statfs to node 0@lo failed: rc = -107 [ 3820.391167] mount.lustre (20938) used greatest stack depth: 9608 bytes left [ 3822.190767] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3829.411597] Lustre: DEBUG MARKER: == recovery-small test 105: IR: NON IR clients support === 05:42:31 (1713433351) [ 3829.983596] Lustre: DEBUG MARKER: SKIP: recovery-small test_105 Needs multiple clients [ 3832.842223] Lustre: DEBUG MARKER: == recovery-small test 106: lightweight connection support 
========================================================== 05:42:35 (1713433355) [ 3834.943694] LustreError: 23213:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 3835.276558] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 3836.035325] Lustre: Failing over lustre-MDT0000 [ 3836.182369] Lustre: server umount lustre-MDT0000 complete [ 3850.029256] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3850.894839] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:24995 to 0x280000401:25057) [ 3850.894843] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:21571 to 0x240000bd0:21601) [ 3851.078986] LustreError: 23526:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 192.168.204.4@tcp arrived at 1713433373 with bad export cookie 6889090212931806568 [ 3851.086961] LustreError: 23526:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 3856.241217] Lustre: DEBUG MARKER: == recovery-small test 107: drop reint reply, then restart MDT ========================================================== 05:42:58 (1713433378) [ 3856.627533] Lustre: *** cfs_fail_loc=119, val=2147483648*** [ 3856.630426] LustreError: 24092:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff880091760380 x1796661201122304/t77309411332(0) o36->77b3e3fd-20b3-41c7-a052-9010c6e2817f@192.168.204.4@tcp:150/0 lens 504/448 e 0 to 0 dl 1713433390 ref 1 fl Interpret:/200/0 rc 0/0 job:'mkdir.0' uid:0 gid:0 [ 3857.637221] Lustre: Failing over lustre-MDT0000 [ 3857.821477] Lustre: server umount lustre-MDT0000 complete [ 3871.703818] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3872.664412] Lustre: 26519:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880134060700 x1796661201122304/t77309411332(0) o36->77b3e3fd-20b3-41c7-a052-9010c6e2817f@192.168.204.4@tcp:166/0 lens 504/2880 e 0 to 0 dl 1713433406 ref 1 fl Interpret:/202/0 rc 0/0 job:'mkdir.0' uid:0 gid:0 [ 3872.680500] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:24995 to 0x280000401:25089) [ 3872.680533] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:21571 to 0x240000bd0:21633) [ 3874.570609] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3875.170184] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3880.583252] Lustre: DEBUG MARKER: == recovery-small test 108: client eviction don't crash == 05:43:22 (1713433402) [ 3880.971355] Lustre: 27994:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 77b3e3fd-20b3-41c7-a052-9010c6e2817f at adminstrative request [ 3886.840017] Lustre: DEBUG MARKER: == recovery-small test 110a: create remote directory: drop client req ========================================================== 05:43:29 (1713433409) [ 3887.406067] Lustre: DEBUG MARKER: SKIP: recovery-small test_110a needs >= 2 MDTs [ 3890.237263] Lustre: DEBUG MARKER: == recovery-small test 110b: create remote directory: drop Master rep ========================================================== 05:43:32 (1713433412) [ 3890.790155] Lustre: DEBUG MARKER: SKIP: recovery-small test_110b needs >= 2 MDTs [ 
3893.714948] Lustre: DEBUG MARKER: == recovery-small test 110c: create remote directory: drop update rep on slave MDT ========================================================== 05:43:35 (1713433415) [ 3894.281525] Lustre: DEBUG MARKER: SKIP: recovery-small test_110c needs >= 2 MDTs [ 3897.156493] Lustre: DEBUG MARKER: == recovery-small test 110d: remove remote directory: drop client req ========================================================== 05:43:39 (1713433419) [ 3897.731010] Lustre: DEBUG MARKER: SKIP: recovery-small test_110d needs >= 2 MDTs [ 3900.449315] Lustre: DEBUG MARKER: == recovery-small test 110e: remove remote directory: drop master rep ========================================================== 05:43:42 (1713433422) [ 3900.921738] Lustre: DEBUG MARKER: SKIP: recovery-small test_110e needs >= 2 MDTs [ 3903.768890] Lustre: DEBUG MARKER: == recovery-small test 110f: remove remote directory: drop slave rep ========================================================== 05:43:46 (1713433426) [ 3904.266631] Lustre: DEBUG MARKER: SKIP: recovery-small test_110f needs >= 2 MDTs [ 3907.262140] Lustre: DEBUG MARKER: == recovery-small test 110g: drop reply during migration ========================================================== 05:43:49 (1713433429) [ 3907.807250] Lustre: DEBUG MARKER: SKIP: recovery-small test_110g needs >= 2 MDTs [ 3910.616853] Lustre: DEBUG MARKER: == recovery-small test 110h: drop update reply during cross-MDT file rename ========================================================== 05:43:52 (1713433432) [ 3911.177036] Lustre: DEBUG MARKER: SKIP: recovery-small test_110h needs >= 2 MDTs [ 3914.150612] Lustre: DEBUG MARKER: == recovery-small test 110i: drop update reply during cross-MDT dir rename ========================================================== 05:43:56 (1713433436) [ 3914.715554] Lustre: DEBUG MARKER: SKIP: recovery-small test_110i needs >= 2 MDTs [ 3917.636821] Lustre: DEBUG MARKER: == recovery-small test 110j: drop update reply during cross-MDT ln ========================================================== 05:43:59 (1713433439) [ 3918.164765] Lustre: DEBUG MARKER: SKIP: recovery-small test_110j needs >= 2 MDTs [ 3921.048981] Lustre: DEBUG MARKER: == recovery-small test 110k: FID_QUERY failed during recovery ========================================================== 05:44:03 (1713433443) [ 3921.631407] Lustre: DEBUG MARKER: SKIP: recovery-small test_110k needs >= 2 MDTS [ 3924.497982] Lustre: DEBUG MARKER: == recovery-small test 110m: update resent vs original RPC race ========================================================== 05:44:06 (1713433446) [ 3925.416048] Lustre: DEBUG MARKER: SKIP: recovery-small test_110m needs at least 2 MDTs [ 3928.170146] Lustre: DEBUG MARKER: == recovery-small test 111: mdd setup fail should not cause umount oops ========================================================== 05:44:10 (1713433450) [ 3929.144853] Lustre: Failing over lustre-MDT0000 [ 3929.315722] Lustre: server umount lustre-MDT0000 complete [ 3932.295645] Lustre: *** cfs_fail_loc=151, val=0*** [ 3932.297893] LustreError: 3533:0:(mdd_device.c:687:mdd_changelog_init()) lustre-MDD0000: changelog setup during init failed: rc = -5 [ 3932.302228] LustreError: 3533:0:(mdd_device.c:1402:mdd_prepare()) lustre-MDD0000: failed to initialize changelog: rc = -5 [ 3932.307014] LustreError: 3533:0:(tgt_mount.c:2223:server_fill_super()) Unable to start targets: -5 [ 3932.312143] Lustre: Failing over lustre-MDT0000 [ 3932.467212] Lustre: server umount lustre-MDT0000 complete [ 
3932.468795] LustreError: 3533:0:(super25.c:189:lustre_fill_super()) llite: Unable to mount : rc = -5 [ 3934.429877] LustreError: 4041:0:(ldlm_resource.c:1128:ldlm_resource_complain()) MGC192.168.204.104@tcp: namespace resource [0x65727473756c:0x0:0x0].0x0 (ffff8800a74ef000) refcount nonzero (1) after lock cleanup; forcing cleanup. [ 3934.442145] LustreError: 6699:0:(mgc_request.c:627:do_requeue()) failed processing log: -5 [ 3935.815918] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 3937.786024] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:21635 to 0x240000bd0:21665) [ 3937.786045] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:24995 to 0x280000401:25121) [ 3940.722386] Lustre: DEBUG MARKER: == recovery-small test 112a: bulk resend while orignal request is in progress ========================================================== 05:44:22 (1713433462) [ 3941.287325] LustreError: 17990:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 sleeping for 20000ms [ 3961.293666] LustreError: 17990:0:(tgt_handler.c:2714:tgt_brw_write()) cfs_fail_timeout id 214 awake [ 3965.649505] Lustre: DEBUG MARKER: == recovery-small test 115a: read: late REQ MDunlink and no bulk ========================================================== 05:44:47 (1713433487) [ 3974.495777] Lustre: DEBUG MARKER: == recovery-small test 115b: write: late REQ MDunlink and no bulk ========================================================== 05:44:56 (1713433496) [ 3978.605262] Lustre: *** cfs_fail_loc=215, val=2*** [ 3978.607927] Lustre: Skipped 63 previous similar messages [ 3983.147045] Lustre: DEBUG MARKER: == recovery-small test 115c: read: late Reply MDunlink and no bulk ========================================================== 05:45:05 (1713433505) [ 3989.111400] Lustre: DEBUG MARKER: == recovery-small test 115d: write: late Reply MDunlink and no bulk ========================================================== 05:45:11 (1713433511) [ 3995.213271] Lustre: DEBUG MARKER: == recovery-small test 115e: read: late Bulk MDunlink and no reply ========================================================== 05:45:17 (1713433517) [ 4001.309983] Lustre: DEBUG MARKER: == recovery-small test 115f: read: late REQ MDunlink and no reply ========================================================== 05:45:23 (1713433523) [ 4004.716320] LustreError: 18822:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff8801342a6d80 x1796661136546816/t0(0) o13->lustre-MDT0000-mdtlov_UUID@0@lo:298/0 lens 224/368 e 0 to 0 dl 1713433538 ref 1 fl Interpret:/200/0 rc 0/0 job:'osp-pre-1-0.0' uid:0 gid:0 [ 4009.861159] Lustre: DEBUG MARKER: == recovery-small test 115g: read: late REQ MDunlink and Reply MDunlink ========================================================== 05:45:32 (1713433532) [ 4073.494567] Lustre: DEBUG MARKER: == recovery-small test 120: flock race: completion vs. 
evict ========================================================== 05:46:35 (1713433595) [ 4075.925844] Lustre: 12112:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 77b3e3fd-20b3-41c7-a052-9010c6e2817f at adminstrative request [ 4089.998037] Lustre: 12393:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 77b3e3fd-20b3-41c7-a052-9010c6e2817f at adminstrative request [ 4090.004150] Lustre: 12393:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 1 previous similar message [ 4110.803065] Lustre: 12802:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 77b3e3fd-20b3-41c7-a052-9010c6e2817f at adminstrative request [ 4110.809638] Lustre: 12802:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 2 previous similar messages [ 4136.711106] Lustre: DEBUG MARKER: == recovery-small test 113: ldlm enqueue dropped reply should not cause deadlocks ========================================================== 05:47:39 (1713433659) [ 4137.013283] Lustre: *** cfs_fail_loc=157, val=2147483648*** [ 4137.014477] Lustre: Skipped 1 previous similar message [ 4137.015538] LustreError: 4051:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009074c700 x1796661201177408/t0(0) o101->77b3e3fd-20b3-41c7-a052-9010c6e2817f@192.168.204.4@tcp:430/0 lens 576/688 e 0 to 0 dl 1713433670 ref 1 fl Interpret:/200/0 rc 0/0 job:'stat.0' uid:0 gid:0 [ 4161.550860] Lustre: DEBUG MARKER: == recovery-small test 130a: enqueue resend on not existing file ========================================================== 05:48:03 (1713433683) [ 4162.088065] LustreError: 4052:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 sleeping for 10000ms [ 4172.090570] LustreError: 4052:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4207.239273] Lustre: DEBUG MARKER: == recovery-small test 130b: enqueue resend on a stale inode ========================================================== 05:48:49 (1713433729) [ 4217.806558] LustreError: 4075:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4262.725700] Lustre: lustre-MDT0000: Client 77b3e3fd-20b3-41c7-a052-9010c6e2817f (at 192.168.204.4@tcp) reconnecting [ 4262.730611] Lustre: Skipped 6 previous similar messages [ 4262.735795] Lustre: *** cfs_fail_loc=217, val=0*** [ 4266.711362] Lustre: DEBUG MARKER: == recovery-small test 130c: layout intent resend on a stale inode ========================================================== 05:49:48 (1713433788) [ 4269.346048] LustreError: 4052:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 sleeping for 10000ms [ 4269.348344] LustreError: 4052:0:(mdt_handler.c:5180:mdt_intent_opc()) Skipped 1 previous similar message [ 4279.349606] LustreError: 4052:0:(mdt_handler.c:5180:mdt_intent_opc()) cfs_fail_timeout id 160 awake [ 4294.788074] Lustre: DEBUG MARKER: == recovery-small test 132: long punch =================== 05:50:16 (1713433816) [ 4367.451611] Lustre: ll_ost_io00_006: service thread pid 17990 was inactive for 72.019 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: [ 4367.460363] Pid: 17990, comm: ll_ost_io00_006 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 [ 4367.464481] Call Trace: [ 4367.465794] [<0>] __cfs_fail_timeout_set+0xe9/0x210 [libcfs] [ 4367.468971] [<0>] ofd_punch_hdl+0xa8c/0xb40 [ofd] [ 4367.471500] [<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc] [ 4367.475543] [<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc] [ 4367.480238] [<0>] ptlrpc_main+0xc76/0x1690 [ptlrpc] [ 4367.484022] [<0>] kthread+0xe4/0xf0 [ 4367.486811] [<0>] ret_from_fork_nospec_begin+0x7/0x21 [ 4367.490351] [<0>] 0xfffffffffffffffe [ 4415.532535] LustreError: 17990:0:(ofd_dev.c:2089:ofd_punch_hdl()) cfs_fail_timeout id 236 awake [ 4420.370245] Lustre: DEBUG MARKER: == recovery-small test 131: IO vs evict results to IO under staled lock ========================================================== 05:52:22 (1713433942) [ 4422.329298] Lustre: 18749:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting 77b3e3fd-20b3-41c7-a052-9010c6e2817f at adminstrative request [ 4422.336160] Lustre: 18749:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 3 previous similar messages [ 4422.340789] LustreError: 11171:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) cfs_fail_timeout id 31e sleeping for 4000ms [ 4422.346408] LustreError: 11171:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) Skipped 1 previous similar message [ 4425.151678] LustreError: 11171:0:(ldlm_lockd.c:2996:ldlm_bl_thread_exports()) cfs_fail_timeout interrupted [ 4428.962985] Lustre: DEBUG MARKER: == recovery-small test 133: don't fail on flock resend === 05:52:31 (1713433951) [ 4430.489214] LustreError: 4051:0:(ldlm_lib.c:3271:target_send_reply_msg()) @@@ dropping reply req@ffff88009a1c6d80 x1796661201206976/t0(0) o101->77b3e3fd-20b3-41c7-a052-9010c6e2817f@192.168.204.4@tcp:7/0 lens 328/344 e 0 to 0 dl 1713434002 ref 1 fl Interpret:/200/0 rc 0/0 job:'multiop.0' uid:0 gid:0 [ 4430.501402] LustreError: 4051:0:(ldlm_lib.c:3271:target_send_reply_msg()) Skipped 2 previous similar messages [ 4490.029223] Lustre: DEBUG MARKER: == recovery-small test 134: race between failover and search for reply data free slot ========================================================== 05:53:32 (1713434012) [ 4490.589343] Lustre: DEBUG MARKER: SKIP: recovery-small test_134 Need 2+ clients, have 1 [ 4493.401955] Lustre: DEBUG MARKER: == recovery-small test 135: DOM: open/create resend to return size ========================================================== 05:53:35 (1713434015) [ 4548.978042] Lustre: 4051:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88008d0c3b80 x1796661201212352/t85899346086(0) o101->77b3e3fd-20b3-41c7-a052-9010c6e2817f@192.168.204.4@tcp:126/0 lens 648/3488 e 0 to 0 dl 1713434121 ref 1 fl Interpret:/202/0 rc 0/0 job:'openfile.0' uid:0 gid:0 [ 4550.933964] Lustre: DEBUG MARKER: SKIP: recovery-small test_136 skipping excluded test 136 [ 4552.249831] Lustre: DEBUG MARKER: == recovery-small test 137: late resend must be skipped if already applied ========================================================== 05:54:34 (1713434074) [ 4553.625916] LustreError: 14079:0:(mdt_reint.c:855:mdt_reint_setattr()) cfs_race id 525 sleeping [ 4558.631605] LustreError: 14079:0:(mdt_reint.c:855:mdt_reint_setattr()) cfs_fail_race id 525 awake: rc=0 [ 4558.659738] LustreError: 14079:0:(mdt_reint.c:855:mdt_reint_setattr()) cfs_fail_race id 525 waking [ 4610.373527] Lustre: DEBUG MARKER: == recovery-small test 138: Umount MDT during recovery === 
05:55:32 (1713434132) [ 4610.953276] Lustre: DEBUG MARKER: SKIP: recovery-small test_138 needs >= 2 MDTs [ 4613.446509] Lustre: DEBUG MARKER: == recovery-small test 139: corrupted catid won't cause crash ========================================================== 05:55:35 (1713434135) [ 4613.975876] Lustre: DEBUG MARKER: SKIP: recovery-small test_139 needs >= 2 MDTs [ 4616.444567] Lustre: DEBUG MARKER: == recovery-small test 140a: local mount is flagged properly ========================================================== 05:55:38 (1713434138) [ 4617.577744] Lustre: lustre-MDT0000: local client 41761acf-4df9-4f39-be95-86487c53c7fd w/o recovery [ 4617.582529] Lustre: Mounted lustre-client [ 4618.287278] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 4619.776529] Lustre: Unmounted lustre-client [ 4621.095916] Lustre: Mounted lustre-client [ 4621.851381] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 4623.323591] Lustre: Unmounted lustre-client [ 4628.363208] Lustre: DEBUG MARKER: == recovery-small test 140b: local mount is excluded from recovery ========================================================== 05:55:50 (1713434150) [ 4629.661153] Lustre: lustre-MDT0000: local client c2b362bb-2396-4b28-a409-ac61d0efa5d7 w/o recovery [ 4629.670742] Lustre: Mounted lustre-client [ 4630.409223] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 4631.486479] LustreError: 26920:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 4631.801039] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 4632.889563] Lustre: Unmounted lustre-client [ 4633.886492] Lustre: Failing over lustre-MDT0000 [ 4634.027908] Lustre: server umount lustre-MDT0000 complete [ 4646.521591] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 4646.532014] LustreError: Skipped 4 previous similar messages [ 4646.709264] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 4646.718159] Lustre: Skipped 12 previous similar messages [ 4646.754742] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180 [ 4646.758595] Lustre: Skipped 6 previous similar messages [ 4646.797596] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 4646.801468] Lustre: Skipped 6 previous similar messages [ 4647.694759] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 4647.698898] Lustre: Skipped 9 previous similar messages [ 4647.727093] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. 
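The "Failing over lustre-MDT0000 ... server umount lustre-MDT0000 complete ... Recovery over after 0:01" sequences above are produced by the harness stopping and restarting the target on the same node, which reduces to an unmount followed by a remount of the backing device. A rough sketch under assumed names (the osd-zfs dataset and mount point below are hypothetical, chosen only to illustrate the order of operations):

    # stop the MDT; OSTs and clients log "Connection ... was lost"
    umount /mnt/lustre-mds1

    # restart it; the target enters recovery ("Will be in recovery for
    # at least 1:00, or until 1 client reconnects") until the client
    # reconnects and replays its requests
    mount -t lustre lustre-mdt1/mdt1 /mnt/lustre-mds1

    # watch for recovery to finish ("Recovery over ...")
    lctl get_param *.lustre-MDT0000.recovery_status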
[ 4647.731722] Lustre: Skipped 9 previous similar messages [ 4647.753264] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:25134 to 0x280000401:25153) [ 4647.753270] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:21680 to 0x240000bd0:21697) [ 4647.887346] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 4650.722794] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4651.284121] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4651.742411] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 4651.746536] Lustre: Skipped 11 previous similar messages [ 4657.781798] Lustre: DEBUG MARKER: == recovery-small test 141: do not lose locks on MGS restart ========================================================== 05:56:20 (1713434180) [ 4658.644233] Lustre: DEBUG MARKER: SKIP: recovery-small test_141 cannot run in local mode or from build tree [ 4661.423297] Lustre: DEBUG MARKER: == recovery-small test 142: orphan name stub can be cleaned up in startup ========================================================== 05:56:23 (1713434183) [ 4661.798277] Lustre: *** cfs_fail_loc=165, val=0*** [ 4662.472177] Lustre: Failing over lustre-MDT0000 [ 4662.650038] Lustre: server umount lustre-MDT0000 complete [ 4666.846003] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 4667.767193] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:25134 to 0x280000401:25185) [ 4667.767295] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:21699 to 0x240000bd0:21729) [ 4672.491910] Lustre: DEBUG MARKER: == recovery-small test 143: orphan cleanup thread shouldn't be blocked even delete failed ========================================================== 05:56:34 (1713434194) [ 4673.208603] Lustre: Failing over lustre-MDT0000 [ 4673.387829] Lustre: server umount lustre-MDT0000 complete [ 4680.700950] Lustre: lustre-MDT0000: Not available for connect from 0@lo (not set up) [ 4680.705166] Lustre: Skipped 1 previous similar message [ 4681.904982] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 4682.806008] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:21699 to 0x240000bd0:21761) [ 4682.806095] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:25134 to 0x280000401:25217) [ 4682.815026] LustreError: 2648:0:(mdd_orphans.c:452:mdd_orphan_index_iterate()) lustre-MDD0000: bad FID [0x0:0x0:0x0] cleaning 'PENDING' [ 4683.487360] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 [ 4688.436352] Lustre: DEBUG MARKER: == recovery-small test 144a: MDT failover should stop precreation threads ========================================================== 05:56:50 (1713434210) [ 4689.756323] Lustre: 3022:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713434157/real 1713434157] req@ffff88009342c380 x1796661136620864/t0(0) o400->lustre-MDT0000-lwp-OST0000@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713434212 ref 1 fl 
Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 4689.767601] Lustre: 3022:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 13 previous similar messages [ 4690.105969] Lustre: Failing over lustre-OST0000 [ 4690.168829] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_create to node 0@lo failed: rc = -107 [ 4690.171360] Lustre: lustre-OST0000: Not available for connect from 0@lo (stopping) [ 4692.193391] Lustre: server umount lustre-OST0000 complete [ 4692.763971] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.204.4@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4692.775600] LustreError: Skipped 8 previous similar messages [ 4702.780251] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.204.4@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4702.788393] LustreError: Skipped 1 previous similar message [ 4706.526002] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 4709.076374] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4709.473238] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4730.755595] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713434198/real 1713434198] req@ffff880089a95880 x1796661136631680/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713434253 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 4730.768870] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 4771.402408] Lustre: Failing over lustre-MDT0000 [ 4771.660187] Lustre: server umount lustre-MDT0000 complete [ 4785.682141] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 4787.780886] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:46762 to 0x240000bd0:46817) [ 4787.780923] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:50218 to 0x280000401:50273) [ 4787.783855] LustreError: 8132:0:(mdd_orphans.c:452:mdd_orphan_index_iterate()) lustre-MDD0000: bad FID [0x0:0x0:0x0] cleaning 'PENDING' [ 4788.612589] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4789.199571] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4791.264285] Lustre: Failing over lustre-MDT0000 [ 4791.430637] Lustre: server umount lustre-MDT0000 complete [ 4805.447283] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 4807.532111] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000bd0:46762 to 0x240000bd0:46849) [ 4807.532162] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000401:50218 to 0x280000401:50305) [ 4807.532884] LustreError: 9845:0:(mdd_orphans.c:452:mdd_orphan_index_iterate()) lustre-MDD0000: bad FID [0x0:0x0:0x0] cleaning 'PENDING' [ 4808.338010] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4808.896504] 
Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4827.890371] Lustre: DEBUG MARKER: == recovery-small test 144b: orphan cleanup shouldn't be blocked for no objects+failover situation ========================================================== 05:59:10 (1713434350) [ 4829.679618] Lustre: Failing over lustre-OST0000 [ 4829.680770] LustreError: 11-0: lustre-OST0000-osc-MDT0000: operation ost_destroy to node 0@lo failed: rc = -19 [ 4829.683159] LustreError: Skipped 2 previous similar messages [ 4829.759535] Lustre: lustre-OST0000: Not available for connect from 192.168.204.4@tcp (stopping) [ 4831.634603] Lustre: 3023:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713434297/real 1713434297] req@ffff880090a95180 x1796661136706496/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713434352 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 4831.939574] Lustre: server umount lustre-OST0000 complete [ 4833.196687] LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.204.4@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 4846.016331] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 4848.344282] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4848.752805] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4855.624418] Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x280000401 to 0x280000402 [ 4858.409975] Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x240000bd0 to 0x2400013a0 [ 4987.173133] Lustre: DEBUG MARKER: == recovery-small test 144c: reconnection during orphan cleanup shouldn't lose LAST_ID synchronization ========================================================== 06:01:49 (1713434509) [ 5021.818549] Lustre: Failing over lustre-MDT0000 [ 5022.373730] Lustre: server umount lustre-MDT0000 complete [ 5026.917836] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 5028.491304] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 [ 5029.353554] LustreError: 5317:0:(ofd_dev.c:1523:ofd_create_hdl()) cfs_fail_timeout id 254 sleeping for 5000ms [ 5029.353909] LustreError: 20504:0:(mdd_orphans.c:452:mdd_orphan_index_iterate()) lustre-MDD0000: bad FID [0x0:0x0:0x0] cleaning 'PENDING' [ 5029.363266] LustreError: 5317:0:(ofd_dev.c:1523:ofd_create_hdl()) Skipped 1 previous similar message [ 5034.366568] LustreError: 5317:0:(ofd_dev.c:1523:ofd_create_hdl()) cfs_fail_timeout id 254 awake [ 5034.370551] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000402:9899 to 0x280000402:9985) [ 5034.453583] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2400013a0:15187 to 0x2400013a0:16577) [ 5034.789893] Lustre: lustre-OST0000: Client lustre-MDT0000-mdtlov_UUID (at 0@lo) reconnecting [ 5034.791534] Lustre: Skipped 3 previous similar messages [ 5052.076309] Lustre: DEBUG MARKER: == recovery-small test 145: connect mdtlovs and process update logs after recovery expire ========================================================== 06:02:54 (1713434574) [ 5052.649884] Lustre: DEBUG MARKER: SKIP: recovery-small 
test_145 needs >= 3 MDTs [ 5055.507000] Lustre: DEBUG MARKER: == recovery-small test 146: test eviction is counted properly ========================================================== 06:02:57 (1713434577) [ 5056.192267] Lustre: 22298:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-MDT0000: evicting 77b3e3fd-20b3-41c7-a052-9010c6e2817f at adminstrative request [ 5061.209686] Lustre: DEBUG MARKER: == recovery-small test 147: Check client reconnect ======= 06:03:03 (1713434583) [ 5062.009157] Lustre: *** cfs_fail_loc=225, val=0*** [ 5079.875596] Lustre: 3022:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713434547/real 1713434547] req@ffff8800852ea300 x1796661143211712/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713434602 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 5079.888592] Lustre: 3022:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 3 previous similar messages [ 5214.066681] Lustre: lustre-MDT0000: haven't heard from client lustre-MDT0000-lwp-OST0000_UUID (at 0@lo) in 50 seconds. I think it's dead, and I am evicting it. exp ffff880090156000, cur 1713434736 expire 1713434706 last 1713434686 [ 5215.099775] Lustre: lustre-OST0000: haven't heard from client 77b3e3fd-20b3-41c7-a052-9010c6e2817f (at 192.168.204.4@tcp) in 153 seconds. I think it's dead, and I am evicting it. exp ffff88008fe6f000, cur 1713434737 expire 1713434707 last 1713434584 [ 5215.103678] Lustre: Skipped 1 previous similar message [ 5229.846796] Lustre: DEBUG MARKER: == recovery-small test 148: data corruption through resend ========================================================== 06:05:52 (1713434752) [ 5258.678558] LustreError: 17990:0:(tgt_handler.c:2880:tgt_brw_write()) cfs_fail_timeout id 227 awake [ 5258.682593] LustreError: 17990:0:(tgt_handler.c:2880:tgt_brw_write()) Skipped 1 previous similar message [ 5265.703981] Lustre: DEBUG MARKER: == recovery-small test 149: skip orphan removal at umount ========================================================== 06:06:27 (1713434787) [ 5266.234825] Lustre: DEBUG MARKER: SKIP: recovery-small test_149 needs >= 2 MDTs [ 5269.005537] Lustre: DEBUG MARKER: == recovery-small test 150: statfs when MDT0 offline with lazystatfs option ========================================================== 06:06:31 (1713434791) [ 5269.555265] Lustre: DEBUG MARKER: SKIP: recovery-small test_150 needs >= 2 MDTs [ 5272.345326] Lustre: DEBUG MARKER: == recovery-small test 152: QoS object allocation could be awakened in case of OST failover ========================================================== 06:06:34 (1713434794) [ 5273.244093] Lustre: DEBUG MARKER: SKIP: recovery-small test_152 MDS Linux kernel does not support killable semaphore [ 5275.975287] Lustre: DEBUG MARKER: == recovery-small test 153: evict vs reconnect race ====== 06:06:38 (1713434798) [ 5299.958973] Lustre: Failing over lustre-MDT0000 [ 5300.106831] Lustre: server umount lustre-MDT0000 complete [ 5303.205779] LustreError: 166-1: MGC192.168.204.104@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail [ 5303.211637] LustreError: Skipped 5 previous similar messages [ 5303.395484] Lustre: lustre-MDT0000-lwp-OST0001: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete [ 5303.403147] Lustre: Skipped 16 previous similar messages [ 5303.451045] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery 
window 60-180 [ 5303.456037] Lustre: Skipped 7 previous similar messages [ 5303.505142] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect [ 5303.509052] Lustre: Skipped 7 previous similar messages [ 5304.597586] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 5306.157779] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 [ 5307.141003] Lustre: lustre-MDT0000: Will be in recovery for at least 1:00, or until 1 client reconnects [ 5307.145256] Lustre: Skipped 7 previous similar messages [ 5307.169404] Lustre: lustre-MDT0000: Recovery over after 0:01, of 1 clients 1 recovered and 0 were evicted. [ 5307.173869] Lustre: Skipped 7 previous similar messages [ 5307.198328] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000402:9987 to 0x280000402:10017) [ 5307.198492] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2400013a0:16581 to 0x2400013a0:16609) [ 5307.198739] LustreError: 28808:0:(mdd_orphans.c:452:mdd_orphan_index_iterate()) lustre-MDD0000: bad FID [0x0:0x0:0x0] cleaning 'PENDING' [ 5308.446225] Lustre: lustre-MDT0000-lwp-OST0001: Connection restored to 192.168.204.104@tcp (at 0@lo) [ 5308.450822] Lustre: Skipped 16 previous similar messages [ 5315.589041] Lustre: DEBUG MARKER: == recovery-small test 154a: corruption update llog can be skipped ========================================================== 06:07:17 (1713434837) [ 5316.165368] Lustre: DEBUG MARKER: SKIP: recovery-small test_154a needs >= 2 MDTs [ 5319.026889] Lustre: DEBUG MARKER: == recovery-small test 154b: restore update llog after failed recovery ========================================================== 06:07:21 (1713434841) [ 5319.581057] Lustre: DEBUG MARKER: SKIP: recovery-small test_154b needs >= 2 MDTs [ 5322.405979] Lustre: DEBUG MARKER: == recovery-small test 155: failover after client remount ========================================================== 06:07:24 (1713434844) [ 5325.395413] LustreError: 30907:0:(osd_handler.c:698:osd_ro()) lustre-MDT0000: *** setting device osd-zfs read-only *** [ 5325.738515] Lustre: DEBUG MARKER: mds1 REPLAY BARRIER on lustre-MDT0000 [ 5326.464393] Lustre: Failing over lustre-MDT0000 [ 5326.640959] Lustre: server umount lustre-MDT0000 complete [ 5339.753994] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000402:9987 to 0x280000402:10049) [ 5339.753996] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2400013a0:16611 to 0x2400013a0:16641) [ 5339.754956] LustreError: 31943:0:(mdd_orphans.c:452:mdd_orphan_index_iterate()) lustre-MDD0000: bad FID [0x0:0x0:0x0] cleaning 'PENDING' [ 5340.580838] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 5346.064242] Lustre: DEBUG MARKER: == recovery-small test 156: tot_granted miscount after client eviction ========================================================== 06:07:48 (1713434868) [ 5346.711284] Lustre: Setting parameter general.timeout in log params [ 5348.166097] LustreError: 1045:0:(osd_handler.c:698:osd_ro()) lustre-OST0000: *** setting device osd-zfs read-only *** [ 5348.487672] Lustre: DEBUG MARKER: ost1 REPLAY BARRIER on lustre-OST0000 [ 5349.503790] Lustre: Failing over lustre-OST0000 [ 5349.690907] Lustre: server umount lustre-OST0000 complete [ 5349.726557] 
LustreError: 137-5: lustre-OST0000: not available for connect from 192.168.204.4@tcp (no target). If you are running an HA pair check that the target is mounted on the other server. [ 5349.735313] LustreError: Skipped 2 previous similar messages [ 5363.953269] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug -1 all [ 5383.221686] Lustre: 3025:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713434850/real 1713434850] req@ffff880092e89180 x1796661143240448/t0(0) o400->lustre-MDT0000-lwp-OST0001@0@lo:12/10 lens 224/224 e 0 to 1 dl 1713434905 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 5383.234730] Lustre: 3025:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message [ 5403.635570] Lustre: lustre-OST0000: recovery is timed out, evict stale exports [ 5403.639386] Lustre: 2023:0:(genops.c:1516:class_disconnect_stale_exports()) lustre-OST0000: disconnect stale client eb3958bd-cd90-4922-baa6-717a7902842c@192.168.204.4@tcp [ 5403.646877] Lustre: lustre-OST0000: disconnecting 1 stale clients [ 5403.650747] Lustre: 2023:0:(ldlm_lib.c:1992:extend_recovery_timer()) lustre-OST0000: extended recovery timer reached hard limit: 45, extend: 1 [ 5403.675789] Lustre: 2023:0:(ldlm_lib.c:2874:target_recovery_thread()) too long recovery - read logs [ 5403.680850] LustreError: dumping log to /tmp/lustre-log.1713434926.2023 [ 5408.584251] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 5409.135598] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 5413.099122] Lustre: Modifying parameter general.timeout in log params [ 5415.852935] Lustre: DEBUG MARKER: == recovery-small test 157: eviction during mmaped i/o === 06:08:58 (1713434938) [ 5417.287587] Lustre: 3764:0:(genops.c:1659:obd_export_evict_by_uuid()) lustre-OST0000: evicting eb3958bd-cd90-4922-baa6-717a7902842c at adminstrative request [ 5417.294041] Lustre: 3764:0:(genops.c:1659:obd_export_evict_by_uuid()) Skipped 1 previous similar message [ 5421.925039] Lustre: DEBUG MARKER: == recovery-small test complete, duration 5321 sec ======= 06:09:04 (1713434944) [ 5506.732251] Lustre: Failing over lustre-MDT0000 [ 5506.802099] LustreError: 3024:0:(client.c:1281:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff880123855c00 x1796661143836160/t0(0) o6->lustre-OST0000-osc-MDT0000@0@lo:28/4 lens 544/432 e 0 to 0 dl 0 ref 1 fl Rpc:QU/200/ffffffff rc 0/-1 job:'osp-syn-0-0.0' uid:0 gid:0 [ 5506.937166] Lustre: server umount lustre-MDT0000 complete [ 5520.084291] Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000402:9987 to 0x280000402:10081) [ 5520.084382] Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x2400013a0:16644 to 0x2400013a0:16673) [ 5520.086795] LustreError: 9711:0:(mdd_orphans.c:452:mdd_orphan_index_iterate()) lustre-MDD0000: bad FID [0x0:0x0:0x0] cleaning 'PENDING' [ 5520.738922] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all [ 5523.541725] Lustre: DEBUG MARKER: oleg404-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 5524.092384] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 5529.676256] 
Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping) [ 5529.679586] Lustre: Skipped 1 previous similar message [ 5532.416147] Lustre: server umount lustre-MDT0000 complete [ 5533.905388] LustreError: 6686:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1713435056 with bad export cookie 6889090212933434495 [ 5533.913688] LustreError: 6686:0:(ldlm_lockd.c:2594:ldlm_cancel_handler()) Skipped 2 previous similar messages [ 5533.922986] Lustre: server umount lustre-OST0000 complete [ 5535.403928] Lustre: server umount lustre-OST0001 complete [ 5539.225050] Lustre: DEBUG MARKER: oleg404-server.virtnet: executing unload_modules_local [ 5539.976112] Key type lgssc unregistered [ 5540.063443] LNet: 11788:0:(lib-ptl.c:966:lnet_clear_lazy_portal()) Active lazy portal 0 on exit [ 5540.068082] LNet: Removed LNI 192.168.204.104@tcp
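The run ends with the harness unmounting every target and unloading the modules ("executing unload_modules_local", "Removed LNI 192.168.204.104@tcp"). The "Lustre: DEBUG MARKER: ==" banners that delimit each test throughout this log are produced by lctl mark calls issued from the test scripts; a brief sketch of how such a marker and the final teardown are typically issued (the mount points are hypothetical; lctl mark and lustre_rmmod are standard Lustre utilities):

    # emit a banner like the "DEBUG MARKER:" lines above into the
    # kernel debug log / console
    lctl mark "== recovery-small test NNN: description =="

    # end-of-run teardown: unmount all targets, then unload the modules
    umount /mnt/lustre-ost1 /mnt/lustre-ost2 /mnt/lustre-mds1
    lustre_rmmod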