[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.10.0-7.9-debug (green@centos7-base) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) ) #1 SMP Sat Mar 26 23:28:42 EDT 2022
[ 0.000000] Command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffd9fff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000bffda000-0x00000000bfffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000013edfffff] usable
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 3.0.0 present.
[ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-1.fc39 04/01/2014
[ 0.000000] Hypervisor detected: KVM
[ 0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[ 0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[ 0.000000] e820: last_pfn = 0x13ee00 max_arch_pfn = 0x400000000
[ 0.000000] MTRR default type: write-back
[ 0.000000] MTRR fixed ranges enabled:
[ 0.000000] 00000-9FFFF write-back
[ 0.000000] A0000-BFFFF uncachable
[ 0.000000] C0000-FFFFF write-protect
[ 0.000000] MTRR variable ranges enabled:
[ 0.000000] 0 base 0000C0000000 mask 3FFFC0000000 uncachable
[ 0.000000] 1 disabled
[ 0.000000] 2 disabled
[ 0.000000] 3 disabled
[ 0.000000] 4 disabled
[ 0.000000] 5 disabled
[ 0.000000] 6 disabled
[ 0.000000] 7 disabled
[ 0.000000] PAT configuration [0-7]: WB WC UC- UC WB WP UC- UC
[ 0.000000] e820: last_pfn = 0xbffda max_arch_pfn = 0x400000000
[ 0.000000] found SMP MP-table at [mem 0x000f5410-0x000f541f] mapped at [ffffffffff200410]
[ 0.000000] Base memory trampoline at [ffff880000099000] 99000 size 24576
[ 0.000000] Using GB pages for direct mapping
[ 0.000000] BRK [0x023c1000, 0x023c1fff] PGTABLE
[ 0.000000] BRK [0x023c2000, 0x023c2fff] PGTABLE
[ 0.000000] BRK [0x023c3000, 0x023c3fff] PGTABLE
[ 0.000000] BRK [0x023c4000, 0x023c4fff] PGTABLE
[ 0.000000] BRK [0x023c5000, 0x023c5fff] PGTABLE
[ 0.000000] BRK [0x023c6000, 0x023c6fff] PGTABLE
[ 0.000000] RAMDISK: [mem 0xbc2f2000-0xbffcffff]
[ 0.000000] Early table checksum verification disabled
[ 0.000000] ACPI: RSDP 00000000000f5220 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 00000000bffe1d6f 00034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 00000000bffe1c0b 00074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 00000000bffe0040 01BCB (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACS 00000000bffe0000 00040
[ 0.000000] ACPI: APIC 00000000bffe1c7f 00090 (v03 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: HPET 00000000bffe1d0f 00038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: WAET 00000000bffe1d47 00028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
[ 0.000000] ACPI: Local APIC address 0xfee00000
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000013edfffff]
[ 0.000000] NODE_DATA(0) allocated [mem 0x13e5e3000-0x13e609fff]
[ 0.000000] Reserving 128MB of memory at 768MB for crashkernel (System RAM: 4077MB)
[ 0.000000] kvm-clock: cpu 0, msr 1:3e592001, primary cpu clock
[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: using sched offset of 302045187 cycles
[ 0.000000] Zone ranges:
[ 0.000000] DMA [mem 0x00001000-0x00ffffff]
[ 0.000000] DMA32 [mem 0x01000000-0xffffffff]
[ 0.000000] Normal [mem 0x100000000-0x13edfffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x00001000-0x0009efff]
[ 0.000000] node 0: [mem 0x00100000-0xbffd9fff]
[ 0.000000] node 0: [mem 0x100000000-0x13edfffff]
[ 0.000000] Initmem setup node 0 [mem 0x00001000-0x13edfffff]
[ 0.000000] On node 0 totalpages: 1043832
[ 0.000000] DMA zone: 64 pages used for memmap
[ 0.000000] DMA zone: 21 pages reserved
[ 0.000000] DMA zone: 3998 pages, LIFO batch:0
[ 0.000000] DMA32 zone: 12224 pages used for memmap
[ 0.000000] DMA32 zone: 782298 pages, LIFO batch:31
[ 0.000000] Normal zone: 4024 pages used for memmap
[ 0.000000] Normal zone: 257536 pages, LIFO batch:31
[ 0.000000] ACPI: PM-Timer IO Port: 0x608
[ 0.000000] ACPI: Local APIC address 0xfee00000
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] ACPI: IRQ0 used by override.
[ 0.000000] ACPI: IRQ5 used by override.
[ 0.000000] ACPI: IRQ9 used by override.
[ 0.000000] ACPI: IRQ10 used by override.
[ 0.000000] ACPI: IRQ11 used by override.
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xbffda000-0xbfffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
[ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[ 0.000000] e820: [mem 0xc0000000-0xfeffbfff] available for PCI devices
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.000000] setup_percpu: NR_CPUS:5120 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
[ 0.000000] percpu: Embedded 38 pages/cpu s115176 r8192 d32280 u524288
[ 0.000000] pcpu-alloc: s115176 r8192 d32280 u524288 alloc=1*2097152
[ 0.000000] pcpu-alloc: [0] 0 1 2 3
[ 0.000000] KVM setup async PF for cpu 0
[ 0.000000] kvm-stealtime: cpu 0, msr 13e2135c0
[ 0.000000] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes)
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1027499
[ 0.000000] Policy zone: Normal
[ 0.000000] Kernel command line: rd.shell root=nbd:192.168.200.253:centos7:ext4:ro:-p,-b4096 ro crashkernel=128M panic=1 nomodeset ipmtu=9000 ip=dhcp rd.neednet=1 init_on_free=off mitigations=off console=ttyS1,115200 audit=0
[ 0.000000] audit: disabled (until reboot)
[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] x86/fpu: xstate_offset[2]: 0240, xstate_sizes[2]: 0100
[ 0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form
[ 0.000000] Memory: 3820316k/5224448k available (8172k kernel code, 1049120k absent, 355012k reserved, 5773k data, 2532k init)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU restricting CPUs from NR_CPUS=5120 to nr_cpu_ids=4.
[ 0.000000] Offload RCU callbacks from all CPUs
[ 0.000000] Offload RCU callbacks from CPUs: 0-3.
[ 0.000000] NR_IRQS:327936 nr_irqs:456 0
[ 0.000000] Console: colour *CGA 80x25
[ 0.000000] console [ttyS1] enabled
[ 0.000000] allocated 25165824 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] kmemleak: Kernel memory leak detector disabled
[ 0.000000] ODEBUG: 73 of 73 active objects replaced
[ 0.000000] hpet clockevent registered
[ 0.000000] tsc: Detected 2399.998 MHz processor
[ 0.473061] Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
[ 0.476037] pid_max: default: 32768 minimum: 301
[ 0.477999] Security Framework initialized
[ 0.479046] SELinux: Initializing.
[ 0.480312] SELinux: Starting in permissive mode
[ 0.481936] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
[ 0.486564] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
[ 0.489213] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.491725] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.495416] Initializing cgroup subsys memory
[ 0.498724] Initializing cgroup subsys devices
[ 0.500591] Initializing cgroup subsys freezer
[ 0.502094] Initializing cgroup subsys net_cls
[ 0.503831] Initializing cgroup subsys blkio
[ 0.505554] Initializing cgroup subsys perf_event
[ 0.507720] Initializing cgroup subsys hugetlb
[ 0.509097] Initializing cgroup subsys pids
[ 0.510377] Initializing cgroup subsys net_prio
[ 0.511882] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[ 0.515426] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.519239] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.523210] tlb_flushall_shift: 6
[ 0.525694] FEATURE SPEC_CTRL Present
[ 0.528301] FEATURE IBPB_SUPPORT Present
[ 0.530289] Spectre V2 : Enabling Indirect Branch Prediction Barrier
[ 0.534115] Spectre V2 : Vulnerable
[ 0.535804] Speculative Store Bypass: Vulnerable
[ 0.539249] debug: unmapping init [mem 0xffffffff82019000-0xffffffff8201ffff]
[ 0.550215] ACPI: Core revision 20130517
[ 0.554580] ACPI: All ACPI Tables successfully acquired
[ 0.556816] ftrace: allocating 30294 entries in 119 pages
[ 0.615158] Enabling x2apic
[ 0.617106] Enabled x2apic
[ 0.619158] Switched APIC routing to physical x2apic.
[ 0.622800] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.625225] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (fam: 06, model: 3e, stepping: 04)
[ 0.628573] TSC deadline timer enabled
[ 0.628808] Performance Events: IvyBridge events, full-width counters, Intel PMU driver.
[ 0.631481] ... version: 2
[ 0.632708] ... bit width: 48
[ 0.634301] ... generic registers: 4
[ 0.635960] ... value mask: 0000ffffffffffff
[ 0.637692] ... max period: 00007fffffffffff
[ 0.639276] ... fixed-purpose events: 3
[ 0.641584] ... event mask: 000000070000000f
[ 0.643548] KVM setup paravirtual spinlock
[ 0.649322] kvm-clock: cpu 1, msr 1:3e592041, secondary cpu clock
[ 0.653352] KVM setup async PF for cpu 1
[ 0.654958] kvm-stealtime: cpu 1, msr 13e2935c0
[ 0.658011] kvm-clock: cpu 2, msr 1:3e592081, secondary cpu clock
[ 0.660893] KVM setup async PF for cpu 2
[ 0.647242] smpboot: Booting Node 0, Processors #1 #2 #3 OK
[ 0.661630] kvm-stealtime: cpu 2, msr 13e3135c0
[ 0.661700] kvm-clock: cpu 3, msr 1:3e5920c1, secondary cpu clock
[ 0.663646] Brought up 4 CPUs
[ 0.663662] KVM setup async PF for cpu 3
[ 0.663669] kvm-stealtime: cpu 3, msr 13e3935c0
[ 0.667310] smpboot: Max logical packages: 1
[ 0.668699] smpboot: Total of 4 processors activated (19199.98 BogoMIPS)
[ 0.674985] devtmpfs: initialized
[ 0.676329] x86/mm: Memory block size: 128MB
[ 0.681394] EVM: security.selinux
[ 0.682465] EVM: security.ima
[ 0.683400] EVM: security.capability
[ 0.687647] atomic64 test passed for x86-64 platform with CX8 and with SSE
[ 0.690045] NET: Registered protocol family 16
[ 0.691922] cpuidle: using governor haltpoll
[ 0.693916] ACPI: bus type PCI registered
[ 0.695116] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.697423] PCI: Using configuration type 1 for base access
[ 0.699289] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on
[ 0.710618] ACPI: Added _OSI(Module Device)
[ 0.711816] ACPI: Added _OSI(Processor Device)
[ 0.713136] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.714784] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.716283] ACPI: Added _OSI(Linux-Dell-Video)
[ 0.718725] ACPI: EC: Look up EC in DSDT
[ 0.720946] ACPI: Interpreter enabled
[ 0.722277] ACPI: (supports S0 S3 S4 S5)
[ 0.723649] ACPI: Using IOAPIC for interrupt routing
[ 0.725368] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.728077] ACPI: Enabled 2 GPEs in block 00 to 0F
[ 0.735286] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[ 0.737550] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[ 0.739718] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[ 0.741827] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ 0.746074] acpiphp: Slot [2] registered
[ 0.747459] acpiphp: Slot [5] registered
[ 0.748837] acpiphp: Slot [6] registered
[ 0.750201] acpiphp: Slot [3] registered
[ 0.751576] acpiphp: Slot [4] registered
[ 0.752915] acpiphp: Slot [7] registered
[ 0.754284] acpiphp: Slot [8] registered
[ 0.755656] acpiphp: Slot [9] registered
[ 0.756968] acpiphp: Slot [10] registered
[ 0.758239] acpiphp: Slot [11] registered
[ 0.759639] acpiphp: Slot [12] registered
[ 0.761026] acpiphp: Slot [13] registered
[ 0.762340] acpiphp: Slot [14] registered
[ 0.763666] acpiphp: Slot [15] registered
[ 0.764965] acpiphp: Slot [16] registered
[ 0.766352] acpiphp: Slot [17] registered
[ 0.767762] acpiphp: Slot [18] registered
[ 0.769194] acpiphp: Slot [19] registered
[ 0.770437] acpiphp: Slot [20] registered
[ 0.772960] acpiphp: Slot [21] registered
[ 0.774781] acpiphp: Slot [22] registered
[ 0.776046] acpiphp: Slot [23] registered
[ 0.777001] acpiphp: Slot [24] registered
[ 0.778184] acpiphp: Slot [25] registered
[ 0.779403] acpiphp: Slot [26] registered
[ 0.780667] acpiphp: Slot [27] registered
[ 0.782112] acpiphp: Slot [28] registered
[ 0.783247] acpiphp: Slot [29] registered
[ 0.784446] acpiphp: Slot [30] registered
[ 0.785773] acpiphp: Slot [31] registered
[ 0.787090] PCI host bridge to bus 0000:00
[ 0.788472] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[ 0.790439] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[ 0.792740] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[ 0.794960] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
[ 0.797018] pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38007fffffff window]
[ 0.799076] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 0.802356] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
[ 0.803123] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
[ 0.804043] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
[ 0.808795] pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
[ 0.810901] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
[ 0.813766] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
[ 0.815914] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
[ 0.818386] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
[ 0.820841] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
[ 0.821575] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
[ 0.823975] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
[ 0.826731] pci 0000:00:02.0: [1af4:1000] type 00 class 0x020000
[ 0.828140] pci 0000:00:02.0: reg 0x10: [io 0xc100-0xc11f]
[ 0.835859] pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
[ 0.837189] pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
[ 0.838560] pci 0000:00:05.0: [1af4:1001] type 00 class 0x010000
[ 0.844946] pci 0000:00:05.0: reg 0x10: [io 0xc000-0xc07f]
[ 0.848232] pci 0000:00:05.0: reg 0x14: [mem 0xfebc0000-0xfebc0fff]
[ 0.856444] pci 0000:00:05.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
[ 0.866321] pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
[ 0.868832] pci 0000:00:06.0: reg 0x10: [io 0xc080-0xc0ff]
[ 0.871122] pci 0000:00:06.0: reg 0x14: [mem 0xfebc1000-0xfebc1fff]
[ 0.892626] pci 0000:00:06.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
[ 0.901470] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 0.904377] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[ 0.906657] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 0.909514] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 0.911133] ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
[ 0.915077] vgaarb: loaded
[ 0.916699] SCSI subsystem initialized
[ 0.918046] ACPI: bus type USB registered
[ 0.919133] usbcore: registered new interface driver usbfs
[ 0.920242] usbcore: registered new interface driver hub
[ 0.921533] usbcore: registered new device driver usb
[ 0.922956] PCI: Using ACPI for IRQ routing
[ 0.924311] PCI: pci_cache_line_size set to 64 bytes
[ 0.924480] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
[ 0.924486] e820: reserve RAM buffer [mem 0xbffda000-0xbfffffff]
[ 0.924488] e820: reserve RAM buffer [mem 0x13ee00000-0x13fffffff]
[ 0.924826] NetLabel: Initializing
[ 0.926170] NetLabel: domain hash size = 128
[ 0.927635] NetLabel: protocols = UNLABELED CIPSOv4
[ 0.929054] NetLabel: unlabeled traffic allowed by default
[ 0.930759] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[ 0.932168] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[ 0.936797] amd_nb: Cannot enumerate AMD northbridges
[ 0.938543] Switched to clocksource kvm-clock
[ 0.952695] pnp: PnP ACPI init
[ 0.953563] ACPI: bus type PNP registered
[ 0.954551] pnp 00:00: Plug and Play ACPI device, IDs PNP0303 (active)
[ 0.954633] pnp 00:01: Plug and Play ACPI device, IDs PNP0f13 (active)
[ 0.954685] pnp 00:02: [dma 2]
[ 0.954714] pnp 00:02: Plug and Play ACPI device, IDs PNP0700 (active)
[ 0.954789] pnp 00:03: Plug and Play ACPI device, IDs PNP0501 (active)
[ 0.954850] pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
[ 0.954912] pnp 00:05: Plug and Play ACPI device, IDs PNP0b00 (active)
[ 0.955225] pnp: PnP ACPI: found 6 devices
[ 0.956299] ACPI: bus type PNP unregistered
[ 0.966645] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
[ 0.966652] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
[ 0.966655] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[ 0.966657] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
[ 0.966659] pci_bus 0000:00: resource 8 [mem 0x380000000000-0x38007fffffff window]
[ 0.966739] NET: Registered protocol family 2
[ 0.968717] TCP established hash table entries: 32768 (order: 6, 262144 bytes)
[ 0.971635] TCP bind hash table entries: 32768 (order: 8, 1048576 bytes)
[ 0.974475] TCP: Hash tables configured (established 32768 bind 32768)
[ 0.976395] TCP: reno registered
[ 0.977626] UDP hash table entries: 2048 (order: 5, 196608 bytes)
[ 0.979728] UDP-Lite hash table entries: 2048 (order: 5, 196608 bytes)
[ 0.982017] NET: Registered protocol family 1
[ 0.984088] RPC: Registered named UNIX socket transport module.
[ 0.985897] RPC: Registered udp transport module.
[ 0.987269] RPC: Registered tcp transport module.
[ 0.988717] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 0.990712] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 0.992259] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 0.993706] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[ 0.995271] PCI: CLS 0 bytes, default 64
[ 0.995470] Unpacking initramfs...
[ 2.207393] debug: unmapping init [mem 0xffff8800bc2f2000-0xffff8800bffcffff]
[ 2.209429] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 2.210554] software IO TLB [mem 0xb82f2000-0xbc2f2000] (64MB) mapped at [ffff8800b82f2000-ffff8800bc2f1fff]
[ 2.212386] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 10737418240 ms ovfl timer
[ 2.213718] RAPL PMU: hw unit of domain pp0-core 2^-0 Joules
[ 2.214712] RAPL PMU: hw unit of domain package 2^-0 Joules
[ 2.215583] RAPL PMU: hw unit of domain dram 2^-0 Joules
[ 2.217900] cryptomgr_test (52) used greatest stack depth: 14128 bytes left
[ 2.218186] futex hash table entries: 1024 (order: 4, 65536 bytes)
[ 2.218227] Initialise system trusted keyring
[ 2.243308] HugeTLB registered 1 GB page size, pre-allocated 0 pages
[ 2.244502] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 2.248239] zpool: loaded
[ 2.248725] zbud: loaded
[ 2.249497] VFS: Disk quotas dquot_6.6.0
[ 2.250224] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 2.251701] NFS: Registering the id_resolver key type
[ 2.252614] Key type id_resolver registered
[ 2.253357] Key type id_legacy registered
[ 2.254022] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[ 2.255424] Key type big_key registered
[ 2.256095] SELinux: Registering netfilter hooks
[ 2.257512] cryptomgr_test (58) used greatest stack depth: 13968 bytes left
[ 2.263214] cryptomgr_test (63) used greatest stack depth: 13536 bytes left
[ 2.263409] NET: Registered protocol family 38
[ 2.263419] Key type asymmetric registered
[ 2.263422] Asymmetric key parser 'x509' registered
[ 2.263578] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[ 2.263689] io scheduler noop registered
[ 2.263693] io scheduler deadline registered (default)
[ 2.263777] io scheduler cfq registered
[ 2.263782] io scheduler mq-deadline registered
[ 2.263791] io scheduler kyber registered
[ 2.266328] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 2.266341] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 2.287015] intel_idle: does not run on family 6 model 62
[ 2.287134] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[ 2.288267] ACPI: Power Button [PWRF]
[ 2.289077] GHES: HEST is not enabled!
[ 2.329860] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ 2.380091] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 11
[ 2.433925] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 2.464693] 00:03: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[ 2.493710] 00:04: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 2.496830] Non-volatile memory driver v1.3
[ 2.498143] Linux agpgart interface v0.103
[ 2.499680] crash memory driver: version 1.1
[ 2.501587] nbd: registered device at major 43
[ 2.506182] virtio-pci 0000:00:05.0: irq 24 for MSI/MSI-X
[ 2.506221] virtio-pci 0000:00:05.0: irq 25 for MSI/MSI-X
[ 2.506264] virtio-pci 0000:00:05.0: irq 26 for MSI/MSI-X
[ 2.506300] virtio-pci 0000:00:05.0: irq 27 for MSI/MSI-X
[ 2.506343] virtio-pci 0000:00:05.0: irq 28 for MSI/MSI-X
[ 2.512652] virtio_blk virtio1: [vda] 67256 512-byte logical blocks (34.4 MB/32.8 MiB)
[ 2.517949] virtio-pci 0000:00:06.0: irq 29 for MSI/MSI-X
[ 2.517996] virtio-pci 0000:00:06.0: irq 30 for MSI/MSI-X
[ 2.518040] virtio-pci 0000:00:06.0: irq 31 for MSI/MSI-X
[ 2.518075] virtio-pci 0000:00:06.0: irq 32 for MSI/MSI-X
[ 2.518110] virtio-pci 0000:00:06.0: irq 33 for MSI/MSI-X
[ 2.524099] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB)
[ 2.528671] rdac: device handler registered
[ 2.530026] hp_sw: device handler registered
[ 2.531435] emc: device handler registered
[ 2.532729] libphy: Fixed MDIO Bus: probed
[ 2.535924] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 2.538064] ehci-pci: EHCI PCI platform driver
[ 2.539396] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[ 2.541041] ohci-pci: OHCI PCI platform driver
[ 2.542282] uhci_hcd: USB Universal Host Controller Interface driver
[ 2.544334] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 2.547848] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 2.549370] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 2.551263] mousedev: PS/2 mouse device common for all mice
[ 2.553641] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[ 2.553894] rtc_cmos 00:05: RTC can wake from S4
[ 2.554912] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0
[ 2.555406] rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
[ 2.559608] hidraw: raw HID events driver (C) Jiri Kosina
[ 2.559865] usbcore: registered new interface driver usbhid
[ 2.559866] usbhid: USB HID core driver
[ 2.559931] drop_monitor: Initializing network drop monitor service
[ 2.559999] Netfilter messages via NETLINK v0.30.
[ 2.560073] TCP: cubic registered
[ 2.560080] Initializing XFRM netlink socket
[ 2.560387] NET: Registered protocol family 10
[ 2.560900] NET: Registered protocol family 17
[ 2.560952] Key type dns_resolver registered
[ 2.561408] mce: Using 10 MCE banks
[ 2.561767] Loading compiled-in X.509 certificates
[ 2.562844] Loaded X.509 cert 'Magrathea: Glacier signing key: e34d0e1b7fcf5b414cce75d36d8482945c781ed6'
[ 2.562879] registered taskstats version 1
[ 2.565941] modprobe (71) used greatest stack depth: 13456 bytes left
[ 2.568147] Key type trusted registered
[ 2.569791] modprobe (77) used greatest stack depth: 13376 bytes left
[ 2.572049] Key type encrypted registered
[ 2.572104] IMA: No TPM chip found, activating TPM-bypass! (rc=-19)
[ 2.573228] BERT: Boot Error Record Table support is disabled. Enable it by using bert_enable as kernel parameter.
[ 2.574645] rtc_cmos 00:05: setting system clock to 2024-04-19 12:48:26 UTC (1713530906)
[ 2.589550] debug: unmapping init [mem 0xffffffff81da0000-0xffffffff82018fff]
[ 2.591033] Write protecting the kernel read-only data: 12288k
[ 2.592266] debug: unmapping init [mem 0xffff8800017fe000-0xffff8800017fffff]
[ 2.593544] debug: unmapping init [mem 0xffff880001b9b000-0xffff880001bfffff]
[ 2.600559] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.602848] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.604367] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.606922] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[ 2.611199] systemd[1]: Detected virtualization kvm.
[ 2.612801] systemd[1]: Detected architecture x86-64.
[ 2.614381] systemd[1]: Running in initial RAM disk.
[ 2.617946] systemd[1]: No hostname configured.
[ 2.618853] systemd[1]: Set hostname to .
[ 2.619899] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.621870] systemd[1]: Initializing machine ID from random generator.
[ 2.652889] dracut-rootfs-g (86) used greatest stack depth: 13264 bytes left
[ 2.654814] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.656915] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.658881] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.660830] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.664198] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.666441] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.676756] systemd[1]: Reached target Timers.
[ 2.679858] systemd[1]: Created slice Root Slice.
[ 2.682313] systemd[1]: Created slice System Slice.
[ 2.684575] systemd[1]: Listening on Journal Socket.
[ 2.687991] systemd[1]: Starting Load Kernel Modules...
[ 2.691059] systemd[1]: Starting Journal Service...
[ 2.693956] systemd[1]: Starting Setup Virtual Console...
[ 2.697551] systemd[1]: Reached target Local File Systems.
[ 2.700204] systemd[1]: Listening on udev Control Socket.
[ 2.704598] systemd[1]: Starting dracut cmdline hook...
[ 2.706791] systemd[1]: Reached target Swap.
[ 2.708762] systemd[1]: Reached target Slices.
[ 2.710817] systemd[1]: Listening on udev Kernel Socket.
[ 2.713014] systemd[1]: Reached target Sockets.
[ 2.716290] systemd[1]: Starting Create list of required static device nodes for the current kernel...
[ 2.721364] systemd[1]: Started Journal Service.
[ 2.904262] random: fast init done
[ 3.214791] libata version 3.00 loaded.
[ 3.218991] tsc: Refined TSC clocksource calibration: 2399.958 MHz
[ 3.223588] ata_piix 0000:00:01.1: version 2.13
[ 3.227499] scsi host0: ata_piix
[ 3.236270] scsi host1: ata_piix
[ 3.237723] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
[ 3.239852] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
[ 3.425234] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
[ 3.432416] ip (321) used greatest stack depth: 13080 bytes left
[ 3.489357] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 3.490996] ip (344) used greatest stack depth: 12464 bytes left
[ 3.538677] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 6.462453] EXT4-fs (nbd0): mounted filesystem with ordered data mode. Opts: (null)
[ 6.887311] systemd-journald[101]: Received SIGTERM from PID 1 (systemd).
[ 7.101128] SELinux: Disabled at runtime.
[ 7.104182] SELinux: Unregistering netfilter hooks
[ 7.184605] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 7.189277] systemd[1]: Inserted module 'ip_tables'
[ 7.660570] systemd-journald[566]: Received request to flush runtime journal from PID 1
[ 7.855867] input: PC Speaker as /devices/platform/pcspkr/input/input3
[ 7.865136] cryptd: max_cpu_qlen set to 1000
[ 7.888668] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
[ 7.922415] AVX version of gcm_enc/dec engaged.
[ 7.924390] AES CTR mode by8 optimization enabled
[ 7.978292] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[ 7.983511] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni)
[ 8.006639] Adding 1048572k swap on /dev/vdb. Priority:-2 extents:1 across:1048572k FS
[ 8.010147] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[ 8.152447] EDAC MC: Ver: 3.0.0
[ 8.157964] EDAC sbridge: Seeking for: PCI ID 8086:0ea0
[ 8.157973] EDAC sbridge: Ver: 1.1.2
[ 10.770652] mount.nfs (768) used greatest stack depth: 10704 bytes left
[ 21.402647] libcfs: loading out-of-tree module taints kernel.
[ 21.404199] libcfs: module verification failed: signature and/or required key missing - tainting kernel
[ 21.530347] alg: No test for adler32 (adler32-zlib)
[ 22.280702] libcfs: HW NUMA nodes: 1, HW CPU cores: 4, npartitions: 2
[ 22.411797] Lustre: Lustre: Build Version: 2.15.61_227_g2105b10
[ 22.571849] LNet: Added LNI 192.168.201.19@tcp [8/256/0/180]
[ 22.573290] LNet: Accept secure, port 988
[ 24.121655] Key type lgssc registered
[ 24.372658] Lustre: Echo OBD driver; http://www.lustre.org/
[ 58.182266] Lustre: Mounted lustre-client
[ 60.062053] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 72.956883] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing check_logdir /tmp/testlogs/
[ 73.772201] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing yml_node
[ 74.735273] Lustre: DEBUG MARKER: Client: 2.15.61.227
[ 75.330485] Lustre: DEBUG MARKER: MDS: 2.15.61.227
[ 75.895898] Lustre: DEBUG MARKER: OSS: 2.15.61.227
[ 76.247323] Lustre: DEBUG MARKER: -----============= acceptance-small: replay-single ============----- Fri Apr 19 08:49:39 EDT 2024
[ 77.536250] Lustre: DEBUG MARKER: excepting tests: 110f 131b 59 36
[ 78.145972] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing check_config_client /mnt/lustre
[ 82.299099] Lustre: DEBUG MARKER: Using TIMEOUT=20
[ 83.214803] Lustre: lustre-OST0000-osc-ffff8800b6d86800: disconnect after 24s idle
[ 97.873564] Lustre: DEBUG MARKER: == replay-single test 0a: empty replay =================== 08:50:01 (1713531001)
[ 98.931104] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 113.264301] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail
[ 113.264313] Lustre: lustre-MDT0000-mdc-ffff8800b6d86800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 113.274081] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc07654840c5b33 to 0xbcc07654840c7389
[ 113.277855] Lustre: MGC192.168.201.119@tcp: Connection restored to (at 192.168.201.119@tcp)
[ 114.556533] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 114.961359] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 119.270595] Lustre: 1826:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713531007/real 1713531007] req@ffff88012ff73480 x1796767404296896/t0(0) o400->lustre-MDT0000-mdc-ffff8800b6d86800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713531023 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 119.346190] Lustre: DEBUG MARKER: == replay-single test 0b: ensure object created after recover exists. (3284) ========================================================== 08:50:22 (1713531022)
[ 123.280746] Lustre: lustre-OST0000-osc-ffff8800b6d86800: Connection to lustre-OST0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 124.278648] Lustre: 1826:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713531012/real 1713531012] req@ffff88012ff72a00 x1796767404297152/t0(0) o400->lustre-MDT0000-mdc-ffff8800b6d86800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713531028 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 133.634783] Lustre: lustre-OST0000-osc-ffff8800b6d86800: Connection restored to 192.168.201.119@tcp (at 192.168.201.119@tcp)
[ 133.638367] Lustre: Skipped 1 previous similar message
[ 135.342167] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 135.742303] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
[ 140.372842] Lustre: DEBUG MARKER: == replay-single test 0c: check replay-barrier =========== 08:50:43 (1713531043)
[ 141.460500] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 141.497638] Lustre: Unmounted lustre-client
[ 155.188807] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9881000: operation mds_connect to node 192.168.201.119@tcp failed: rc = -16
[ 160.196100] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9881000: operation mds_connect to node 192.168.201.119@tcp failed: rc = -16
[ 165.204156] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9881000: operation mds_connect to node 192.168.201.119@tcp failed: rc = -16
[ 170.207878] random: crng init done
[ 170.211989] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9881000: operation mds_connect to node 192.168.201.119@tcp failed: rc = -16
[ 175.220486] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9881000: operation mds_connect to node 192.168.201.119@tcp failed: rc = -16
[ 185.237710] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9881000: operation mds_connect to node 192.168.201.119@tcp failed: rc = -16
[ 185.241127] LustreError: Skipped 1 previous similar message
[ 205.271104] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9881000: operation mds_connect to node 192.168.201.119@tcp failed: rc = -16
[ 205.274133] LustreError: Skipped 3 previous similar messages
[ 215.291973] Lustre: Mounted lustre-client
[ 219.391808] Lustre: DEBUG MARKER: == replay-single test 0d: expired recovery with no clients ========================================================== 08:52:03 (1713531123)
[ 220.413976] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 220.440546] Lustre: Unmounted lustre-client
[ 238.981306] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9887800: operation mds_connect to node 192.168.201.119@tcp failed: rc = -16
[ 238.984139] LustreError: Skipped 2 previous similar messages
[ 294.075106] Lustre: Mounted lustre-client
[ 298.185558] Lustre: DEBUG MARKER: == replay-single test 1: simple create =================== 08:53:21 (1713531201)
[ 299.193281] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 313.961567] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9887800: operation ldlm_enqueue to node 192.168.201.119@tcp failed: rc = -107
[ 313.965595] LustreError: Skipped 10 previous similar messages
[ 313.967327] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 313.995049] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection restored to (at 192.168.201.119@tcp)
[ 314.112132] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail
[ 314.116120] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc07654840c84c3 to 0xbcc07654840c8b30
[ 314.561410] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 314.933028] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 319.229403] Lustre: DEBUG MARKER: == replay-single test 2a: touch ========================== 08:53:42 (1713531222)
[ 320.118648] Lustre: 1825:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713531208/real 1713531208] req@ffff88012b46fb80 x1796767404321664/t0(0) o400->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713531224 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 320.212968] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 325.126590] Lustre: 1825:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713531213/real 1713531213] req@ffff88012b46f100 x1796767404321920/t0(0) o400->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713531229 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 334.144394] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 334.144401] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail
[ 334.145326] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc07654840c8b30 to 0xbcc07654840c8dc9
[ 334.145720] Lustre: MGC192.168.201.119@tcp: Connection restored to (at 192.168.201.119@tcp)
[ 334.145722] Lustre: Skipped 1 previous similar message
[ 334.170642] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a987c000 x1796767404324736/t21474836484(21474836484) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/608 e 0 to 0 dl 1713531254 ref 2 fl Interpret:RQU/204/0 rc 301/301 job:'touch.0' uid:0 gid:0
[ 335.571296] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 335.948463] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 340.026369] Lustre: DEBUG MARKER: == replay-single test 2b: touch ========================== 08:54:03 (1713531243)
[ 340.126583] Lustre: 1824:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713531228/real 1713531228] req@ffff88012b46ce00 x1796767404325248/t0(0) o400->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713531244 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 340.978901] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 354.175616] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail
[ 354.175645] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 354.183857] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a987ed80 x1796767404328768/t25769803781(25769803781) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/608 e 0 to 0 dl 1713531274 ref 2 fl Interpret:RQU/204/0 rc 301/301 job:'touch.0' uid:0 gid:0
[ 354.191514] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc07654840c8dc9 to 0xbcc07654840c9206
[ 354.195088] Lustre: MGC192.168.201.119@tcp: Connection restored to (at 192.168.201.119@tcp)
[ 354.198677] Lustre: Skipped 1 previous similar message
[ 356.307145] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 356.686559] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 360.206604] Lustre: 1826:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713531248/real 1713531248] req@ffff88012ff73800 x1796767404329280/t0(0) o400->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713531264 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 360.214961] Lustre: 1826:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message
[ 360.895792] Lustre: DEBUG MARKER: == replay-single test 2c: setstripe replay =============== 08:54:24 (1713531264)
[ 361.894658] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 374.208134] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail
[ 374.208187] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 374.216108] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc07654840c9206 to 0xbcc07654840c95be
[ 374.219462] Lustre: MGC192.168.201.119@tcp: Connection restored to (at 192.168.201.119@tcp)
[ 374.221532] Lustre: Skipped 1 previous similar message
[ 376.085420] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff88012b46c380 x1796767404332672/t30064771076(30064771076) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 536/608 e 0 to 0 dl 1713531296 ref 2 fl Interpret:RQU/204/0 rc 301/301 job:'lfs.0' uid:0 gid:0
[ 377.410569] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 377.810982] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 380.214593] Lustre: 1827:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713531268/real 1713531268] req@ffff88012ff73800 x1796767404333312/t0(0) o400->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713531284 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 380.222035] Lustre: 1827:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 1 previous similar message
[ 382.057608] Lustre: DEBUG MARKER: == replay-single test 2d: setdirstripe replay ============ 08:54:45 (1713531285)
[ 383.014254] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 397.783492] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 397.814729] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection restored to 192.168.201.119@tcp (at 192.168.201.119@tcp)
[ 397.818969] Lustre: Skipped 1 previous similar message
[ 398.344940] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 398.714853] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 399.248108] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail
[ 399.253713] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc07654840c95be to 0xbcc07654840c9d51
[ 402.899091] Lustre: DEBUG MARKER: == replay-single test 2e: O_CREAT|O_EXCL create replay === 08:55:06 (1713531306)
[ 405.179562] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 419.280188] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail
[ 419.284977] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc07654840c9d51 to 0xbcc07654840c9fb2
[ 420.373816] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 420.752268] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 424.864849] Lustre: DEBUG MARKER: == replay-single test 3a: replay failed open(O_DIRECTORY) ========================================================== 08:55:28 (1713531328)
[ 425.286600] Lustre: 1826:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713531313/real 1713531313] req@ffff88012ff73100 x1796767404340096/t0(0) o400->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713531329 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 425.294320] Lustre: 1826:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 4 previous similar messages
[ 425.874464] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 439.312277] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 439.313609] Lustre: MGC192.168.201.119@tcp: Connection restored to 192.168.201.119@tcp (at 192.168.201.119@tcp)
[ 439.313610] Lustre: Skipped 3 previous similar messages
[ 439.320913] Lustre: Skipped 1 previous similar message
[ 441.213752] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 441.592325] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 445.691771] Lustre: DEBUG MARKER: == replay-single test 3b: replay failed open -ENOMEM ===== 08:55:49 (1713531349)
[ 446.679794] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 461.961678] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9887800: operation mds_statfs to node 192.168.201.119@tcp failed: rc = -107
[ 461.966534] LustreError: Skipped 1 previous similar message
[ 462.500676] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 462.885548] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 464.352142] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail
[ 464.356007] LustreError: Skipped 1 previous similar message
[ 464.358925] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc07654840ca371 to 0xbcc07654840caa40
[ 464.362660] Lustre: Skipped 1 previous similar message
[ 467.034241] Lustre: DEBUG MARKER: == replay-single test 3c: replay failed open -ENOMEM ===== 08:56:10 (1713531370)
[ 467.985963] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 483.727814] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 484.119561] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 488.190360] Lustre: DEBUG MARKER: == replay-single test 4a: |x| 10 open(O_CREAT)s ========== 08:56:31 (1713531391)
[ 489.199005] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 490.390682] Lustre: 1827:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713531378/real 1713531378] req@ffff8800a987f100 x1796767404350336/t0(0) o400->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713531394 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 490.401221] Lustre: 1827:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 7 previous similar messages
[ 504.067343] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 504.071068] Lustre: Skipped 2 previous similar messages
[ 504.079918] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff88012ff73800 x1796767404353152/t55834574851(55834574851) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/608 e 0 to 0 dl 1713531423 ref 2 fl Interpret:RQU/204/0 rc 301/301 job:'bash.0' uid:0 gid:0
[ 504.125618] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection restored to 192.168.201.119@tcp (at 192.168.201.119@tcp)
[ 504.129038] Lustre: Skipped 5 previous similar messages
[ 504.655206] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 505.012665] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 509.199381] Lustre: DEBUG MARKER: == replay-single test 4b: |x| rm 10 files ================ 08:56:52 (1713531412)
[ 510.225806] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 525.440928] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 525.787893] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 529.818589] Lustre: DEBUG MARKER: == replay-single test 5: |x| 220 open(O_CREAT) =========== 08:57:13 (1713531433)
[ 530.797066] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 544.479911] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail
[ 544.485039] LustreError: Skipped 3 previous similar messages
[ 544.487749] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a9a5c380 x1796767404373696/t64424509443(64424509443) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/608 e 0 to 0 dl 1713531464 ref 2 fl Interpret:RQU/204/0 rc 301/301 job:'bash.0' uid:0 gid:0
[ 544.495510] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 9 previous similar messages
[ 544.498069] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc07654840cbcb5 to 0xbcc07654840cf12e
[ 544.501196] Lustre: Skipped 3 previous similar messages
[ 547.273693] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 547.646069] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 549.922129] grep (32599) used greatest stack depth: 10512 bytes left
[ 557.004369] Lustre: DEBUG MARKER: == replay-single test 6a: mkdir + contained create ======= 08:57:40 (1713531460)
[ 558.015697] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 573.456952] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 573.834479] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 580.278014] Lustre: DEBUG MARKER: == replay-single test 6b: |X| rmdir ====================== 08:58:03 (1713531483)
[ 581.290839] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 596.982719] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 597.363113] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 601.635642] Lustre: DEBUG MARKER: == replay-single test 7: mkdir |X| contained create ====== 08:58:25 (1713531505)
[ 602.607393] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 617.824194] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 618.217638] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 620.590613] Lustre: 1825:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713531508/real 1713531508] req@ffff88012ff73480 x1796767404625088/t0(0) o400->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713531524 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 620.597483] Lustre: 1825:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 12 previous similar messages
[ 622.478722] Lustre: DEBUG MARKER: == replay-single test 8: creat open |X| close ============ 08:58:46 (1713531526)
[ 623.511339] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 638.558581] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 638.562922] Lustre: Skipped 5 previous similar messages
[ 638.573353] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff880136984700 x1796767404629184/t81604378634(81604378634) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 664/608 e 0 to 0 dl 1713531558 ref 2 fl Interpret:RPQU/204/0 rc 301/301 job:'multiop.0' uid:0 gid:0
[ 638.581538] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 219 previous similar messages
[ 638.595438] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection restored to 192.168.201.119@tcp (at 192.168.201.119@tcp)
[ 638.598574] Lustre: Skipped 11 previous similar messages
[ 639.098143] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 639.488090] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 643.970158] Lustre: DEBUG MARKER: == replay-single test 9: |X| create (same inum/gen) ====== 08:59:07 (1713531547)
[ 644.923210] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 660.859792] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 661.223460] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 665.460412] Lustre: DEBUG MARKER: == replay-single test 10: create |X| rename unlink ======= 08:59:29 (1713531569)
[ 666.435799] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 679.695895] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail
[ 679.699692] LustreError: Skipped 5 previous similar messages
[ 679.702313] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc07654840d67ef to 0xbcc07654840d6c5d
[ 679.705610] Lustre: Skipped 5 previous similar messages
[ 681.771257] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 682.129219] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 686.398163] Lustre: DEBUG MARKER: == replay-single test 11: create open write rename |X| create-old-name read ========================================================== 08:59:49 (1713531589)
[ 687.545897] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 702.509579] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff880136986300 x1796767404641216/t94489280522(94489280522) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/608 e 0 to 0 dl 1713531622 ref 2 fl Interpret:RQU/204/0 rc 301/301 job:'bash.0' uid:0 gid:0
[ 703.087419] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 703.485289] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 707.689993] Lustre: DEBUG MARKER: == replay-single test 12: open, unlink |X| close ========= 09:00:11 (1713531611)
[ 708.760826] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 723.671674] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9887800: operation ldlm_enqueue to node 192.168.201.119@tcp failed: rc = -107
[ 723.675577] LustreError: Skipped 6 previous similar messages
[ 724.320363] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 724.687814] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 728.918856] Lustre: DEBUG MARKER: == replay-single test 13: open chmod 0 |x| write close === 09:00:32 (1713531632)
[ 730.038519] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 744.805677] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a9bbf480 x1796767404641600/t94489280524(94489280524) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 576/608 e 0 to 0 dl 1713531664 ref 2 fl Interpret:RPQU/204/0 rc 301/301 job:'grep.0' uid:0 gid:0
[ 744.812359] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 3 previous similar messages
[ 745.414763] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 745.794362] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 750.054368] Lustre: DEBUG MARKER: == replay-single test 14: open(O_CREAT), unlink |X| close ========================================================== 09:00:53 (1713531653)
[ 751.103429] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 766.668113] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 767.035408] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 771.233434] Lustre: DEBUG MARKER: == replay-single test 15: open(O_CREAT), unlink |X| touch new, close ========================================================== 09:01:14 (1713531674)
[ 772.270607] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 787.831143] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 788.215203] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 792.413350] Lustre: DEBUG MARKER: == replay-single test 16: |X| open(O_CREAT), unlink, touch new, unlink new ========================================================== 09:01:36 (1713531696)
[ 793.455341] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 808.750040] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 809.133885] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 813.348149] Lustre: DEBUG MARKER: == replay-single test 17: |X| open(O_CREAT), |replay| close ========================================================== 09:01:56 (1713531716)
[ 814.359886] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 829.104049] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a9bbf480 x1796767404641600/t94489280524(94489280524) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 576/608 e 0 to 0 dl 1713531749 ref 2 fl Interpret:RPQU/204/0 rc 301/301 job:'grep.0' uid:0 gid:0
[ 829.111291] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 7 previous similar messages
[ 829.694447] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 830.057385] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 834.293466] Lustre: DEBUG MARKER: == replay-single test 18: open(O_CREAT), unlink, touch new, close, touch, unlink ========================================================== 09:02:17 (1713531737)
[ 835.320411] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 850.748941] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 851.128334] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 855.270942] Lustre: DEBUG MARKER: == replay-single test 19: mcreate, open, write, rename === 09:02:38 (1713531758)
[ 856.243898] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 871.760296] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 872.144657] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 876.709649] Lustre: DEBUG MARKER: == replay-single test 20a: |X| open(O_CREAT), unlink, replay, close (test mds_cleanup_orphans) ========================================================== 09:03:00 (1713531780)
[ 877.923605] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 881.014740] Lustre: 1827:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713531768/real 1713531768] req@ffff8800a9bbe300 x1796767404678912/t0(0) o400->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713531784 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0
[ 881.030155] Lustre: 1827:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 30 previous similar messages
[ 893.465545] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 893.845112] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 895.041134] Lustre: MGC192.168.201.119@tcp: Connection restored to 192.168.201.119@tcp (at 192.168.201.119@tcp)
[ 895.044361] Lustre: Skipped 24 previous similar messages
[ 897.975920] Lustre: DEBUG MARKER: == replay-single test 20b: write, unlink, eviction, replay (test mds_cleanup_orphans) ========================================================== 09:03:21 (1713531801)
[ 899.917352] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 899.923067] Lustre: Skipped 12 previous similar messages
[ 899.925747] LustreError: 167-0: lustre-MDT0000-mdc-ffff8800a9887800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
[ 899.930018] LustreError: 26427:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff8800a9887800: inode [0x200001b71:0x132:0x0] mdc close failed: rc = -5
[ 899.933702] LustreError: 26427:0:(file.c:264:ll_close_inode_openhandle()) Skipped 1 previous similar message
[ 915.946778] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 916.503045] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 924.857835] Lustre: DEBUG MARKER: before 8192, after 8192
[ 928.864244] Lustre: DEBUG MARKER: == replay-single test 20c: check that client eviction does not affect file content ========================================================== 09:03:52 (1713531832)
[ 930.099248] LustreError: 167-0: lustre-MDT0000-mdc-ffff8800a9887800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
[ 934.303012] Lustre: DEBUG MARKER: == replay-single test 21: |X| open(O_CREAT), unlink touch new, replay, close (test mds_cleanup_orphans) ========================================================== 09:03:57 (1713531837) [ 935.356666] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 950.128416] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail [ 950.134542] LustreError: Skipped 11 previous similar messages [ 950.138631] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc07654840da3d1 to 0xbcc07654840da8e0 [ 950.141481] Lustre: Skipped 11 previous similar messages [ 950.825946] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 951.174594] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 955.968293] Lustre: DEBUG MARKER: == replay-single test 22: open(O_CREAT), |X| unlink, replay, close (test mds_cleanup_orphans) ========================================================== 09:04:19 (1713531859) [ 957.188837] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 971.353353] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a9874380 x1796767404701568/t146028888070(146028888070) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/608 e 0 to 0 dl 1713531891 ref 2 fl Interpret:RPQU/204/0 rc 301/301 job:'multiop.0' uid:0 gid:0 [ 971.370110] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 10 previous similar messages [ 973.934508] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 974.443571] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 979.701951] Lustre: DEBUG MARKER: == 
replay-single test 23: open(O_CREAT), |X| unlink touch new, replay, close (test mds_cleanup_orphans) ========================================================== 09:04:43 (1713531883) [ 981.107590] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 996.776281] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 997.135231] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1002.366158] Lustre: DEBUG MARKER: == replay-single test 24: open(O_CREAT), replay, unlink, close (test mds_cleanup_orphans) ========================================================== 09:05:05 (1713531905) [ 1003.422286] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1019.971535] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1020.399571] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1025.061163] Lustre: DEBUG MARKER: == replay-single test 25: open(O_CREAT), unlink, replay, close (test mds_cleanup_orphans) ========================================================== 09:05:28 (1713531928) [ 1026.512830] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1043.069436] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1043.655106] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1048.688175] Lustre: DEBUG MARKER: == replay-single test 26: |X| open(O_CREAT), unlink two, close one, replay, close one (test mds_cleanup_orphans) ========================================================== 09:05:52 (1713531952) [ 1049.695953] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1066.949475] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) 
mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1067.321345] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1072.708755] Lustre: DEBUG MARKER: == replay-single test 27: |X| open(O_CREAT), unlink two, replay, close two (test mds_cleanup_orphans) ========================================================== 09:06:16 (1713531976) [ 1073.942858] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1089.759832] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1090.125129] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1094.437506] Lustre: DEBUG MARKER: == replay-single test 28: open(O_CREAT), |X| unlink two, close one, replay, close one (test mds_cleanup_orphans) ========================================================== 09:06:37 (1713531997) [ 1095.471967] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1112.151205] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1112.715158] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1117.982974] Lustre: DEBUG MARKER: == replay-single test 29: open(O_CREAT), |X| unlink two, replay, close two (test mds_cleanup_orphans) ========================================================== 09:07:01 (1713532021) [ 1119.315905] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1136.489139] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1137.052376] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1141.942185] Lustre: DEBUG MARKER: == replay-single test 30: open(O_CREAT) two, unlink two, replay, close two (test mds_cleanup_orphans) ========================================================== 
09:07:25 (1713532045) [ 1142.963901] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1158.162026] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1158.487231] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1164.088404] Lustre: DEBUG MARKER: == replay-single test 31: open(O_CREAT) two, unlink one, |X| unlink one, close two (test mds_cleanup_orphans) ========================================================== 09:07:47 (1713532067) [ 1165.571443] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1182.696182] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1183.260977] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1188.696555] Lustre: DEBUG MARKER: == replay-single test 32: close() notices client eviction; close() after client eviction ========================================================== 09:08:12 (1713532092) [ 1190.432306] LustreError: 167-0: lustre-MDT0000-mdc-ffff8800a9887800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 1196.411654] Lustre: DEBUG MARKER: == replay-single test 33a: fid seq shouldn't be reused after abort recovery ========================================================== 09:08:19 (1713532099) [ 1197.562623] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1204.550279] LustreError: 167-0: lustre-MDT0000-mdc-ffff8800a9887800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
[ 1204.558085] LustreError: 15485:0:(file.c:5542:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -5 [ 1210.397860] Lustre: DEBUG MARKER: == replay-single test 33b: test fid seq allocation ======= 09:08:33 (1713532113) [ 1211.741672] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1215.568376] LustreError: 167-0: lustre-MDT0000-mdc-ffff8800a9887800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 1224.247689] Lustre: DEBUG MARKER: == replay-single test 34: abort recovery before client does replay (test mds_cleanup_orphans) ========================================================== 09:08:47 (1713532127) [ 1225.791123] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1230.589886] LustreError: 167-0: lustre-MDT0000-mdc-ffff8800a9887800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. [ 1237.639272] Lustre: DEBUG MARKER: == replay-single test 35: test recovery from llog for unlink op ========================================================== 09:09:01 (1713532141) [ 1251.288717] Lustre: DEBUG MARKER: SKIP: replay-single test_36 skipping ALWAYS excluded test 36 [ 1253.333759] Lustre: DEBUG MARKER: == replay-single test 37: abort recovery before client does replay (test mds_cleanup_orphans for directories) ========================================================== 09:09:16 (1713532156) [ 1254.833358] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1260.636930] LustreError: 167-0: lustre-MDT0000-mdc-ffff8800a9887800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
[ 1260.643803] LustreError: Skipped 1 previous similar message [ 1267.909289] Lustre: DEBUG MARKER: == replay-single test 38: test recovery from unlink llog (test llog_gen_rec) ========================================================== 09:09:31 (1713532171) [ 1278.149217] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1294.850852] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9887800: operation ldlm_enqueue to node 192.168.201.119@tcp failed: rc = -107 [ 1294.856566] LustreError: Skipped 8 previous similar messages [ 1295.725873] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1296.294795] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1307.116343] Lustre: DEBUG MARKER: == replay-single test 39: test recovery from unlink llog (test llog_gen_rec) ========================================================== 09:10:10 (1713532210) [ 1313.996991] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1334.247048] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1334.828463] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1346.234411] Lustre: DEBUG MARKER: == replay-single test 41: read from a valid osc while other oscs are invalid ========================================================== 09:10:49 (1713532249) [ 1351.592272] Lustre: DEBUG MARKER: == replay-single test 42: recovery after ost failure ===== 09:10:55 (1713532255) [ 1358.616095] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-OST0000 [ 1425.030855] Lustre: DEBUG MARKER: == replay-single test 43: mds osc import failure during recovery; don't LBUG ========================================================== 09:12:08 (1713532328) [ 1426.539564] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1440.912591] Lustre: 
lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete [ 1440.921017] Lustre: Skipped 22 previous similar messages [ 1440.924653] Lustre: MGC192.168.201.119@tcp: Connection restored to 192.168.201.119@tcp (at 192.168.201.119@tcp) [ 1440.929479] Lustre: Skipped 42 previous similar messages [ 1443.953700] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1444.526255] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1446.948655] Lustre: 1824:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713532334/real 1713532334] req@ffff8800a9bbc380 x1796767406084160/t0(0) o400->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713532350 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 1446.964006] Lustre: 1824:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 32 previous similar messages [ 1460.285202] Lustre: DEBUG MARKER: == replay-single test 44a: race in target handle connect ========================================================== 09:12:43 (1713532363) [ 1521.077036] Lustre: DEBUG MARKER: == replay-single test 44b: race in target handle connect ========================================================== 09:13:44 (1713532424) [ 1582.310785] LustreError: 30833:0:(lmv_obd.c:1447:lmv_statfs()) lustre-MDT0000-mdc-ffff8800a9887800: can't stat MDS #0: rc = -114 [ 1603.016763] LustreError: 30868:0:(lmv_obd.c:1447:lmv_statfs()) lustre-MDT0000-mdc-ffff8800a9887800: can't stat MDS #0: rc = -114 [ 1623.684343] LustreError: 30905:0:(lmv_obd.c:1447:lmv_statfs()) lustre-MDT0000-mdc-ffff8800a9887800: can't stat MDS #0: rc = -114 [ 1644.362637] LustreError: 30940:0:(lmv_obd.c:1447:lmv_statfs()) lustre-MDT0000-mdc-ffff8800a9887800: 
can't stat MDS #0: rc = -114 [ 1665.059793] LustreError: 30976:0:(lmv_obd.c:1447:lmv_statfs()) lustre-MDT0000-mdc-ffff8800a9887800: can't stat MDS #0: rc = -114 [ 1685.758159] LustreError: 31011:0:(lmv_obd.c:1447:lmv_statfs()) lustre-MDT0000-mdc-ffff8800a9887800: can't stat MDS #0: rc = -114 [ 1706.315085] LustreError: 31047:0:(lmv_obd.c:1447:lmv_statfs()) lustre-MDT0000-mdc-ffff8800a9887800: can't stat MDS #0: rc = -114 [ 1747.536624] LustreError: 31118:0:(lmv_obd.c:1447:lmv_statfs()) lustre-MDT0000-mdc-ffff8800a9887800: can't stat MDS #0: rc = -114 [ 1747.541928] LustreError: 31118:0:(lmv_obd.c:1447:lmv_statfs()) Skipped 1 previous similar message [ 1751.975005] Lustre: DEBUG MARKER: == replay-single test 44c: race in target handle connect ========================================================== 09:17:35 (1713532655) [ 1753.446369] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1757.473027] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail [ 1757.479052] LustreError: Skipped 18 previous similar messages [ 1757.483380] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc0765484122009 to 0xbcc0765484122e10 [ 1757.491781] Lustre: Skipped 18 previous similar messages [ 1758.524452] LustreError: 167-0: lustre-MDT0000-mdc-ffff8800a9887800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
[ 1777.640800] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1778.038585] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1782.834157] Lustre: DEBUG MARKER: == replay-single test 45: Handle failed close ============ 09:18:06 (1713532686) [ 1782.974569] Lustre: setting import lustre-MDT0000_UUID INACTIVE by administrator request [ 1782.979316] LustreError: 1397:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff8800a9887800: inode [0x20001a9e1:0x1:0x0] mdc close failed: rc = -108 [ 1787.344240] Lustre: DEBUG MARKER: == replay-single test 46: Don't leak file handle after open resend (3325) ========================================================== 09:18:10 (1713532690) [ 1819.829887] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 1820.231772] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 1826.395608] Lustre: DEBUG MARKER: == replay-single test 47: MDS->OSC failure during precreate cleanup (2824) ========================================================== 09:18:49 (1713532729) [ 1843.193758] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 1843.571584] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 1911.018804] Lustre: DEBUG MARKER: == replay-single test 48: MDS->OSC failure during precreate cleanup (2824) ========================================================== 09:20:14 (1713532814) [ 1912.495068] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 1927.867785] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a8ba8a80 x1796767406151104/t236223201383(236223201383) 
o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/608 e 0 to 0 dl 1713532847 ref 2 fl Interpret:RQU/204/0 rc 301/301 job:'createmany.0' uid:0 gid:0 [ 1927.880788] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 17 previous similar messages [ 1992.333834] Lustre: DEBUG MARKER: == replay-single test 50: Double OSC recovery, don't LASSERT (3812) ========================================================== 09:21:35 (1713532895) [ 2002.275013] Lustre: DEBUG MARKER: == replay-single test 52: time out lock replay (3764) ==== 09:21:45 (1713532905) [ 2034.688189] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2035.218321] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2040.955850] Lustre: DEBUG MARKER: == replay-single test 53a: |X| close request while two MDC requests in flight ========================================================== 09:22:24 (1713532944) [ 2043.831723] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2058.065145] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete [ 2058.067754] Lustre: MGC192.168.201.119@tcp: Connection restored to 192.168.201.119@tcp (at 192.168.201.119@tcp) [ 2058.067756] Lustre: Skipped 26 previous similar messages [ 2058.082886] Lustre: Skipped 21 previous similar messages [ 2061.069875] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2061.636901] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2064.115622] Lustre: 1826:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713532951/real 1713532951] req@ffff8800a8759500 
x1796767406183744/t0(0) o400->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 224/224 e 0 to 1 dl 1713532967 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 [ 2064.129990] Lustre: 1826:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 9 previous similar messages [ 2067.336405] Lustre: DEBUG MARKER: == replay-single test 53b: |X| open request while two MDC requests in flight ========================================================== 09:22:50 (1713532970) [ 2070.073423] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2085.237652] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2085.622570] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2091.316613] Lustre: DEBUG MARKER: == replay-single test 53c: |X| open request and close request while two MDC requests in flight ========================================================== 09:23:14 (1713532994) [ 2094.092479] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2116.090894] Lustre: DEBUG MARKER: == replay-single test 53d: close reply while two MDC requests in flight ========================================================== 09:23:39 (1713533019) [ 2133.493058] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2133.892399] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2138.385894] Lustre: DEBUG MARKER: == replay-single test 53e: |X| open reply while two MDC requests in flight ========================================================== 09:24:01 (1713533041) [ 2141.107553] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2157.089338] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2157.641927] Lustre: 
DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2163.011340] Lustre: DEBUG MARKER: == replay-single test 53f: |X| open reply and close reply while two MDC requests in flight ========================================================== 09:24:26 (1713533066) [ 2165.757432] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2185.631988] Lustre: DEBUG MARKER: == replay-single test 53g: |X| drop open reply and close request while close and open are both in flight ========================================================== 09:24:49 (1713533089) [ 2188.562408] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2207.779343] Lustre: DEBUG MARKER: == replay-single test 53h: open request and close reply while two MDC requests in flight ========================================================== 09:25:11 (1713533111) [ 2211.449271] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2238.331131] Lustre: DEBUG MARKER: == replay-single test 55: let MDS_CHECK_RESENT return the original return code instead of 0 ========================================================== 09:25:41 (1713533141) [ 2258.178891] Lustre: DEBUG MARKER: == replay-single test 56: don't replay a symlink open request (3440) ========================================================== 09:26:01 (1713533161) [ 2259.306993] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2275.558792] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2276.062005] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2291.016408] Lustre: DEBUG MARKER: == replay-single test 57: test recovery from llog for setattr op ========================================================== 09:26:34 (1713533194) [ 2292.533314] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2308.375308] LustreError: 11-0: 
lustre-MDT0000-mdc-ffff8800a9887800: operation ldlm_enqueue to node 192.168.201.119@tcp failed: rc = -107 [ 2308.379221] LustreError: Skipped 1 previous similar message [ 2308.957928] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2309.297533] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2317.975387] Lustre: DEBUG MARKER: == replay-single test 58a: test recovery from llog for setattr op (test llog_gen_rec) ========================================================== 09:27:01 (1713533221) [ 2327.983404] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2344.452856] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2344.986107] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2367.867100] Lustre: DEBUG MARKER: == replay-single test 58b: test replay of setxattr op ==== 09:27:51 (1713533271) [ 2367.963113] Lustre: Mounted lustre-client [ 2369.219836] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2382.992388] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail [ 2382.996717] LustreError: Skipped 15 previous similar messages [ 2382.999975] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc076548413f104 to 0xbcc0765484169ff2 [ 2383.006388] Lustre: Skipped 15 previous similar messages [ 2385.318752] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2385.824726] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2387.363641] Lustre: Unmounted lustre-client [ 2388.123280] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing 
wait_import_state_mount FULL mgc.*.mgs_server_uuid [ 2388.614558] Lustre: DEBUG MARKER: mgc.*.mgs_server_uuid in FULL state after 0 sec [ 2392.268330] Lustre: DEBUG MARKER: == replay-single test 58c: resend/reconstruct setxattr op ========================================================== 09:28:15 (1713533295) [ 2397.472523] Lustre: Mounted lustre-client [ 2431.114574] Lustre: Unmounted lustre-client [ 2433.835857] Lustre: DEBUG MARKER: SKIP: replay-single test_59 skipping ALWAYS excluded test 59 [ 2435.780682] Lustre: DEBUG MARKER: == replay-single test 60: test llog post recovery init vs llog unlink ========================================================== 09:28:59 (1713533339) [ 2439.311650] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2456.659305] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2457.258199] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2462.740842] Lustre: DEBUG MARKER: == replay-single test 61a: test race llog recovery vs llog cleanup ========================================================== 09:29:26 (1713533366) [ 2467.504748] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-OST0000 [ 2510.671646] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2511.098656] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2546.711274] Lustre: DEBUG MARKER: == replay-single test 61b: test race mds llog sync vs llog cleanup ========================================================== 09:30:50 (1713533450) [ 2587.552978] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2587.954909] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 
2592.467333] Lustre: DEBUG MARKER: == replay-single test 61c: test race mds llog sync vs llog cleanup ========================================================== 09:31:36 (1713533496) [ 2618.849053] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 2619.350192] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 2624.903941] Lustre: DEBUG MARKER: == replay-single test 61d: error in llog_setup should cleanup the llog context correctly ========================================================== 09:32:08 (1713533528) [ 2635.178610] Lustre: DEBUG MARKER: == replay-single test 62: don't mis-drop resent replay === 09:32:18 (1713533538) [ 2636.459074] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 2667.025635] Lustre: 1823:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713533554/real 1713533554] req@ffff8800a8e9e300 x1796767408112896/t313532612612(313532612612) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/664 e 0 to 1 dl 1713533570 ref 2 fl Rpc:XQr/204/ffffffff rc 0/-1 job:'createmany.0' uid:0 gid:0 [ 2667.036367] Lustre: 1823:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 44 previous similar messages [ 2667.039547] LustreError: 1823:0:(client.c:3243:ptlrpc_replay_interpret()) @@@ request replay timed out req@ffff8800a8e9e300 x1796767408112896/t313532612612(313532612612) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/664 e 0 to 1 dl 1713533570 ref 2 fl Interpret:EXQU/204/ffffffff rc -110/-1 job:'createmany.0' uid:0 gid:0 [ 2667.068853] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a8e9e300 x1796767408112896/t313532612612(313532612612) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/608 e 0 to 0 dl 1713533586 ref 2 fl 
Interpret:RQU/204/0 rc 301/301 job:'createmany.0' uid:0 gid:0 [ 2667.077066] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 19 previous similar messages [ 2667.171611] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection restored to 192.168.201.119@tcp (at 192.168.201.119@tcp) [ 2667.174713] Lustre: Skipped 37 previous similar messages [ 2667.728109] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 2668.086931] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 2673.324955] Lustre: DEBUG MARKER: == replay-single test 65a: AT: verify early replies ====== 09:32:56 (1713533576) [ 2720.394937] Lustre: DEBUG MARKER: == replay-single test 65b: AT: verify early replies on packed reply / bulk ========================================================== 09:33:43 (1713533623) [ 2759.212648] Lustre: DEBUG MARKER: == replay-single test 66a: AT: verify MDT service time adjusts with no early replies ========================================================== 09:34:22 (1713533662) [ 2811.184966] Lustre: DEBUG MARKER: == replay-single test 66b: AT: verify net latency adjusts ========================================================== 09:35:14 (1713533714) [ 2833.173771] LustreError: 9795:0:(client.c:1554:after_reply()) cfs_fail_timeout id 50c sleeping for 10000ms [ 2843.178620] LustreError: 9795:0:(client.c:1554:after_reply()) cfs_fail_timeout id 50c awake [ 2843.187390] LustreError: 9795:0:(client.c:1554:after_reply()) cfs_fail_timeout id 50c sleeping for 10000ms [ 2853.192630] LustreError: 9795:0:(client.c:1554:after_reply()) cfs_fail_timeout id 50c awake [ 2853.199433] LustreError: 9795:0:(client.c:1554:after_reply()) cfs_fail_timeout id 50c sleeping for 10000ms [ 2863.203681] LustreError: 9795:0:(client.c:1554:after_reply()) cfs_fail_timeout id 50c awake [ 2863.207824] LustreError: 9795:0:(client.c:1554:after_reply()) 
cfs_fail_timeout id 50c sleeping for 10000ms [ 2873.209641] LustreError: 9795:0:(client.c:1554:after_reply()) cfs_fail_timeout id 50c awake [ 2873.216745] LustreError: 1824:0:(client.c:1554:after_reply()) cfs_fail_timeout id 50c sleeping for 10000ms [ 2883.290684] LustreError: 1824:0:(client.c:1554:after_reply()) cfs_fail_timeout id 50c awake [ 2883.295775] LustreError: 9795:0:(client.c:1554:after_reply()) cfs_fail_timeout id 50c sleeping for 10000ms [ 2893.298614] LustreError: 9795:0:(client.c:1554:after_reply()) cfs_fail_timeout id 50c awake [ 2897.118143] Lustre: DEBUG MARKER: == replay-single test 67a: AT: verify slow request processing doesn't induce reconnects ========================================================== 09:36:40 (1713533800) [ 2972.779375] Lustre: DEBUG MARKER: == replay-single test 67b: AT: verify instant slowdown doesn't induce reconnects ========================================================== 09:37:56 (1713533876) [ 2997.094560] Lustre: DEBUG MARKER: phase 2 [ 3001.839438] Lustre: DEBUG MARKER: == replay-single test 68: AT: verify slowing locks ======= 09:38:25 (1713533905) [ 3025.002757] LustreError: 31445:0:(ldlm_request.c:1412:ldlm_cli_cancel_req()) cfs_fail_timeout id 312 sleeping for 19000ms [ 3044.008651] LustreError: 31445:0:(ldlm_request.c:1412:ldlm_cli_cancel_req()) cfs_fail_timeout id 312 awake [ 3073.847408] Lustre: DEBUG MARKER: == replay-single test 70a: check multi client t-f ======== 09:39:37 (1713533977) [ 3074.404707] Lustre: DEBUG MARKER: SKIP: replay-single test_70a Need two or more clients, have 1 [ 3077.110849] Lustre: DEBUG MARKER: == replay-single test 70b: dbench 1mdts recovery; 1 clients ========================================================== 09:39:40 (1713533980) [ 3078.875926] Lustre: DEBUG MARKER: Started rundbench load pid=14013 ... 
[ 3081.364188] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3082.951022] Lustre: DEBUG MARKER: test_70b fail mds1 1 times [ 3083.676145] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9887800: operation mds_sync to node 192.168.201.119@tcp failed: rc = -19 [ 3083.682747] LustreError: Skipped 2 previous similar messages [ 3083.686898] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete [ 3083.697209] Lustre: Skipped 23 previous similar messages [ 3096.736638] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail [ 3096.744243] LustreError: Skipped 5 previous similar messages [ 3096.748967] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc076548417e127 to 0xbcc0765484186dc8 [ 3096.756126] Lustre: Skipped 5 previous similar messages [ 3106.775041] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a6f3c380 x1796767408214336/t317827580297(317827580297) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 576/608 e 0 to 0 dl 1713534026 ref 2 fl Interpret:RPQU/204/0 rc 301/301 job:'dbench.0' uid:0 gid:0 [ 3106.788159] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 24 previous similar messages [ 3108.091183] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3108.685386] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3112.423897] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3113.980234] Lustre: DEBUG MARKER: test_70b fail mds1 2 times [ 3131.793257] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a6f3c380 
x1796767408214336/t317827580297(317827580297) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 576/608 e 0 to 0 dl 1713534051 ref 2 fl Interpret:RPQU/204/0 rc 301/301 job:'dbench.0' uid:0 gid:0 [ 3131.812227] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 74 previous similar messages [ 3133.117092] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3133.681136] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3137.480665] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3139.061810] Lustre: DEBUG MARKER: test_70b fail mds1 3 times [ 3156.817378] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a6f3c380 x1796767408214336/t317827580297(317827580297) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 576/608 e 0 to 0 dl 1713534076 ref 2 fl Interpret:RPQU/204/0 rc 301/301 job:'dbench.0' uid:0 gid:0 [ 3156.830436] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 91 previous similar messages [ 3157.968908] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3158.534117] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3162.276134] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3163.821268] Lustre: DEBUG MARKER: test_70b fail mds1 4 times [ 3183.199368] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3183.752204] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3187.542479] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3189.116735] Lustre: DEBUG MARKER: test_70b fail mds1 5 times [ 3206.865808] LustreError: 
1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a6f3c380 x1796767408214336/t317827580297(317827580297) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 576/608 e 0 to 0 dl 1713534126 ref 2 fl Interpret:RPQU/204/0 rc 301/301 job:'dbench.0' uid:0 gid:0 [ 3206.879932] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 212 previous similar messages [ 3208.179453] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3208.771024] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3212.593995] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3214.178128] Lustre: DEBUG MARKER: test_70b fail mds1 6 times [ 3233.326094] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3233.907981] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3237.805841] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3239.375222] Lustre: DEBUG MARKER: test_70b fail mds1 7 times [ 3258.361874] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3258.962762] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3262.785625] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3264.361605] Lustre: DEBUG MARKER: test_70b fail mds1 8 times [ 3281.932087] Lustre: MGC192.168.201.119@tcp: Connection restored to 192.168.201.119@tcp (at 192.168.201.119@tcp) [ 3281.937512] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a6f3c380 x1796767408214336/t317827580297(317827580297) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 576/608 e 0 to 0 dl 1713534201 ref 2 
fl Interpret:RPQU/204/0 rc 301/301 job:'dbench.0' uid:0 gid:0 [ 3281.937521] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 346 previous similar messages [ 3281.958906] Lustre: Skipped 14 previous similar messages [ 3283.276851] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3283.848642] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3287.678586] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3289.266420] Lustre: DEBUG MARKER: test_70b fail mds1 9 times [ 3308.403130] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3308.996233] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3312.780306] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3314.318721] Lustre: DEBUG MARKER: test_70b fail mds1 10 times [ 3333.374219] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3333.947619] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3337.849137] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3339.388940] Lustre: DEBUG MARKER: test_70b fail mds1 11 times [ 3358.471125] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3359.056257] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3363.000680] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3364.556488] Lustre: DEBUG MARKER: test_70b fail mds1 12 times [ 3383.394071] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3383.953753] Lustre: DEBUG MARKER: 
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3443.433183] Lustre: DEBUG MARKER: == replay-single test 70c: tar 1mdts recovery ============ 09:45:46 (1713534346) [ 3565.016547] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3575.542123] Lustre: DEBUG MARKER: test_70c fail mds1 1 times [ 3592.371270] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a7992680 x1796767417702912/t369367213552(369367213552) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 576/608 e 0 to 0 dl 1713534512 ref 2 fl Interpret:RQU/204/0 rc 301/301 job:'tar.0' uid:0 gid:0 [ 3592.380802] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 610 previous similar messages [ 3596.793490] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3597.332922] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3720.129236] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 3730.679284] Lustre: DEBUG MARKER: test_70c fail mds1 2 times [ 3731.397911] LustreError: 11-0: lustre-MDT0000-mdc-ffff8800a9887800: operation mds_reint to node 192.168.201.119@tcp failed: rc = -19 [ 3731.403537] LustreError: Skipped 12 previous similar messages [ 3731.406263] Lustre: lustre-MDT0000-mdc-ffff8800a9887800: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete [ 3731.414631] Lustre: Skipped 12 previous similar messages [ 3747.576865] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail [ 3747.583000] LustreError: Skipped 12 previous similar messages [ 3747.587034] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc07654842e67fa to 0xbcc07654843bdd89 
[ 3747.592853] Lustre: Skipped 12 previous similar messages [ 3752.217983] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 3752.801352] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 3817.479452] Lustre: DEBUG MARKER: == replay-single test 70d: mkdir/rmdir striped dir 1mdts recovery ========================================================== 09:52:00 (1713534720) [ 3818.014335] Lustre: DEBUG MARKER: SKIP: replay-single test_70d needs >= 2 MDTs [ 3820.808298] Lustre: DEBUG MARKER: == replay-single test 70e: rename cross-MDT with random fails ========================================================== 09:52:04 (1713534724) [ 3821.383371] Lustre: DEBUG MARKER: SKIP: replay-single test_70e needs >= 2 MDTs [ 3824.218237] Lustre: DEBUG MARKER: == replay-single test 70f: OSS O_DIRECT recovery with 1 clients ========================================================== 09:52:07 (1713534727) [ 3828.962458] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-OST0000 [ 3830.517981] Lustre: DEBUG MARKER: test_70f failing OST 1 times [ 3844.734682] Lustre: 1827:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1713534732/real 1713534732] req@ffff8800aa185880 x1796767426766144/t0(0) o4->lustre-OST0000-osc-ffff8800a9887800@192.168.201.119@tcp:6/4 lens 488/448 e 0 to 1 dl 1713534748 ref 2 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'dd.0' uid:0 gid:0 [ 3848.244257] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3848.783012] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3856.585685] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-OST0000 [ 3858.166624] Lustre: DEBUG MARKER: test_70f failing OST 2 times [ 3875.845341] Lustre: DEBUG MARKER: oleg119-client.virtnet: 
executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3876.388686] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3884.147928] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-OST0000 [ 3885.723157] Lustre: DEBUG MARKER: test_70f failing OST 3 times [ 3900.487128] Lustre: lustre-OST0000-osc-ffff8800a9887800: Connection restored to 192.168.201.119@tcp (at 192.168.201.119@tcp) [ 3900.492113] Lustre: Skipped 15 previous similar messages [ 3903.206039] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3903.657065] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3911.458364] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-OST0000 [ 3913.009292] Lustre: DEBUG MARKER: test_70f failing OST 4 times [ 3930.592251] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3931.110129] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3938.828175] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-OST0000 [ 3940.380175] Lustre: DEBUG MARKER: test_70f failing OST 5 times [ 3957.989170] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 3958.429715] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 3966.748759] Lustre: DEBUG MARKER: == replay-single test 71a: mkdir/rmdir striped dir with 2 mdts recovery ========================================================== 09:54:30 (1713534870) [ 3967.299926] Lustre: DEBUG MARKER: SKIP: replay-single test_71a needs >= 2 MDTs [ 3970.090876] Lustre: DEBUG MARKER: == replay-single test 73a: open(O_CREAT), unlink, replay, 
reconnect before open replay, close ========================================================== 09:54:33 (1713534873) [ 3971.646655] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 4002.982743] LustreError: 1823:0:(client.c:3243:ptlrpc_replay_interpret()) @@@ request replay timed out req@ffff8800a8481880 x1796767427261376/t377957133607(377957133607) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/664 e 0 to 1 dl 1713534906 ref 2 fl Interpret:EXPQU/204/ffffffff rc -110/-1 job:'multiop.0' uid:0 gid:0 [ 4003.020297] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0 req@ffff8800a8481880 x1796767427261376/t377957133607(377957133607) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/608 e 0 to 0 dl 1713534922 ref 2 fl Interpret:RPQU/204/0 rc 301/301 job:'multiop.0' uid:0 gid:0 [ 4003.041746] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 466 previous similar messages [ 4003.956636] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4004.526041] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4010.456680] Lustre: DEBUG MARKER: == replay-single test 73b: open(O_CREAT), unlink, replay, reconnect at open_replay reply, close ========================================================== 09:55:13 (1713534913) [ 4011.907667] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 4043.046724] LustreError: 1823:0:(client.c:3243:ptlrpc_replay_interpret()) @@@ request replay timed out req@ffff8800a9a55c00 x1796767427266112/t382252089347(382252089347) o101->lustre-MDT0000-mdc-ffff8800a9887800@192.168.201.119@tcp:12/10 lens 592/664 e 0 to 1 dl 1713534946 ref 2 fl Interpret:EXPQU/204/ffffffff rc -110/-1 job:'multiop.0' uid:0 gid:0 [ 4043.913817] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount 
(FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4044.464622] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4050.173101] Lustre: DEBUG MARKER: == replay-single test 74: Ensure applications don't fail waiting for OST recovery ========================================================== 09:55:53 (1713534953) [ 4050.371582] Lustre: Unmounted lustre-client [ 4067.318100] Lustre: Mounted lustre-client [ 4078.201833] Lustre: DEBUG MARKER: == replay-single test 80a: DNE: create remote dir, drop update rep from MDT0, fail MDT0 ========================================================== 09:56:21 (1713534981) [ 4078.745141] Lustre: DEBUG MARKER: SKIP: replay-single test_80a needs >= 2 MDTs [ 4081.615326] Lustre: DEBUG MARKER: == replay-single test 80b: DNE: create remote dir, drop update rep from MDT0, fail MDT1 ========================================================== 09:56:24 (1713534984) [ 4082.188072] Lustre: DEBUG MARKER: SKIP: replay-single test_80b needs >= 2 MDTs [ 4085.000911] Lustre: DEBUG MARKER: == replay-single test 80c: DNE: create remote dir, drop update rep from MDT1, fail MDT[0,1] ========================================================== 09:56:28 (1713534988) [ 4085.569482] Lustre: DEBUG MARKER: SKIP: replay-single test_80c needs >= 2 MDTs [ 4088.388954] Lustre: DEBUG MARKER: == replay-single test 80d: DNE: create remote dir, drop update rep from MDT1, fail 2 MDTs ========================================================== 09:56:31 (1713534991) [ 4088.948575] Lustre: DEBUG MARKER: SKIP: replay-single test_80d needs >= 2 MDTs [ 4091.744188] Lustre: DEBUG MARKER: == replay-single test 80e: DNE: create remote dir, drop MDT1 rep, fail MDT0 ========================================================== 09:56:35 (1713534995) [ 4092.308068] Lustre: DEBUG MARKER: SKIP: replay-single test_80e needs >= 2 MDTs [ 4095.118180] Lustre: DEBUG MARKER: == replay-single test 80f: DNE: create remote dir, drop MDT1 rep, fail MDT1 
========================================================== 09:56:38 (1713534998) [ 4095.689992] Lustre: DEBUG MARKER: SKIP: replay-single test_80f needs >= 2 MDTs [ 4098.519150] Lustre: DEBUG MARKER: == replay-single test 80g: DNE: create remote dir, drop MDT1 rep, fail MDT0, then MDT1 ========================================================== 09:56:41 (1713535001) [ 4099.094202] Lustre: DEBUG MARKER: SKIP: replay-single test_80g needs >= 2 MDTs [ 4101.890034] Lustre: DEBUG MARKER: == replay-single test 80h: DNE: create remote dir, drop MDT1 rep, fail 2 MDTs ========================================================== 09:56:45 (1713535005) [ 4102.448846] Lustre: DEBUG MARKER: SKIP: replay-single test_80h needs >= 2 MDTs [ 4105.333123] Lustre: DEBUG MARKER: == replay-single test 81a: DNE: unlink remote dir, drop MDT0 update rep, fail MDT1 ========================================================== 09:56:48 (1713535008) [ 4105.891138] Lustre: DEBUG MARKER: SKIP: replay-single test_81a needs >= 2 MDTs [ 4108.724718] Lustre: DEBUG MARKER: == replay-single test 81b: DNE: unlink remote dir, drop MDT0 update reply, fail MDT0 ========================================================== 09:56:52 (1713535012) [ 4109.302493] Lustre: DEBUG MARKER: SKIP: replay-single test_81b needs >= 2 MDTs [ 4112.129182] Lustre: DEBUG MARKER: == replay-single test 81c: DNE: unlink remote dir, drop MDT0 update reply, fail MDT0,MDT1 ========================================================== 09:56:55 (1713535015) [ 4112.672280] Lustre: DEBUG MARKER: SKIP: replay-single test_81c needs >= 2 MDTs [ 4115.500683] Lustre: DEBUG MARKER: == replay-single test 81d: DNE: unlink remote dir, drop MDT0 update reply, fail 2 MDTs ========================================================== 09:56:58 (1713535018) [ 4116.064321] Lustre: DEBUG MARKER: SKIP: replay-single test_81d needs >= 2 MDTs [ 4118.864309] Lustre: DEBUG MARKER: == replay-single test 81e: DNE: unlink remote dir, drop MDT1 req reply, fail MDT0 
========================================================== 09:57:02 (1713535022) [ 4119.445585] Lustre: DEBUG MARKER: SKIP: replay-single test_81e needs >= 2 MDTs [ 4122.327279] Lustre: DEBUG MARKER: == replay-single test 81f: DNE: unlink remote dir, drop MDT1 req reply, fail MDT1 ========================================================== 09:57:05 (1713535025) [ 4122.908230] Lustre: DEBUG MARKER: SKIP: replay-single test_81f needs >= 2 MDTs [ 4125.801580] Lustre: DEBUG MARKER: == replay-single test 81g: DNE: unlink remote dir, drop req reply, fail M0, then M1 ========================================================== 09:57:09 (1713535029) [ 4126.369289] Lustre: DEBUG MARKER: SKIP: replay-single test_81g needs >= 2 MDTs [ 4129.224915] Lustre: DEBUG MARKER: == replay-single test 81h: DNE: unlink remote dir, drop request reply, fail 2 MDTs ========================================================== 09:57:12 (1713535032) [ 4129.795665] Lustre: DEBUG MARKER: SKIP: replay-single test_81h needs >= 2 MDTs [ 4132.637399] Lustre: DEBUG MARKER: == replay-single test 84a: stale open during export disconnect ========================================================== 09:57:16 (1713535036) [ 4134.649843] LustreError: 167-0: lustre-MDT0000-mdc-ffff8800a84f7800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
[ 4134.657165] LustreError: Skipped 1 previous similar message [ 4140.488640] Lustre: DEBUG MARKER: == replay-single test 85a: check the cancellation of unused locks during recovery(IBITS) ========================================================== 09:57:23 (1713535043) [ 4159.841186] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4160.357730] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4166.191158] Lustre: DEBUG MARKER: == replay-single test 85b: check the cancellation of unused locks during recovery(EXTENT) ========================================================== 09:57:49 (1713535069) [ 4188.466147] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4189.071505] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4195.214015] Lustre: DEBUG MARKER: == replay-single test 86: umount server after clear nid_stats should not hit LBUG ========================================================== 09:58:18 (1713535098) [ 4195.618161] Lustre: Unmounted lustre-client [ 4201.249317] Lustre: Mounted lustre-client [ 4205.779839] Lustre: DEBUG MARKER: == replay-single test 87a: write replay ================== 09:58:29 (1713535109) [ 4207.502952] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-OST0000 [ 4225.391717] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4225.928870] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4231.935961] Lustre: DEBUG MARKER: == replay-single test 87b: write replay with changed data (checksum resend) ========================================================== 09:58:55 (1713535135) [ 4233.688284] Lustre: DEBUG MARKER: local REPLAY 
BARRIER on lustre-OST0000 [ 4252.707796] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4253.113565] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4257.172727] Lustre: DEBUG MARKER: == replay-single test 88: MDS should not assign same objid to different files ========================================================== 09:59:20 (1713535160) [ 4258.205543] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-OST0000 [ 4259.176067] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 4307.035954] Lustre: DEBUG MARKER: == replay-single test 89: no disk space leak on late ost connection ========================================================== 10:00:10 (1713535210) [ 4330.869609] Lustre: Unmounted lustre-client [ 4333.859557] Lustre: Mounted lustre-client [ 4333.864975] LustreError: 11-0: lustre-OST0000-osc-ffff8800aa159000: operation ost_connect to node 192.168.201.119@tcp failed: rc = -16 [ 4333.867272] LustreError: Skipped 3 previous similar messages [ 4405.050364] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 68 sec [ 4413.614211] Lustre: DEBUG MARKER: free_before: 7517184 free_after: 7517184 [ 4416.355213] Lustre: DEBUG MARKER: == replay-single test 90: lfs find identifies the missing striped file segments ========================================================== 10:01:59 (1713535319) [ 4418.992030] Lustre: lustre-OST0001-osc-ffff8800aa159000: Connection to lustre-OST0001 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete [ 4418.995540] Lustre: Skipped 16 previous similar messages [ 4435.341032] Lustre: DEBUG MARKER: == replay-single test 93a: replay + reconnect ============ 10:02:18 (1713535338) [ 4466.046761] Lustre: 1823:0:(client.c:2340:ptlrpc_expire_one_request()) @@@ Request sent has timed out 
for slow reply: [sent 1713535353/real 1713535353] req@ffff8800a6d97480 x1796767427557376/t0(0) o400->lustre-OST0000-osc-ffff8800aa159000@192.168.201.119@tcp:28/4 lens 224/224 e 0 to 1 dl 1713535369 ref 1 fl Rpc:XQr/2c0/ffffffff rc 0/-1 job:'ptlrpcd_rcv.0' uid:0 gid:0 [ 4466.063872] Lustre: 1823:0:(client.c:2340:ptlrpc_expire_one_request()) Skipped 16 previous similar messages [ 4490.121259] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid [ 4490.482465] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec [ 4494.603474] Lustre: DEBUG MARKER: == replay-single test 93b: replay + reconnect on mds ===== 10:03:18 (1713535398) [ 4512.144193] LustreError: 166-1: MGC192.168.201.119@tcp: Connection to MGS (at 192.168.201.119@tcp) was lost; in progress operations using this service will fail [ 4512.147035] LustreError: Skipped 5 previous similar messages [ 4512.149396] Lustre: Evicted from MGS (at 192.168.201.119@tcp) after server handle changed from 0xbcc0765484439867 to 0xbcc076548443a373 [ 4512.152079] Lustre: Skipped 5 previous similar messages [ 4512.153408] Lustre: MGC192.168.201.119@tcp: Connection restored to 192.168.201.119@tcp (at 192.168.201.119@tcp) [ 4512.155471] Lustre: Skipped 17 previous similar messages [ 4590.388902] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid [ 4590.983440] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec [ 4597.004712] Lustre: DEBUG MARKER: == replay-single test 100a: DNE: create striped dir, drop update rep from MDT1, fail MDT1 ========================================================== 10:05:00 (1713535500) [ 4597.572685] Lustre: DEBUG MARKER: SKIP: replay-single test_100a needs >= 2 MDTs [ 4600.525707] Lustre: DEBUG MARKER: == replay-single test 100b: DNE: create striped 
dir, fail MDT0 ========================================================== 10:05:03 (1713535503) [ 4601.063090] Lustre: DEBUG MARKER: SKIP: replay-single test_100b needs >= 2 MDTs [ 4603.968508] Lustre: DEBUG MARKER: == replay-single test 100c: DNE: create striped dir, abort_recov_mdt mds2 ========================================================== 10:05:07 (1713535507) [ 4604.530068] Lustre: DEBUG MARKER: SKIP: replay-single test_100c needs >= 2 MDTs [ 4607.492277] Lustre: DEBUG MARKER: == replay-single test 100d: DNE: cancel update logs upon recovery abort ========================================================== 10:05:10 (1713535510) [ 4608.030679] Lustre: DEBUG MARKER: SKIP: replay-single test_100d needs > 1 MDTs [ 4610.802057] Lustre: DEBUG MARKER: == replay-single test 100e: DNE: create striped dir on MDT0 and MDT1, fail MDT0, MDT1 ========================================================== 10:05:14 (1713535514) [ 4611.362365] Lustre: DEBUG MARKER: SKIP: replay-single test_100e needs >= 2 MDTs [ 4614.340681] Lustre: DEBUG MARKER: == replay-single test 101: Shouldn't reassign precreated objs to other files after recovery ========================================================== 10:05:17 (1713535517) [ 4615.862583] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000 [ 4628.367950] LustreError: 167-0: lustre-MDT0000-mdc-ffff8800aa159000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
[ 4654.919462] Lustre: DEBUG MARKER: == replay-single test 102a: check resend (request lost) with multiple modify RPCs in flight ========================================================== 10:05:58 (1713535558)
[ 4675.522437] Lustre: DEBUG MARKER: == replay-single test 102b: check resend (reply lost) with multiple modify RPCs in flight ========================================================== 10:06:19 (1713535579)
[ 4695.508188] Lustre: DEBUG MARKER: == replay-single test 102c: check replay w/o reconstruction with multiple mod RPCs in flight ========================================================== 10:06:38 (1713535598)
[ 4696.742015] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 4714.654830] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4715.194180] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4720.837234] Lustre: DEBUG MARKER: == replay-single test 102d: check replay
[ 4738.735007] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4739.297755] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4745.231490] Lustre: DEBUG MARKER: == replay-single test 103: Check otr_next_id overflow ==== 10:07:28 (1713535648)
[ 4763.482913] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4764.015795] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4769.928951] Lustre: DEBUG MARKER: == replay-single test 110a: DNE: create striped dir, fail MDT1 ========================================================== 10:07:53 (1713535673)
[ 4770.476717] Lustre: DEBUG MARKER: SKIP: replay-single test_110a needs >= 2 MDTs
[ 4773.437696] Lustre: DEBUG MARKER: == replay-single test 110b: DNE: create striped dir, fail MDT1 and client ========================================================== 10:07:56 (1713535676)
[ 4774.002378] Lustre: DEBUG MARKER: SKIP: replay-single test_110b needs >= 2 MDTs
[ 4776.901793] Lustre: DEBUG MARKER: == replay-single test 110c: DNE: create striped dir, fail MDT2 ========================================================== 10:08:00 (1713535680)
[ 4777.484204] Lustre: DEBUG MARKER: SKIP: replay-single test_110c needs >= 2 MDTs
[ 4780.330404] Lustre: DEBUG MARKER: == replay-single test 110d: DNE: create striped dir, fail MDT2 and client ========================================================== 10:08:03 (1713535683)
[ 4780.917682] Lustre: DEBUG MARKER: SKIP: replay-single test_110d needs >= 2 MDTs
[ 4783.856881] Lustre: DEBUG MARKER: == replay-single test 110e: DNE: create striped dir, uncommit on MDT2, fail client/MDT1/MDT2 ========================================================== 10:08:07 (1713535687)
[ 4784.390101] Lustre: DEBUG MARKER: SKIP: replay-single test_110e needs >= 2 MDTs
[ 4785.867116] Lustre: DEBUG MARKER: SKIP: replay-single test_110f skipping excluded test 110f
[ 4787.876557] Lustre: DEBUG MARKER: == replay-single test 110g: DNE: create striped dir, uncommit on MDT1, fail client/MDT1/MDT2 ========================================================== 10:08:11 (1713535691)
[ 4788.443562] Lustre: DEBUG MARKER: SKIP: replay-single test_110g needs >= 2 MDTs
[ 4791.320911] Lustre: DEBUG MARKER: == replay-single test 111a: DNE: unlink striped dir, fail MDT1 ========================================================== 10:08:14 (1713535694)
[ 4791.912070] Lustre: DEBUG MARKER: SKIP: replay-single test_111a needs >= 2 MDTs
[ 4794.461622] Lustre: DEBUG MARKER: == replay-single test 111b: DNE: unlink striped dir, fail MDT2 ========================================================== 10:08:17 (1713535697)
[ 4795.011692] Lustre: DEBUG MARKER: SKIP: replay-single test_111b needs >= 2 MDTs
[ 4797.735715] Lustre: DEBUG MARKER: == replay-single test 111c: DNE: unlink striped dir, uncommit on MDT1, fail client/MDT1/MDT2 ========================================================== 10:08:21 (1713535701)
[ 4798.252358] Lustre: DEBUG MARKER: SKIP: replay-single test_111c needs >= 2 MDTs
[ 4800.978999] Lustre: DEBUG MARKER: == replay-single test 111d: DNE: unlink striped dir, uncommit on MDT2, fail client/MDT1/MDT2 ========================================================== 10:08:24 (1713535704)
[ 4801.408188] Lustre: DEBUG MARKER: SKIP: replay-single test_111d needs >= 2 MDTs
[ 4803.618999] Lustre: DEBUG MARKER: == replay-single test 111e: DNE: unlink striped dir, uncommit on MDT2, fail MDT1/MDT2 ========================================================== 10:08:27 (1713535707)
[ 4804.016615] Lustre: DEBUG MARKER: SKIP: replay-single test_111e needs >= 2 MDTs
[ 4806.215801] Lustre: DEBUG MARKER: == replay-single test 111f: DNE: unlink striped dir, uncommit on MDT1, fail MDT1/MDT2 ========================================================== 10:08:29 (1713535709)
[ 4806.606105] Lustre: DEBUG MARKER: SKIP: replay-single test_111f needs >= 2 MDTs
[ 4808.668668] Lustre: DEBUG MARKER: == replay-single test 111g: DNE: unlink striped dir, fail MDT1/MDT2 ========================================================== 10:08:32 (1713535712)
[ 4809.038674] Lustre: DEBUG MARKER: SKIP: replay-single test_111g needs >= 2 MDTs
[ 4811.080931] Lustre: DEBUG MARKER: == replay-single test 112a: DNE: cross MDT rename, fail MDT1 ========================================================== 10:08:34 (1713535714)
[ 4811.474229] Lustre: DEBUG MARKER: SKIP: replay-single test_112a needs >= 4 MDTs
[ 4813.584646] Lustre: DEBUG MARKER: == replay-single test 112b: DNE: cross MDT rename, fail MDT2 ========================================================== 10:08:37 (1713535717)
[ 4813.981760] Lustre: DEBUG MARKER: SKIP: replay-single test_112b needs >= 4 MDTs
[ 4816.043461] Lustre: DEBUG MARKER: == replay-single test 112c: DNE: cross MDT rename, fail MDT3 ========================================================== 10:08:39 (1713535719)
[ 4816.392329] Lustre: DEBUG MARKER: SKIP: replay-single test_112c needs >= 4 MDTs
[ 4819.048258] Lustre: DEBUG MARKER: == replay-single test 112d: DNE: cross MDT rename, fail MDT4 ========================================================== 10:08:42 (1713535722)
[ 4819.588981] Lustre: DEBUG MARKER: SKIP: replay-single test_112d needs >= 4 MDTs
[ 4822.554378] Lustre: DEBUG MARKER: == replay-single test 112e: DNE: cross MDT rename, fail MDT1 and MDT2 ========================================================== 10:08:45 (1713535725)
[ 4822.966666] Lustre: DEBUG MARKER: SKIP: replay-single test_112e needs >= 4 MDTs
[ 4825.895750] Lustre: DEBUG MARKER: == replay-single test 112f: DNE: cross MDT rename, fail MDT1 and MDT3 ========================================================== 10:08:49 (1713535729)
[ 4826.467355] Lustre: DEBUG MARKER: SKIP: replay-single test_112f needs >= 4 MDTs
[ 4829.461796] Lustre: DEBUG MARKER: == replay-single test 112g: DNE: cross MDT rename, fail MDT1 and MDT4 ========================================================== 10:08:52 (1713535732)
[ 4830.046637] Lustre: DEBUG MARKER: SKIP: replay-single test_112g needs >= 4 MDTs
[ 4832.996418] Lustre: DEBUG MARKER: == replay-single test 112h: DNE: cross MDT rename, fail MDT2 and MDT3 ========================================================== 10:08:56 (1713535736)
[ 4833.542254] Lustre: DEBUG MARKER: SKIP: replay-single test_112h needs >= 4 MDTs
[ 4836.225994] Lustre: DEBUG MARKER: == replay-single test 112i: DNE: cross MDT rename, fail MDT2 and MDT4 ========================================================== 10:08:59 (1713535739)
[ 4836.769339] Lustre: DEBUG MARKER: SKIP: replay-single test_112i needs >= 4 MDTs
[ 4839.365813] Lustre: DEBUG MARKER: == replay-single test 112j: DNE: cross MDT rename, fail MDT3 and MDT4 ========================================================== 10:09:02 (1713535742)
[ 4839.883084] Lustre: DEBUG MARKER: SKIP: replay-single test_112j needs >= 4 MDTs
[ 4842.292435] Lustre: DEBUG MARKER: == replay-single test 112k: DNE: cross MDT rename, fail MDT1,MDT2,MDT3 ========================================================== 10:09:05 (1713535745)
[ 4842.624812] Lustre: DEBUG MARKER: SKIP: replay-single test_112k needs >= 4 MDTs
[ 4844.914356] Lustre: DEBUG MARKER: == replay-single test 112l: DNE: cross MDT rename, fail MDT1,MDT2,MDT4 ========================================================== 10:09:08 (1713535748)
[ 4845.507985] Lustre: DEBUG MARKER: SKIP: replay-single test_112l needs >= 4 MDTs
[ 4847.542871] Lustre: DEBUG MARKER: == replay-single test 112m: DNE: cross MDT rename, fail MDT1,MDT3,MDT4 ========================================================== 10:09:11 (1713535751)
[ 4847.995390] Lustre: DEBUG MARKER: SKIP: replay-single test_112m needs >= 4 MDTs
[ 4850.779873] Lustre: DEBUG MARKER: == replay-single test 112n: DNE: cross MDT rename, fail MDT2,MDT3,MDT4 ========================================================== 10:09:14 (1713535754)
[ 4851.287476] Lustre: DEBUG MARKER: SKIP: replay-single test_112n needs >= 4 MDTs
[ 4853.680381] Lustre: DEBUG MARKER: == replay-single test 115: failover for create/unlink striped directory ========================================================== 10:09:17 (1713535757)
[ 4854.188709] Lustre: DEBUG MARKER: SKIP: replay-single test_115 needs >= 2 MDTs
[ 4857.074590] Lustre: DEBUG MARKER: == replay-single test 116a: large update log master MDT recovery ========================================================== 10:09:20 (1713535760)
[ 4857.494252] Lustre: DEBUG MARKER: SKIP: replay-single test_116a needs >= 2 MDTs
[ 4860.010290] Lustre: DEBUG MARKER: == replay-single test 116b: large update log slave MDT recovery ========================================================== 10:09:23 (1713535763)
[ 4860.397617] Lustre: DEBUG MARKER: SKIP: replay-single test_116b needs >= 2 MDTs
[ 4862.270104] Lustre: DEBUG MARKER: == replay-single test 117: DNE: cross MDT unlink, fail MDT1 and MDT2 ========================================================== 10:09:25 (1713535765)
[ 4862.605044] Lustre: DEBUG MARKER: SKIP: replay-single test_117 needs >= 4 MDTs
[ 4864.477428] Lustre: DEBUG MARKER: == replay-single test 118: invalidate osp update will not cause update log corruption ========================================================== 10:09:28 (1713535768)
[ 4864.818380] Lustre: DEBUG MARKER: SKIP: replay-single test_118 needs >= 2 MDTs
[ 4866.652346] Lustre: DEBUG MARKER: == replay-single test 119: timeout of normal replay does not cause DNE replay fails ========================================================== 10:09:30 (1713535770)
[ 4866.979514] Lustre: DEBUG MARKER: SKIP: replay-single test_119 needs >= 2 MDTs
[ 4868.846364] Lustre: DEBUG MARKER: == replay-single test 120: DNE fail abort should stop both normal and DNE replay ========================================================== 10:09:32 (1713535772)
[ 4869.215970] Lustre: DEBUG MARKER: SKIP: replay-single test_120 needs >= 2 MDTs
[ 4871.095721] Lustre: DEBUG MARKER: == replay-single test 121: lock replay timed out and race ========================================================== 10:09:34 (1713535774)
[ 4898.429237] Lustre: DEBUG MARKER: == replay-single test 130a: DoM file create (setstripe) replay ========================================================== 10:10:01 (1713535801)
[ 4899.901459] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 4916.761237] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) @@@ status 301, old was 0  req@ffff8800a8832a00 x1796767428907904/t433791696900(433791696900) o101->lustre-MDT0000-mdc-ffff8800aa159000@192.168.201.119@tcp:12/10 lens 536/608 e 0 to 0 dl 1713535836 ref 2 fl Interpret:RQU/204/0 rc 301/301 job:'lfs.0' uid:0 gid:0
[ 4916.775229] LustreError: 1823:0:(client.c:3294:ptlrpc_replay_interpret()) Skipped 153 previous similar messages
[ 4917.698451] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4918.294760] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4924.430667] Lustre: DEBUG MARKER: == replay-single test 130b: DoM file create (inherited) replay ========================================================== 10:10:27 (1713535827)
[ 4925.951511] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 4943.628918] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4944.259100] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4950.401470] Lustre: DEBUG MARKER: == replay-single test 131a: DoM file write lock replay === 10:10:53 (1713535853)
[ 4951.935593] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 4969.848845] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4970.415697] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 4974.930375] Lustre: DEBUG MARKER: SKIP: replay-single test_131b skipping excluded test 131b
[ 4976.970390] Lustre: DEBUG MARKER: == replay-single test 132a: PFL new component instantiate replay ========================================================== 10:11:20 (1713535880)
[ 4978.505824] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 4994.987261] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 4995.432982] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 5000.740812] Lustre: DEBUG MARKER: == replay-single test 133: check resend of ongoing requests for lwp during failover ========================================================== 10:11:44 (1713535904)
[ 5001.197246] Lustre: DEBUG MARKER: SKIP: replay-single test_133 needs >= 2 MDTs
[ 5003.507980] Lustre: DEBUG MARKER: == replay-single test 134: replay creation of a file created in a pool ========================================================== 10:11:47 (1713535907)
[ 5009.072058] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-MDT0000
[ 5022.544460] Lustre: lustre-MDT0000-mdc-ffff8800aa159000: Connection to lustre-MDT0000 (at 192.168.201.119@tcp) was lost; in progress operations using this service will wait for recovery to complete
[ 5022.552619] Lustre: Skipped 13 previous similar messages
[ 5026.542254] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
[ 5027.377428] Lustre: DEBUG MARKER: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
[ 5039.760580] Lustre: DEBUG MARKER: == replay-single test 135: Server failure in lock replay phase ========================================================== 10:12:23 (1713535943)
[ 5042.635595] Lustre: DEBUG MARKER: local REPLAY BARRIER on lustre-OST0000
[ 5051.112933] Lustre: DEBUG MARKER: oleg119-client.virtnet: executing wait_import_state_mount REPLAY_LOCKS osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
[ 5051.690176] Lustre: DEBUG MARKER: osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in REPLAY_LOCKS state after 0 sec
[ 5052.226044] Lustre: DEBUG MARKER: replay-single test_135: @@@@@@ FAIL: Unexpected sync success